Code coverage
kohd KUV-er-ij
A metric measuring what percentage of source code is executed by automated tests.
Code coverage tells you what percentage of your code runs when you execute your test suite. If your tests exercise 80% of your code, you have 80% code coverage. The remaining 20% is untested: paths that have never been verified by automation.
Coverage is measured in several ways. Line coverage counts the percentage of lines executed. Branch coverage counts the percentage of if/else branches taken. Function coverage counts the percentage of functions called. Branch coverage is the most meaningful because a single untested branch can hide a critical bug, even if the line count looks good.
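The gap between line and branch coverage is easy to see in code. In this minimal Python sketch (the function is hypothetical; a tool like coverage.py reports these numbers when branch measurement is enabled), a single test executes every line yet takes only one of the two branches:

```python
def apply_discount(price: int, is_member: bool) -> int:
    # The "if" body only runs for members; there is no explicit else.
    if is_member:
        price -= 10
    return price

# This one test executes every line of the function: 100% line coverage.
assert apply_discount(100, True) == 90

# But the implicit else branch (is_member=False) was never taken, so
# branch coverage is only 50%. A bug in the non-member path would hide
# behind a perfect-looking line count. This second test closes the gap:
assert apply_discount(100, False) == 100
```

This is why a report can show every line green while half the decision paths remain unverified.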
Coverage is a useful signal but a dangerous goal. Teams that chase 100% coverage often write pointless tests that verify implementation details rather than behavior. A test that checks "did this function get called?" adds coverage but not confidence. The right approach is to aim for high coverage (85-95%) through behavior-driven tests that verify what the code does, not how it does it. If you test all the business behaviors, coverage follows naturally.
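The "did this function get called?" trap looks like this in practice. A sketch with a hypothetical checkout function and Python's unittest.mock: both tests add the same coverage, but only the second one pins down behavior a customer would notice:

```python
from unittest.mock import Mock

def checkout(cart, notifier):
    # Sum the cart and notify the customer of the charge.
    total = sum(cart)
    notifier.send(f"charged {total}")
    return total

# Coverage-driven test: proves only that send() was invoked.
# A bug in the total or the message would sail through.
def test_notifier_called():
    notifier = Mock()
    checkout([5, 10], notifier)
    notifier.send.assert_called_once()

# Behavior-driven test: asserts the outcome, not the mechanics.
def test_checkout_total_and_message():
    notifier = Mock()
    assert checkout([5, 10], notifier) == 15
    notifier.send.assert_called_once_with("charged 15")

test_notifier_called()
test_checkout_total_and_message()
```

Both tests light up the same lines in a coverage report; only the second would fail if the charge calculation regressed.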
Examples
A team discovers low coverage hides critical bugs.
The codebase has 45% test coverage. The payment processing module, the most critical code in the system, has 12% coverage. A refactoring introduces a bug that charges customers twice for failed retries. It is not caught because the retry logic has zero test coverage. After the incident, the team prioritizes testing payment flows and reaches 95% coverage on the payment module within two sprints.
A team uses coverage to find untested code paths.
The coverage report shows that the error handling in the API gateway is never tested: every test uses valid inputs. The team writes tests with invalid inputs, expired tokens, malformed JSON, and missing fields. They discover three error handlers that return stack traces to the client instead of safe error messages. Coverage goes up, and real bugs get fixed.
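A sketch of the same idea, using a hypothetical request parser: the happy-path test never executes the except branch, and it is the invalid-input test that both covers the error handler and verifies a safe message comes back:

```python
import json

def parse_order(body: str) -> dict:
    # Error path: return a safe message rather than leaking internals.
    try:
        return {"ok": True, "order": json.loads(body)}
    except json.JSONDecodeError:
        return {"ok": False, "error": "invalid request body"}

# Happy-path tests alone never reach the except branch,
# so the error handler shows up as uncovered in the report.
assert parse_order('{"sku": "A1", "qty": 2}')["ok"] is True

# The malformed-input test is what covers the handler -- and what
# would catch it returning a stack trace to the client instead.
resp = parse_order("{not json")
assert resp == {"ok": False, "error": "invalid request body"}
```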
A developer writes high-coverage but low-value tests.
The developer achieves 98% coverage by testing getters, setters, and trivial constructors. The complex business logic in the pricing engine has coverage because the tests call it, but the assertions only check that 'it does not throw.' The tests verify nothing meaningful. A pricing bug ships to production. The team revises their testing standards: every test must assert on a business outcome.
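A minimal Python illustration of that failure mode, with a hypothetical pricing function: the two tests below produce identical coverage, but only the second would catch a regression in the discount rule:

```python
def quote(price: int, quantity: int) -> int:
    # Business rule: 10% volume discount at 10 or more units.
    total = price * quantity
    if quantity >= 10:
        total = total * 9 // 10
    return total

# Low-value test: executes every line, so coverage looks perfect,
# but it asserts nothing about the price that comes back.
def test_quote_does_not_throw():
    quote(20, 10)

# Behavior test: pins the business outcome. If the threshold or the
# rate regresses, this fails; the test above would still pass.
def test_volume_discount_applied():
    assert quote(20, 10) == 180   # 200 minus the 10% discount
    assert quote(30, 5) == 150    # below the threshold: full price

test_quote_does_not_throw()
test_volume_discount_applied()
```

A testing standard like "every test must assert on a business outcome" is essentially a rule that every test look like the second one, not the first.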
In practice
Read more on the blog
Frequently asked questions
What is a good code coverage percentage?
80-90% is the sweet spot for most projects. Below 70%, you likely have significant untested code paths. Above 95%, you are probably writing tests for trivial code to hit a number. More important than the overall percentage is which code has coverage. 85% overall with 95% on critical business logic is better than 95% overall with 60% on the payment module. Focus coverage on the code that matters most.
Does 100% code coverage mean no bugs?
No. Coverage tells you that code was executed during testing, not that it was tested correctly. A test that calls a function but does not check the return value adds coverage without adding confidence. You can have 100% coverage and still miss bugs from incorrect assertions, missing edge cases, or integration issues between components. Coverage is necessary but not sufficient for quality.
Related terms
Unit test: an automated test that verifies a small, isolated piece of code behaves correctly.
Integration test: an automated test that verifies multiple components or services work correctly together.
CI/CD: continuous integration and continuous deployment, automating code testing and delivery to production.
Code review: the practice of having other developers examine code changes before they are merged.
Technical debt: the accumulated cost of shortcuts and deferred work in a codebase that slows future development.

Want the complete playbook?
Picks and Shovels is the definitive guide to developer marketing. Amazon #1 bestseller with practical strategies from 30 years of marketing to developers.