Test coverage is a measure of the degree to which a test exercises some feature(s) or code.
Test coverage relates the tests produced to the software or features under test such that we can estimate:
1. The degree to which a test exercises the software or features.
2. The software or features which are insufficiently exercised.
3. Given the number of bugs found and the test coverage, the confidence we have in the system’s attributes at any moment.
4. The minimum number of tests which need to be run to provide some level of confidence in the quality of the system.
Test coverage is another example of small boys with hammers, to whom everything appears to be a nail. Thus many papers have been published, full of good advice on code coverage in unit testing.
Is your test coverage getting better, or worse? There are several ways of measuring it.
1. Test coverage by feature. The specification says the system has the following n features plus start-up and shut-down. Do we have a test (set) for every feature plus start-up and shut-down?
2. Test coverage by GUI icon. The user interface has a number of screens, buttons, pull-downs, tabs, menus, etc. Do we have them all listed, and do we have tests which execute every one?
3. Test coverage by instrumentation. Use a code instrumentation tool to instrument a build, and then test that build using the system tests already prepared. The tool output should indicate how much code coverage the system tests achieved. Note that this need not occur for every build once sufficient code coverage is assured.
4. Test coverage by structure. When unit testing you need to be sure you have exercised some minimum part of the code. Testing should include statement coverage, decision (branch) coverage, condition coverage, all-DU-paths coverage, and linear code sequence and jump (LCSAJ) coverage. Beware of anyone claiming “code coverage” when all they are doing is running NCover during the build: they may have filtered out unexercised lines and will at best have exercised all the statements in the unit. Decision, condition, DU-path, and other coverage will probably not have been achieved.
5. Test coverage by scenario. Users have a number of goals which they want to achieve. They achieve them using a number of (parts of) features. In doing so they set up subtle feature interactions, which no other coverage approach will mimic. Use user action logs (if necessary) to validate your proposed scenarios, and user profiles to identify scenario sets. Naturally, use cases form the baseline of such an approach.
6. Test coverage by transition. Typically on web applications, but also in more conventional applications, there are a number of “paths” a user may take to achieve a goal. These paths need to be identified, possibly in the form of a state transition diagram (typically from URL to URL in the case of a web test), such that a minimum number of paths can be identified and traversed. This is something of a hybrid of test coverage by structure and test coverage by scenario, and is invaluable when testing web applications.
7. Test coverage by web script, web page, application, and component. Having identified the risk level of the website, you can then decide the level of coverage necessary to mitigate that risk by selecting the test types.
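The difference between statement and decision coverage in the structural approach above can be seen in a tiny (hypothetical) Python function: a single test can execute every statement while still leaving one branch of an `if` unexercised.

```python
def safe_ratio(numerator, denominator):
    """Return numerator/denominator, substituting 1 for a zero denominator."""
    if denominator == 0:        # decision point: has a True and a False branch
        denominator = 1         # only reached when the condition is True
    return numerator / denominator

# One test executes every statement (the guard fires, then we return),
# so statement coverage is already 100%:
assert safe_ratio(10, 0) == 10.0

# But the False branch of the `if` has never been taken. A second test
# is needed before decision (branch) coverage is complete:
assert safe_ratio(10, 4) == 2.5
```

This is exactly the trap described above: a build report showing “all statements covered” can coexist with untested branches, and the bugs tend to hide in the branches.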
Why does this matter? Because you want to minimize the risk that an untested possibility leaves a fatal bug in the released system. Test coverage cannot be complete, any more than requirements specifications can be. But you can make a good engineering decision on what sort of test coverage you need based on the risk the system poses.
The coverage type(s) you choose (should) relate to the probability of finding bugs, and thus to the degree to which you are minimizing the risk the product poses.
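Test coverage by transition can be mechanized once the state transition diagram is written down. The sketch below, assuming an invented checkout flow (the page names and graph are hypothetical), enumerates every simple URL-to-URL path by depth-first search; the resulting path list is the set a transition-coverage test suite must traverse.

```python
# Hypothetical page-transition graph for a small checkout flow:
# each key is a page (URL); its value lists the pages reachable from it.
TRANSITIONS = {
    "/home":     ["/search", "/cart"],
    "/search":   ["/product"],
    "/product":  ["/cart"],
    "/cart":     ["/checkout"],
    "/checkout": [],
}

def all_paths(graph, start, goal, path=None):
    """Enumerate every simple path from start to goal by depth-first search."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:          # skip pages already on this path (no cycles)
            paths.extend(all_paths(graph, nxt, goal, path))
    return paths

for p in all_paths(TRANSITIONS, "/home", "/checkout"):
    print(" -> ".join(p))
# /home -> /search -> /product -> /cart -> /checkout
# /home -> /cart -> /checkout
```

Each printed path becomes one scripted test; if a path set this small covers every transition at least once, you have a defensible minimum suite for the flow.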