How do You Know When to Stop Testing?

by Nataliia Vasylyna | August 13, 2011 10:00 am

Note: the article was updated in August 2018.

How many test runs are needed? How do you understand when it is time to stop testing? The most obvious answer that comes to mind is “when all the bugs are found”. Really?

The trouble is that this may never happen. No matter how good a QA team[1] is or how much time it has spent on testing, new bugs keep appearing. Still, sooner or later, the decision to stop testing must be made. But how do you avoid a mistake? Are there any rules to follow?

Indeed, there are several common stopping criteria. They differ according to the peculiarities of each project, but we are going to cover the most widespread of them.

The time allocated for testing runs out

When the time for testing is running out, it is definitely a moment to wrap up QA activities. By this point, the testing team has usually executed the majority of test cases and detected the most critical bugs. Sometimes it is possible to postpone the release because testing is unfinished; mostly this happens when the risk of missing crucial bugs is too high.

A certain test coverage is reached

In general, testers try to ensure the widest test coverage[2] possible, but the time and budget allocated for the QA stage are limited. For example, during mobile application testing[3], a QA team cannot run tests on all existing devices because there are far too many of them. In this case, a target test coverage rate is defined, e.g., 90% (the number of test cases executed divided by the total number of test cases).
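The coverage-based exit check described above can be sketched in a few lines. This is a minimal illustration, assuming a simple "executed / total" definition of coverage; the 90% threshold and the function names are illustrative, not a prescribed tool.

```python
# Hypothetical sketch: coverage rate as the share of executed test cases
# out of the total planned, compared against an agreed exit threshold.

def coverage_rate(executed: int, total: int) -> float:
    """Return the fraction of planned test cases that were executed."""
    if total <= 0:
        raise ValueError("total number of test cases must be positive")
    return executed / total

def coverage_goal_met(executed: int, total: int, threshold: float = 0.90) -> bool:
    """True when achieved coverage reaches the agreed exit threshold."""
    return coverage_rate(executed, total) >= threshold

# Example: 450 of 500 planned test cases executed gives 90% coverage,
# so the (assumed) 90% exit criterion is satisfied.
print(coverage_goal_met(450, 500))  # True
```

In practice the threshold is agreed on in the test plan, and "total" may count device/OS combinations rather than plain test cases.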

The budget is exhausted

A project budget has a limit. A minor undetected bug may cost less than additional test runs, but the opposite can also be true. So this question requires careful consideration in order to minimize potential spending.

The rate of failed tests is acceptable

Some test cases pass, and some fail. Setting aside severe and crucial bugs, you can define a share of failed test cases that won't noticeably affect the quality of the product. For example, 7% of test cases may be allowed to fail, provided the bugs they reveal are of low priority. This approach is a kind of compromise between quality and price.
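The "failure budget" idea above can be sketched as a simple exit check: stop only if the failure rate stays under the agreed limit and every failure traces back to a low-priority bug. The 7% figure, the dataclass fields, and the priority labels are assumptions for illustration.

```python
# Hypothetical sketch of a failure-budget exit criterion: testing may stop
# when at most 7% of test cases fail AND every failure maps to a
# low-priority bug. All names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool
    bug_priority: str = ""  # "low", "medium", "high" for failed cases

def may_stop_testing(results, max_fail_rate: float = 0.07) -> bool:
    failed = [r for r in results if not r.passed]
    fail_rate = len(failed) / len(results)
    only_low_priority = all(r.bug_priority == "low" for r in failed)
    return fail_rate <= max_fail_rate and only_low_priority

# 95 passing cases plus 5 low-priority failures: 5% failure rate, under 7%.
results = [TestResult(f"case-{i}", passed=True) for i in range(95)]
results += [TestResult(f"fail-{i}", passed=False, bug_priority="low") for i in range(5)]
print(may_stop_testing(results))  # True
```

A single high-priority failure flips the answer to False regardless of the rate, which mirrors the article's caveat about severe and crucial bugs.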

Functional testing is completed

Of course, functional testing[4] is one of the most important types. An undetected functional bug costs much more than, for example, a syntactic one. When resources are limited, it is rational to run functional testing first and then decide whether to continue or stop. But this approach is risky, because security or performance testing may still detect really severe issues.

Too many severe bugs and planned modifications

If the product is riddled with bugs and problems, the question arises whether it is rational to fix all of them. This situation is also known as “the Dead Horse Heuristic”. In this case, product owners often choose to make major modifications or to redevelop some parts from scratch instead of fixing the issues.

So, how do you know when to stop software testing? A good practice is to combine several of the above-mentioned criteria and fix the exit conditions in the test plan.
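Combining several of the criteria discussed above might look like the sketch below: schedule and budget act as hard limits, while coverage and failure rate form a joint quality gate. Every threshold and parameter name here is an illustrative assumption, not a standard.

```python
# Hypothetical combined exit check for a test plan. Testing stops when the
# schedule or budget is exhausted (hard limits), or when both the coverage
# target and the failure-rate budget are met. Thresholds are illustrative.

def should_stop(days_left: int, budget_left: float,
                coverage: float, fail_rate: float,
                min_coverage: float = 0.90, max_fail_rate: float = 0.07) -> bool:
    if days_left <= 0 or budget_left <= 0:
        return True  # hard limit reached: time or money has run out
    return coverage >= min_coverage and fail_rate <= max_fail_rate

# 92% coverage and a 5% failure rate meet both (assumed) quality targets.
print(should_stop(days_left=3, budget_left=1000.0, coverage=0.92, fail_rate=0.05))  # True
```

The point is not the exact numbers but that the decision is written down in advance, so "stop" is a planned outcome rather than an argument at release time.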

Related Posts:

Endnotes:
  1. QA team: https://blog.qatestlab.com/2019/05/07/building-qa-team/
  2. test coverage: https://blog.qatestlab.com/2016/06/13/insufficient-test-coverage/
  3. mobile application testing: https://qatestlab.com/solutions/by-focus-area/mobile-applications/
  4. functional testing: https://qatestlab.com/services/manual-testing/functional-testing/
  5. Myths and Facts: Purpose of Software Testing: https://blog.qatestlab.com/2011/05/06/myths-and-rakes-the-purpose-of-testing-is-to-find-errors/
  6. How important is Software QA Testing?: https://blog.qatestlab.com/2011/03/01/how-important-is-software-qa-testing/
  7. Software Development (Doesn’t) Need Independent QA: https://blog.qatestlab.com/2024/11/14/software-development-doesnt-need-independent-qa/

Source URL: https://blog.qatestlab.com/2011/08/13/how-do-you-know-when-to-stop-testing/