by Nataliia Vasylyna | August 13, 2011 10:00 am
Note: the article was updated in August 2018.
How many test runs are needed? How do you know when to stop testing? The most obvious answer that comes to mind is “when all the bugs are found”. Really?
The problem is that this may never happen. No matter how good a QA team is or how much time it spends on testing, new bugs keep appearing. Still, sooner or later, the decision to stop testing has to be made. But how do you avoid making it too early or too late? Are there any rules to follow?
Indeed, there are several common criteria, and they differ depending on the peculiarities of each project. Below we cover the most common of them.
When the time allocated for testing is running out, it is usually the moment to stop the QA process. By this point, the testing team has typically executed the majority of test cases and detected the most critical bugs. Sometimes the release is postponed because testing is unfinished; this mostly happens when the risk of missing crucial bugs is too high.
In general, testers try to achieve as wide test coverage as possible, but the time and budget allocated for the QA stage are limited. For example, during mobile application testing, a QA team cannot run tests on all existing devices because there are too many of them. In this case, a target coverage rate is defined, e.g., 90% (the number of executed test cases divided by the total number of planned test cases).
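As a rough illustration, such a coverage-based exit criterion can be checked directly from test-management data. The sketch below is a minimal Python example; the 90% threshold and the numbers are assumptions for demonstration, not values prescribed by any tool or standard.

```python
# Minimal sketch of a coverage-based exit criterion.
# The 0.90 threshold and the sample figures are illustrative assumptions.

def coverage_rate(executed_cases: int, total_cases: int) -> float:
    """Coverage rate = executed test cases / total planned test cases."""
    if total_cases == 0:
        raise ValueError("No test cases planned")
    return executed_cases / total_cases

TARGET_COVERAGE = 0.90  # e.g., 90% of planned cases must be executed

rate = coverage_rate(executed_cases=450, total_cases=500)
print(f"Coverage: {rate:.0%}")                      # Coverage: 90%
print("Exit criterion met:", rate >= TARGET_COVERAGE)  # True
```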
A project budget has a limit. A minor undiscovered bug may cost less than additional test runs, but the opposite can also be true. So this question requires careful consideration in order to minimize potential spending.
Some test cases pass and some fail. Leaving severe and critical bugs aside, you can define a share of failed test cases that will not noticeably affect the quality of the product. For example, up to 7% of test cases may be allowed to fail, as long as the failures correspond only to low-priority bugs. This approach is a compromise between quality and cost.
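A hedged sketch of such a pass-rate criterion might look like the following; the 7% limit and the priority labels are only illustrative assumptions, and real projects would take them from the test plan.

```python
# Illustrative check: failed test cases are acceptable only if the failure
# rate stays under a threshold and none of the failures is high priority.
# The 7% limit and the priority names are assumptions for this example.

ALLOWED_FAILURE_RATE = 0.07
HIGH_PRIORITIES = {"critical", "high"}

def pass_rate_criterion_met(results):
    """results: list of (passed: bool, priority: str) tuples."""
    failures = [prio for passed, prio in results if not passed]
    failure_rate = len(failures) / len(results)
    only_low_priority = not any(p in HIGH_PRIORITIES for p in failures)
    return failure_rate <= ALLOWED_FAILURE_RATE and only_low_priority

results = [(True, "low")] * 95 + [(False, "low")] * 5
print("Can stop testing:", pass_rate_criterion_met(results))  # True: 5% low-priority failures
```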
Of course, functional testing is one of the most important types. An undetected functional bug costs much more than, for example, a syntactical one. With limited resources, it is rational to run functional testing first and then decide whether to continue or stop. But this approach is risky, because security or performance testing may still reveal really severe issues.
If the product is riddled with bugs and problems, the question is whether it is rational to fix all of them. This situation is also known as “the Dead Horse Heuristic”. In such cases, product owners often choose to make major modifications or rebuild some parts from scratch instead of fixing the issues.
So, how do you know when to stop software testing? A good practice is to combine several of the criteria mentioned above and to define the exit criteria in the test plan, as the sketch below illustrates.
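For instance, a combined exit decision could join the time, coverage, failure-rate, and critical-bug checks into one rule. All thresholds and field names below are assumptions made for the sake of the example, not a prescribed standard.

```python
# Illustrative combination of several exit criteria; thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class TestCycleStatus:
    days_left: int           # remaining time in the testing schedule
    coverage: float          # executed / planned test cases
    failure_rate: float      # failed / executed test cases
    open_critical_bugs: int  # critical bugs still unresolved

def should_stop_testing(s: TestCycleStatus) -> bool:
    return (
        s.days_left <= 0                    # the time allocated for QA is over
        or (s.coverage >= 0.90              # target coverage reached ...
            and s.failure_rate <= 0.07      # ... with an acceptable failure rate
            and s.open_critical_bugs == 0)  # ... and no critical bugs open
    )

status = TestCycleStatus(days_left=3, coverage=0.92, failure_rate=0.05, open_critical_bugs=0)
print("Stop testing:", should_stop_testing(status))  # True
```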