When defects are reported by clients during normal operation or during beta testing, diagnosis testing is typically used to support problem diagnosis: recreating the defect, monitoring program behavior, gathering the necessary data, and analyzing the collected data to identify the software bugs.
Diagnosis testing helps pinpoint the precise location of the underlying faults in the software so that they can be corrected.
A series of test runs may be executed in succession to progressively narrow down the possible defects.
Consequently, highly correlated test runs based on similar scenarios are performed, unlike normal testing, where a wider variety of usage scenarios is exercised.
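The narrowing process described above can be sketched as a simplified, delta-debugging-style reduction. In this sketch, `triggers_defect`, the scenario steps, and the failure condition are all hypothetical placeholders for actually re-running the program under test:

```python
def triggers_defect(scenario):
    # Assumed defect for illustration only: the program fails
    # whenever a scenario contains both "upload" and "retry".
    return "upload" in scenario and "retry" in scenario

def narrow_down(scenario):
    """Repeatedly drop steps that are not needed to reproduce the
    defect, so each successive test run is a close variant of the
    previous one (highly correlated runs on similar scenarios)."""
    reduced = list(scenario)
    changed = True
    while changed:
        changed = False
        for step in list(reduced):
            candidate = [s for s in reduced if s != step]
            if triggers_defect(candidate):   # correlated re-run
                reduced = candidate          # step was irrelevant
                changed = True
                break
    return reduced

failing_run = ["login", "browse", "upload", "retry", "logout"]
print(narrow_down(failing_run))  # prints ['upload', 'retry']
```

Each iteration removes one step and re-runs the test, so the scenarios stay nearly identical from run to run while the defect is localized to the minimal combination of steps that reproduces it.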
Diagnosis testing may also be used to diagnose software bugs detected during in-house software testing.
However, it is used less extensively in that setting than for in-field defects reported by clients.
The main difference between these situations is information availability. For in-house testing, software testers can provide developers with all the test cases needed to recreate the defects. In contrast, actual clients are usually less willing to share detailed usage scenarios and data from when the defects were found.
Consequently, developers rely more on diagnosis testing to obtain the information needed to analyze in-field defects.
In general, the more information we can obtain about a reported defect, the less we need to rely on diagnosis testing.
Diagnosis testing can also help with software bugs detected through other quality assurance activities.
Diagnosis testing is therefore an important testing activity that cuts across many related testing, usage, and quality assurance activities.