Software Testing

February 25, 2011

Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to limited understanding of the principles of software. The difficulty in software testing stems from the complexity of software.

Software: a program of even moderate complexity cannot be completely tested.

Testing: the process of executing a program with the intent of finding errors.

The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can be used as a generic metric as well. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality.


  • Introduction
  • Key Concepts
  • Taxonomy
  • Test Automation
  • When to stop testing?


Software testing is the process of executing a program or system with the intent of finding errors. Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails: most physical systems fail in a fixed (and reasonably small) set of ways, whereas software can fail in many strange and unexpected ways.

Most defects in software are design errors, not manufacturing defects. Once the software is shipped, these design defects or bugs stay hidden and remain dormant until activated. Bugs will almost always exist in any software module of moderate size: not because programmers are careless or irresponsible, but because software complexity is generally intractable and humans have only a limited ability to manage it. Because software and digital systems are not continuous, testing boundary values alone is not sufficient to guarantee correctness.

Ideally, all possible values would need to be tested and verified, but complete testing is infeasible. Exhaustively testing even a simple program that adds two 32-bit integers (yielding 2^64 distinct test cases) would take hundreds of years, even at a rate of a billion tests per second. If inputs from the real world are involved, the problem gets worse, because timing, unpredictable environmental effects, and human interactions all become possible input parameters.
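The scale of that arithmetic is easy to check with a few lines of Python; the test rate below is an illustrative assumption, not a measured figure.

```python
# Exhaustive testing of a program that adds two 32-bit integers:
# each input has 2**32 possible values, so there are
# 2**32 * 2**32 = 2**64 distinct test cases.
cases = 2 ** 64

# Even at an optimistic rate of one billion tests per second:
rate = 1_000_000_000                  # tests per second (assumed)
seconds_per_year = 60 * 60 * 24 * 365
years = cases / (rate * seconds_per_year)

print(f"{cases:.3e} cases, roughly {years:.0f} years")
```

At a billion tests per second this still comes out to several hundred years, which is why exhaustive testing is dismissed as infeasible even for trivial programs.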

A further complication has to do with the dynamic nature of programs. If a failure occurs during preliminary testing and the code is changed, the software may now work for a test case that it didn’t work for previously. But its behavior on pre-error test cases that it passed before can no longer be guaranteed. To account for this possibility, testing should be restarted. The expense of doing this is often excessive.

An interesting analogy, known as the "Pesticide Paradox," parallels the difficulty of software testing with that of pesticides: every method used to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective. Software complexity (and therefore that of bugs) grows to the limits of our ability to manage that complexity. Society seems unwilling to limit complexity because we all want that extra bell, whistle, and feature interaction. Thus, users always push us to the complexity barrier, and how closely we can approach that barrier is largely determined by the strength of the techniques we can wield against ever more complex and subtle bugs. Testing is an integral part of software development, broadly deployed in every phase of the development cycle. Typically, more than 50% of the development time is spent on testing. Testing is usually performed for the following purposes:

  • To improve quality
  • For Verification & Validation (V&V)
  • For reliability estimation

To improve quality:

Bugs can cause huge losses. Bugs in critical systems have caused airplane crashes, allowed space shuttle missions to go awry, halted trading on the stock market, and worse. Bugs can kill. Bugs can cause disasters. In a computerized, embedded world, the quality and reliability of software can be a matter of life and death.

Quality means ‘the conformance to the specified design requirement’. Being correct, the minimum requirement of quality, means performing as required under specified circumstances.

Debugging, a narrow view of software testing, is performed heavily by programmers to find design defects. The imperfection of human nature makes it almost impossible to get a moderately complex program correct the first time. Finding problems and getting them fixed is the purpose of debugging during the programming phase.

In general, the characteristics of a product can be divided into three types:

  • Operational characteristics.
  • Transition characteristics.
  • Revision characteristics.

Operational characteristics:

Operational characteristics specify the requirements during operation and usage. They can be divided as follows:

Correctness: The extent to which the software meets the specification.

Usability: The ease of learning and using the software. For example, the usability of a well-designed GUI is very high compared to a menu-driven interface.

Integrity: The software should not have side effects (for example, invoking it should not prevent another application from running).

Efficiency: Effective use of storage space, fast execution time, and so on.

Reliability: The software should be defect-free and should not fail during the operation.

Safety: The software should not be hazardous to environment/life.

Security: The software should protect against unauthorized access and should not have ill effects on data or hardware.

Transition characteristics:

Transition characteristics specify the requirements for using the software in other hardware or operating-system environments. They can be divided as follows:

Portability: The ease with which software can be transferred from one platform to another (for example, from Windows NT to UNIX) without changing its functionality.

Reusability: If portions of the software can be used in some other application with little or no modifications, the software is said to be Reusable.

Interoperability: The ability of the software to exchange information with another system and make use of that information transparently.

Revision characteristics:

Revision characteristics specify the requirements for making changes to the software easy. They can be divided as follows:

Maintainability: The ease of maintenance, so that the time required to remove defects from the software is minimal.

Testability: The ease of testing, so that the time required to test the software is minimal.

Flexibility: The ease of modification, so that the time required to make modifications is minimal.

Scalability: The ease of increasing the capacity or performance of the software when the application demands it.

Example: a database application that gives good response time for 10 users should scale to 100 users if required.

Extensibility: The ease with which the functionality of the software can be enhanced.

Modularity: If the software is divided into separate, independent parts (called modules) that can be modified and tested separately, it has high modularity.

For Verification & Validation (V&V):

Testing is heavily used as a tool in the V&V process. Testers can make claims based on interpretations of the testing results: either the product works under certain situations, or it does not. We cannot test quality directly, but we can test related factors to make quality visible. Quality has three sets of factors: functionality, engineering, and adaptability. These can be thought of as dimensions of the software quality space, and each dimension may be broken down into its component factors and considerations at successively lower levels of detail. Table 1 illustrates some of the most frequently cited quality considerations.

For reliability estimation:

Software reliability has important relations with many aspects of software, including its structure and the amount of testing it has been subjected to. Testing can serve as a statistical sampling method to gather failure data for reliability estimation. Software testing is not mature: we are still using testing techniques invented 20-30 years ago, some of which are crafted methods or heuristics rather than good engineering methods. Software testing can be costly, but not testing software is even more expensive, especially where human life is at stake. Solving the software-testing problem is no easier than solving the Turing halting problem.
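As a minimal sketch of how failure data gathered during testing can feed a reliability estimate, the snippet below assumes a constant-failure-rate (exponential) model; the failure counts and hours are made-up illustrative numbers, not real data.

```python
import math

# Made-up failure data: five failures observed over 160 test-hours.
failures = 5
total_test_hours = 160.0

# Point estimate of the failure rate, assuming failures arrive at a
# constant rate over the observed interval (a strong assumption).
failure_rate = failures / total_test_hours   # failures per hour
mtbf = 1.0 / failure_rate                    # mean time between failures

def reliability(t_hours):
    """Probability of running t hours without failure (exponential model)."""
    return math.exp(-failure_rate * t_hours)

print(f"MTBF = {mtbf:.0f} h, R(24 h) = {reliability(24):.2f}")
```

More sophisticated reliability-growth models exist, but even this crude sampling view shows how test results translate into a quantified, if approximate, statement about quality.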

Purpose Of Testing:

1. To check for the existence of defects or errors in a program or project against predefined requirements.

2. To improve the quality of the product.

3. To show that the software works and that its quality claims hold.

4. To deliver software that is as close to bug-free as possible.

5. To reduce the number of bugs in the code.

Goals For Testing:

There are two types of goals of testing.

1. Bug Prevention.

2. Bug Discovery.

Testing and test design, as parts of quality assurance, should also focus on bug prevention. Although testing and test design cannot themselves prevent bugs, they should be able to discover symptoms caused by bugs, and tests should provide clear diagnoses so that bugs can be easily corrected. Bug prevention is testing's first goal: a prevented bug is better than a detected and corrected one, because a prevented bug leaves no code to correct. More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking required to create a useful test can discover and eliminate bugs before they are coded; indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, design, coding, and beyond.

Phases Of Testing:

There are five phases of testing.

  • Phase 0: Testing Is the Same as Debugging.
  • Phase 1: The Software Works.
  • Phase 2: The Software Doesn't Work.
  • Phase 3: Test for Risk Reduction.
  • Phase 4: A State of Mind.

Phase 0: Testing Is the Same as Debugging

In Phase 0 thinking, there is no difference between testing and debugging: other than supporting debugging, testing has no purpose.

Phase 1: The Software Works

Phase 1 thinking represented progress because it recognized the distinction between testing and debugging. However, its objective of showing that the software works is unachievable.

Phase 2: The Software Doesn't Work

The difference between Phase 1 and Phase 2 thinking is illustrated by the difference between bookkeepers and auditors. The bookkeeper's goal is to show that the books balance; the auditor's goal is to show that, despite the appearance of balance, the bookkeeper has embezzled. Phase 2 thinking leads to strong, revealing tests.

Phase 3: Test for Risk Reduction

In this phase, if a test passes, the product's quality does not change, but our perception of that quality does. Testing, whether it passes or fails, reduces our perceived risk about the software.

Phase 4: A State of Mind

Testing is not an act but a mental discipline, one that results in low-risk software without much testing effort.


Testing involves several classic dichotomies:

  1. Testing versus Debugging.
  2. Function versus Structure.
  3. The Designer versus The Tester.
  4. Modularity versus Efficiency.
  5. Small versus Large.
  6. The Builder versus The Buyer.


There is a plethora of testing methods and techniques, serving multiple purposes in different life-cycle phases. Classified by purpose, software testing can be divided into correctness testing, performance testing, reliability testing, and security testing. Classified by life-cycle phase, it falls into requirements-phase testing, design-phase testing, programming-phase testing, evaluation of test results, installation-phase testing, acceptance testing, and maintenance testing. By scope, software testing can be categorized as unit testing, component testing, integration testing, and system testing.

Correctness Testing:

Correctness is the minimum requirement of software and the essential purpose of testing. The tester may or may not know the internal details of the software module under test, such as its control flow and data flow; therefore, either a white-box or a black-box point of view can be taken when testing. Note that the black-box and white-box ideas are not limited to correctness testing.

  • White-box Testing:

White-box testing deals with the internal logic and structure of the program code; it is also called "glass-box" testing. The methods used to perform white-box testing are as follows:

1. Statement Testing.

2. Decision Testing.

3. Condition Testing.


Advantages:

  1. Testing is effective because of the tester's internal knowledge of the program code.
  2. That knowledge also allows the code to be optimized.


Disadvantages:

  1. White-box testing is expensive to perform, since the tester must know the internal structure of the code.
  2. It is not possible to examine every path, so hidden errors that cause application failures may remain undetected.
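As a rough illustration of how the white-box methods above differ, consider a hypothetical function with a single branch (the function and its values are invented for this sketch):

```python
def apply_discount(price, is_member):
    """Hypothetical function under test: members get 10% off."""
    if is_member:
        price = price * 0.9
    return price

# Statement testing: every statement must execute at least once.
# A single test with is_member=True already reaches every statement:
assert apply_discount(100, True) == 90.0

# Decision testing: every decision must take both outcomes, so a
# second case where the condition is False is also required:
assert apply_discount(100, False) == 100

# Condition testing extends this further: each boolean sub-condition
# of a compound decision (e.g. `if is_member and price > 50`) must
# evaluate to both True and False across the test set.
```

The point is that a test suite can achieve 100% statement coverage while still never exercising a decision's False outcome, which is why decision and condition testing are listed as separate, stronger methods.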

  • Black-box Testing:

Black-box testing is done without knowledge of the internal program code. The methods used to perform black-box testing are as follows:

1. Expected Inputs.

2. Boundary Values.

3. Illegal Values.


Advantages:

  1. Black-box testing can be more efficient than white-box testing for exercising large segments of functionality.
  2. The tester needs only the specification, not knowledge of the program's internals.


Disadvantages:

  1. Black-box testing takes a long time if each and every input is to be tested.
  2. Test cases are difficult to design without clear and understandable specifications.
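The three black-box methods listed above can be sketched against a hypothetical validator whose specification says "accept integers from 0 to 100"; the function and values are assumptions for illustration only.

```python
def is_valid_percentage(value):
    """Hypothetical spec: accept integers in the range 0..100."""
    return isinstance(value, int) and 0 <= value <= 100

# 1. Expected inputs: typical values well inside the specified range.
assert is_valid_percentage(50)

# 2. Boundary values: the edges of the range and their neighbours,
#    where off-by-one errors tend to hide.
assert is_valid_percentage(0) and is_valid_percentage(100)
assert not is_valid_percentage(-1) and not is_valid_percentage(101)

# 3. Illegal values: inputs outside the specified type or domain.
assert not is_valid_percentage("50")
assert not is_valid_percentage(3.5)
```

Notice that every test case is derived from the specification alone; nothing about the function's internal structure was needed, which is exactly the black-box stance.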

Consequences Of Bugs:

The consequences of bugs can range from mild to catastrophic. Programs are written for humans, so bug consequences should be measured in human terms rather than in machine terms.

The various bug consequences are as follows

  1. Mild.
  2. Moderate.
  3. Annoying.
  4. Serious.
  5. Disturbing.
  6. Very Serious.
  7. Extreme.
  8. Intolerable.
  9. Catastrophic.
  10. Infectious.

Different Kinds Of Bugs:

The different kinds of bugs are:

1. Requirements, Feature and Functionality Bugs.

2. Structural Bugs.

3. Coding and Documentation Bugs.

4. Data Bugs.

5. Interface Bugs.

6. Integration Bugs.

7. System Bugs.

8. Test Bugs.

Test Automation:

Software testing can be very costly, and automation is a good way to cut down its time and cost. However, software testing tools and techniques usually suffer from a lack of generic applicability and scalability. The reason is straightforward: to automate the process, we need ways to generate oracles from the specification and to generate test cases to check the target software against those oracles, and both are hard to construct in the general case.

When To Stop Testing?

Testing is potentially endless: we cannot test until all defects are unearthed and removed, for that is simply impossible. At some point we have to stop testing and ship the software; the question is when. Realistically, testing is a trade-off between budget, time, and quality. The optimistic stopping rule is to stop testing when either the reliability meets the requirement or the benefit of continued testing cannot justify its cost.
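That optimistic stopping rule can be sketched as a small decision function; all the thresholds and costs below are invented illustrative numbers, not figures from any real project.

```python
# Illustrative parameters (assumptions, not real data):
required_intensity = 0.01   # max allowed failures per test-hour
cost_per_cycle = 500.0      # cost of running one more test cycle
benefit_per_bug = 2000.0    # saving per field failure prevented

def should_stop(failures, test_hours, expected_bugs_next_cycle):
    """Stop when reliability meets the requirement, or when the
    expected benefit of another cycle no longer justifies its cost."""
    intensity = failures / test_hours
    meets_requirement = intensity <= required_intensity
    worth_continuing = expected_bugs_next_cycle * benefit_per_bug > cost_per_cycle
    return meets_requirement or not worth_continuing

# 2 failures in 400 hours meets the 0.01 failures/hour requirement:
print(should_stop(failures=2, test_hours=400, expected_bugs_next_cycle=0.1))
```

In practice the inputs to such a rule (failure intensity, expected residual bugs) come from reliability models like the one discussed earlier, which is why the two topics are usually treated together.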

About the Author: Nataliia Vasylyna