Risk Identification Methods In Software Testing

December 05, 2011

For most software systems, fault distribution is highly uneven, regardless of size, functionality, implementation language, or other characteristics.

Much empirical evidence has accumulated over the years to support the so-called 80:20 principle: roughly 20% of the software elements are responsible for about 80% of the defects.
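To make the 80:20 principle concrete, a small sketch can measure how concentrated faults are across modules. The fault counts below are made-up sample data, not figures from the article:

```python
def pareto_share(fault_counts, top_fraction=0.2):
    """Return the share of total faults found in the top `top_fraction`
    of modules when ranked by fault count."""
    ranked = sorted(fault_counts, reverse=True)
    k = max(1, round(len(ranked) * top_fraction))
    total = sum(ranked)
    return sum(ranked[:k]) / total if total else 0.0

# Hypothetical fault counts for ten modules:
faults = [42, 31, 6, 5, 4, 3, 3, 2, 2, 2]
share = pareto_share(faults)
print(f"Top 20% of modules hold {share:.0%} of the faults")  # here: 73%
```

In this sample, two of the ten modules account for 73 of the 100 recorded faults, close to the 80:20 pattern the principle describes.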

Such problematic elements can usually be characterized by measurement data describing their design, size, complexity, and change history.

Because faults are distributed so unevenly among software elements, risk identification methods are needed to analyze this measurement data, so that inspection, software testing, and other quality assurance activities can be concentrated on the potentially high-defect elements.
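One simple way to turn such measurement data into a prioritized testing list is a weighted risk score per module. The metric names, weights, and module data below are illustrative assumptions, not part of the article:

```python
def risk_score(module, weights):
    """Weighted sum of normalized metrics; a higher score means riskier."""
    return sum(weights[metric] * module[metric] for metric in weights)

# Hypothetical weights over three normalized (0..1) metrics:
weights = {"size_kloc": 0.3, "complexity": 0.4, "recent_changes": 0.3}

modules = [
    {"name": "parser",  "size_kloc": 0.8, "complexity": 0.9, "recent_changes": 0.7},
    {"name": "logger",  "size_kloc": 0.2, "complexity": 0.1, "recent_changes": 0.1},
    {"name": "billing", "size_kloc": 0.6, "complexity": 0.7, "recent_changes": 0.9},
]

# Rank modules from riskiest to safest to focus inspection and testing:
ranked = sorted(modules, key=lambda m: risk_score(m, weights), reverse=True)
for m in ranked:
    print(m["name"], round(risk_score(m, weights), 2))
```

A weighted score is only a baseline; the statistical and machine-learning methods listed below can learn such rankings from historical defect data instead of fixed weights.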

Several risk identification methods are available:

  • tree-based modeling
  • traditional statistical analysis methods
  • neural networks
  • learning algorithms
  • pattern matching methods
  • principal component and discriminant analysis
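
As a toy illustration of the tree-based modeling approach, the sketch below fits a single decision stump: it picks the complexity threshold that best separates high-defect from low-defect modules. The data and the choice of complexity as the splitting metric are illustrative assumptions:

```python
def best_stump(samples):
    """samples: list of (complexity, is_high_defect) pairs. Return the
    threshold that misclassifies the fewest samples when predicting
    'high defect' for complexity >= threshold."""
    best = (None, len(samples) + 1)
    for threshold, _ in samples:
        errors = sum((c >= threshold) != high for c, high in samples)
        if errors < best[1]:
            best = (threshold, errors)
    return best

# Hypothetical training data: (cyclomatic complexity, high-defect flag)
data = [(3, False), (5, False), (8, True), (12, True), (15, True)]
threshold, errors = best_stump(data)
print(f"Flag modules with complexity >= {threshold} ({errors} errors)")
```

Real tree-based models recursively apply such splits over many metrics, but the principle is the same: partition the elements so that high-defect ones land in identifiable leaves.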

These methods can be compared along several dimensions:

  • accuracy
  • availability of tool support
  • ease of result interpretation
  • simplicity
  • stability
  • constructive information
  • early availability
  • guidance for quality improvement

An appropriate risk identification method can then be selected to fit the specific application environment, with the goal of detecting high-risk software elements for focused inspection and software testing.

About Article Author

Nataliia Vasylyna
