by Nataliia Vasylyna | February 8, 2012 10:00 am
There are many projects where all performance testing[1] is conducted on the live environment. This is not uncommon, but it adds its own set of additional considerations, such as:
This type of software testing is generally scheduled outside standard working hours in order to minimize external effects on the test results and the impact of the testing on the live environment.
In short, you should try to make the performance test environment as close a replica of the live environment as possible within existing constraints. This requirement differs from unit testing, where the emphasis is on ensuring that the application works accurately. The misconception often persists that a minimal deployment will be appropriate for both performance and functional testing[2].
For instance, one major U.K. bank has a test lab set up to replicate one of its biggest single branches. This environment includes more than 150 workstations, each configured to represent a single teller, with all the software that would be part of a standard desktop build. On top of this is deployed test automation[3] software, providing an accurate modeling environment for functional and performance testing projects.
Setting up a performance test environment is seldom a trivial task, and it may take many weeks or even months to complete. Consequently, you need to plan a realistic amount of time for this activity.
Having a complete understanding of the total test environment at the outset makes possible more effective test design and planning, and helps you to recognize testing challenges in the project as early as possible.
This process may need to be revisited from time to time throughout the life cycle of the project.
To sum up, there are three levels of preference in designing a test environment:
This is perhaps the most common case: the test environment is adequate to deploy the application, but the number, tier deployment, and specification of servers differ significantly from the live environment.
This is often achievable. The significant consideration is that, from a bare-metal perspective, the specification of the servers at every tier must match that of the live environment. This permits an accurate assessment of the capacity limits of individual servers, providing a reliable model for horizontal scaling.
This is the ideal, but it is often difficult to achieve for practical and commercial reasons.
Source URL: https://blog.qatestlab.com/2012/02/08/designing-an-appropriate-performance-test-environment-part-ii/
Copyright ©2024 QATestLab Blog unless otherwise noted.