The Performance Test versus the Stress Test

There are many varying definitions of the different types of non-functional test.  While we are not attempting to be definitive, Testing Performance would like to state what we mean by the Performance Test and how it compares to the Stress Test.

The Performance Test

Ostensibly, a test in which to measure or determine the performance of an application or an application component.  Of all non-functional testing, this is probably the most commonly executed type of test.

The overall purpose of a performance test is to determine whether the application will perform acceptably, and remain functionally correct, even at high workloads.

The objectives of a performance test would be something along the lines of:

•    Determine if the application can support the expected workload

•    Find and resolve any bottlenecks

It is very difficult (i.e. time consuming and expensive) to build and replicate in a test environment an exact simulation of the workload that the application will be expected to process in production. It is much easier (i.e. quicker and cheaper) to build an approximation of the workload. Often the 80:20 rule is used to persuade project managers that an approximation makes more sense: 80% of the workload is generated by 20% of the functionality. Of course, no two applications are the same; in some we can easily achieve 90:10, in others it is more like 70:30. Careful analysis by the performance tester will help determine the volumetrics for the application and therefore which functions should be included in a performance test.
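The volumetric analysis described above can be sketched in a few lines. This is a minimal illustration, not a real tool: the function names and transaction volumes below are entirely invented, and the 80% coverage target is the rule-of-thumb figure from the text.

```python
# Hypothetical volumetrics: business function -> transactions per hour.
# All figures are invented for illustration.
volumetrics = {
    "login": 50_000,
    "search_catalogue": 120_000,
    "view_item": 90_000,
    "add_to_basket": 25_000,
    "checkout": 10_000,
    "update_profile": 3_000,
    "generate_report": 2_000,
}

def select_for_performance_test(volumes, coverage=0.80):
    """Pick the smallest set of functions (busiest first) that reaches
    the coverage target, i.e. the '20%' side of the 80:20 rule."""
    total = sum(volumes.values())
    selected, covered = [], 0
    for name, count in sorted(volumes.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(name)
        covered += count
        if covered / total >= coverage:
            break
    return selected, covered / total

scripts, share = select_for_performance_test(volumetrics)
print(scripts, f"{share:.0%}")
```

With these invented figures, three of the seven functions already account for well over 80% of the hourly workload, which is the kind of result that justifies scripting only a subset of the application.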

Using the 80:20 rule is, in essence, a compromise in the testing effort.  While some or most performance issues will be detected, performance issues associated with functionality not included in the performance test could still cause problems on release to production.  Further steps can be taken to minimise this possibility, including:

•    Manually keying functions whilst a performance test is being executed

•    Observing and measuring performance, especially database performance, in functional test environments

Once an approximation of the production workload has been determined and agreed, the performance tester works towards building the automation into a workload that can be executed in an orderly and controlled fashion.  The work done early in the performance testing process becomes a good foundation on which to analyse and publish results, ultimately determining whether the application can or cannot meet the specified objectives.
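One part of executing a workload "in an orderly and controlled fashion" is pacing: each virtual user must start its iterations at a fixed interval so that the agreed transaction rate is actually injected. A minimal sketch of the arithmetic, with invented script names, rates and user counts:

```python
def required_pacing(target_per_hour, virtual_users):
    """Seconds between iteration starts, per virtual user, so that
    `virtual_users` together generate `target_per_hour` transactions."""
    return 3600.0 * virtual_users / target_per_hour

workload = [
    # (script name, target transactions/hour, virtual users) - all hypothetical
    ("search_catalogue", 7200, 20),
    ("checkout", 600, 5),
]

for script, rate, users in workload:
    print(f"{script}: pace each user at {required_pacing(rate, users):.1f}s per iteration")
```

So 20 users targeting 7,200 searches an hour must each complete an iteration (including think time) every 10 seconds; if responses slow down, the tool must shorten the remaining think time or the injected workload silently drops below the agreed rate.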

Performance tests usually need to be run multiple times as part of a series of test-tune cycles. When a performance bottleneck is detected, further tests are run with an ever-increasing amount of tracing, logging or monitoring taking place.  When the cause of the problem is identified, a solution is devised and implemented.  The performance test is then re-run to confirm that the bottleneck has been removed.

It is, of course, quite difficult to predict how many performance issues will be detected as part of a performance testing exercise.  The table below is a simplistic guide to the number of test-tune cycles that may be executed, depending on the origins of the application:


Application origins                 # of test-tune performance testing cycles required for:

                                    First release    Maintenance drop     Maintenance drop more
                                                     6 months after       than 6 months after
                                                     first release        first release

An off-the-shelf package with
a minimum of customisation                4                 3                      2

An off-the-shelf package,
heavily customised                        6                 3                      2

A bespoke application                    10                 6                      3

The Stress Test

This is a test that does not specifically measure the performance of the application.  It is designed to determine where the breaking point is.

The definition of ‘Breaking Point’ is subjective. While it could mean the complete and total collapse of one or more application components, or a collapse that would require a restart of the application, it is more likely to be determined by the term ‘Unacceptable Performance’.

Sorry to keep throwing definitions at you, but Unacceptable Performance is a combination of slow response times and a high rate of errors returned to the user.  It is quite simply the point where the user is unable to do any meaningful work with the application, even though the application is still servicing some requests.  A well-tuned application will never collapse in a heap; it will always degrade gracefully, so that when the high workload is removed, the queues reduce and the application catches up.

It is normal during stress testing (and performance testing for that matter) for errors to be returned by the application to the user. Good examples with a web-based application are an "HTTP 500" (internal server error) and an "HTTP 404" (page not found). Another good example is where the correlation in the script fails because the response from the application is incomplete or functionally incorrect.
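The combined test for Unacceptable Performance described above can be sketched as a simple check over a measurement interval. The 5-second response-time ceiling and 10% error-rate ceiling below are invented thresholds for illustration; real criteria would come from the application's non-functional requirements.

```python
def unacceptable(response_times_s, errors, requests,
                 max_response_s=5.0, max_error_rate=0.10):
    """True when an interval shows slow responses or a high error rate,
    i.e. the 'Unacceptable Performance' breaking point. Thresholds are
    hypothetical defaults, not recommendations."""
    if not requests:
        return True  # nothing serviced at all: clearly broken
    # Approximate 90th-percentile response time for the interval.
    slow = sorted(response_times_s)[int(0.9 * len(response_times_s))]
    return slow > max_response_s or errors / requests > max_error_rate

# One interval below both thresholds, one above:
print(unacceptable([0.4, 0.6, 0.5, 0.7], errors=1, requests=100))
print(unacceptable([2.0, 6.5, 8.0, 7.2], errors=25, requests=100))
```

Note the `or`: either symptom on its own marks the interval as unacceptable, which matches the observation that an application may still be servicing some requests while users can no longer do meaningful work.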

The objectives of a stress test would be something along the lines of:

•    Determine the maximum workload that the application can support

•    Find and resolve any bottlenecks

If the application fails to support the projected peak workload then, like the performance test, a number of test-tune cycles may be required.  In effect, the early stages of a stress test are similar to those of a performance test.