Best Practices for Performance Testing

Performance testing has proved crucial to the success of a business. It is no longer a token checkpoint before going live; it is a comprehensive, detailed stage that determines whether the performance of a site or an application meets requirements.

As testers, our job is to provide the best possible feedback on the readiness of an application as early as possible in the development cycle. Of course, in the real world deadlines are always tight and testing begins later than it should. We’ve found a few simple rules that have helped make our performance testing projects more effective.

Performance testing is a non-functional testing technique performed to determine system parameters in terms of responsiveness and stability under various workloads. It measures quality attributes of the system such as scalability, reliability, and resource usage.

The main types of performance testing are:

  1. Load Testing: The application is tested for its performance under normal and peak usage, checking how it responds to user requests (a minimal sketch appears after this list).
  2. Stress Testing: Stress testing looks for ways to break the system; it establishes the maximum load the system can hold.
  3. Volume Testing: A huge volume of data is entered into the database, and the test verifies that the application’s performance is not affected by the volume of data.
  4. Capacity Testing: Capacity testing determines how many users or transactions a given web application can support while still meeting performance goals. Resources such as processor capacity, network bandwidth, memory usage, and disk capacity are considered and adjusted during this testing.
  5. Reliability / Recovery Testing: This test verifies whether the application is able to return to its normal state after a failure or abnormal behavior.
  6. Scalability Testing: The objective is to determine the application’s effectiveness in “scaling up” to support an increase in user load.
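As an illustration of the load-testing idea, here is a minimal sketch in Python using the widely available requests library. The target URL, user count, and request volume are placeholder assumptions to adapt to your own system:

```python
# Minimal load-test sketch: drive concurrent requests at a target URL
# and record response times. All values here are illustrative.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "https://example.com/"   # hypothetical endpoint under test
CONCURRENT_USERS = 25                 # "normal" load; raise towards peak
TOTAL_REQUESTS = 100

def one_request(_):
    """Time a single user request and report its status."""
    start = time.perf_counter()
    response = requests.get(TARGET_URL, timeout=30)
    return response.status_code, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(one_request, range(TOTAL_REQUESTS)))

timings = sorted(t for _, t in results)
errors = sum(1 for status, _ in results if status >= 400)
print(f"requests: {len(results)}, errors: {errors}")
print(f"mean: {statistics.mean(timings):.3f}s, "
      f"p95: {timings[int(len(timings) * 0.95)]:.3f}s")
```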


Most performance problems revolve around speed, response time, load time, and poor scalability. Speed is often one of the most important attributes of an application; a slow-running application will lose potential users. Performance testing is done to make sure an app runs fast enough to keep a user’s attention and interest.

To achieve this, let’s glance at a few best practices.

Best Practices for Performance Testing

Assessment:

A proper assessment results in a better understanding with the client. Manage expectations and buy yourself enough time. Try to get answers to the following questions during assessment:

  • How much work is needed for testing?
  • Which tool will be used for testing?
  • What are the data requirements for testing?
  • What are the test execution requirements?
  • What are the customer’s expectations from testing?

Planning:

Always complete a test plan or strategy. It acts as a checklist that ensures nothing is missed.

  • Make a list of all responsible people.
  • Understand the system properly and prioritize the modules to be tested.
  • Create a list of requirements for executing the test.
  • Schedule the tasks according to priority.

Production environment and performance testing:

Understand the infrastructure of the production environment and aspire to have a production-like environment for performance testing. This helps create a production-like load on the performance test environment, and thus identify issues that would otherwise only appear in production.

Creating and customizing test cases:

Creating and customizing a test case generally follows these steps (a minimal sketch appears after this list):

  • Record the scenario in a browser while the testing software records the actions.
  • Configure the test case to simulate multiple user identities (for any system with a login).
  • Customize the test case to supply different inputs (search keywords, for example).
  • Replay the test case to verify correct simulation.
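Here is a minimal sketch of such a parameterized test case, assuming a hypothetical application with /login and /search endpoints; the field names and test data are illustrative placeholders, not a real API:

```python
# Each simulated user logs in with its own credentials and searches for
# a different keyword. Endpoints and data are assumptions for illustration.
import requests

BASE_URL = "https://example.com"  # hypothetical application under test

# Test data: one identity and one input per simulated user.
USERS = [
    {"username": "tester1", "password": "secret1", "keyword": "laptop"},
    {"username": "tester2", "password": "secret2", "keyword": "camera"},
]

for user in USERS:
    session = requests.Session()  # a separate session per user identity
    login = session.post(f"{BASE_URL}/login",
                         data={"username": user["username"],
                               "password": user["password"]})
    assert login.ok, f"login failed for {user['username']}"

    # Replay the recorded scenario with this user's own input.
    search = session.get(f"{BASE_URL}/search", params={"q": user["keyword"]})
    assert search.ok, f"search failed for keyword {user['keyword']}"
    print(user["username"], search.status_code, len(search.content))
```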

Scripting:

One thing that can help a lot is to have your team develop standards for recording and playback options and for maintaining performance test scripts. Documenting the recorded test scenarios should go along with that: it makes scripts more accessible to people who are new to the project. If they know what decisions were made and why, it is easier to read the code and understand the intent of the test, and they can sometimes make small changes without needing to re-record the script. The steps below help a lot while scripting; a data-driven sketch follows the list.

  • Record, and insert transaction measurements during recording.
  • Correlate and parameterize (inputs and dates).
  • Test with different data (not the same data used during scripting).
  • Insert checkpoints for every screen.
  • Verify script execution daily (test for day, month, and year changes).
  • Know your scripts and learn the system’s behavior.
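As a sketch of data-driven scripting with checkpoints, the following assumes a hypothetical test_data.csv with keyword and expected columns and a /search endpoint; all names are illustrative:

```python
# Read test inputs from a CSV file so the script never replays the exact
# data used during recording, and add a checkpoint per screen.
import csv

import requests

BASE_URL = "https://example.com"       # hypothetical system under test

# One row of inputs per iteration, e.g. columns: keyword,expected
with open("test_data.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for row in rows:
    response = requests.get(f"{BASE_URL}/search",
                            params={"q": row["keyword"]}, timeout=30)

    # Checkpoint: verify the screen rendered the expected content,
    # not merely that the server answered.
    assert response.status_code == 200, f"HTTP {response.status_code}"
    assert row["expected"] in response.text, (
        f"checkpoint failed for keyword {row['keyword']}")
```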

Test Execution:

Understand how the system behaves with each script. Note any changes to this behavior when the system is under load, and start pinpointing problems. Changes become more expensive later in the project, so the sooner performance problems are found, the more easily they are fixed.
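One simple way to observe such behavior changes is to compare a single-user baseline against the same request while the system is under background load; the endpoint, load level, and request count below are assumptions:

```python
# Compare a request at rest vs. the same request under background load.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/"   # hypothetical endpoint under test

def timed_get():
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

baseline = timed_get()  # system at rest

with ThreadPoolExecutor(max_workers=50) as pool:
    background = [pool.submit(timed_get) for _ in range(200)]  # load
    under_load = timed_get()            # same request while loaded
    for future in background:
        future.result()                 # wait for the load to drain

print(f"baseline: {baseline:.3f}s, under load: {under_load:.3f}s "
      f"({under_load / baseline:.1f}x slower)")
```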

Scaling up Infrastructure:

As app development proceeds and performance issues start to appear, it is tempting to scale the infrastructure, adding hardware to increase the capacity and thus the performance of the system. This is rarely cost-effective: as user load grows further you will want to add servers again, and at some point the approach stops being economical.

Test broadly:

Test broadly, then deeply. Testing a broad range of scenarios in a simple manner is better than testing a few test cases very deeply. Early in a project, an approximate simulation of the real world is acceptable; time spent making each test case exactly mimic the predicted real-world scenario, or testing dozens of slight variations of one scenario, is better spent covering a wide range of scenarios.

Configuration:

Test in a controlled environment. Testing without dedicated servers and good configuration management will yield results that are not reproducible, and if you cannot reproduce the results, you cannot accurately measure improvements when the next version of the system is ready for testing.

Good configuration management is therefore essential for consistent load test results. If the test environment changes frequently, it is difficult to compare the result of one test with another.
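A lightweight way to enforce this is to record a fingerprint of the environment configuration with every run and only compare runs whose fingerprints match; the fields captured below are examples, not a prescribed list:

```python
# Capture a snapshot of the test environment alongside each run.
import hashlib
import json
import platform

snapshot = {
    "python": platform.python_version(),
    "os": platform.platform(),
    "app_version": "1.4.2",       # assumed; read from your build system
    "db_host": "db-test-01",      # assumed dedicated test database server
}
fingerprint = hashlib.sha256(
    json.dumps(snapshot, sort_keys=True).encode()).hexdigest()[:12]

# Store this with the results; compare runs only when fingerprints match.
print(f"environment fingerprint: {fingerprint}")
```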

Result Analysis:

Scripting and baseline testing are the best time to isolate problems and pinpoint them to specific areas. Don’t wait for the final run: complete a result summary after each test and a full report at the end of each project phase.
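For example, a per-test result summary can be produced from a simple log of response times, so each run is comparable against the baseline; the log format here is an assumption:

```python
# Summarize throughput and latency percentiles from a response-time log.
import statistics

# One response time in seconds per line, e.g. written by the load script.
with open("response_times.log") as f:
    timings = sorted(float(line) for line in f if line.strip())

def percentile(data, pct):
    """Nearest-rank percentile over a pre-sorted list."""
    return data[min(int(len(data) * pct), len(data) - 1)]

print(f"samples: {len(timings)}")
print(f"mean:   {statistics.mean(timings):.3f}s")
print(f"median: {statistics.median(timings):.3f}s")
print(f"p90:    {percentile(timings, 0.90):.3f}s")
print(f"p95:    {percentile(timings, 0.95):.3f}s")
```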


By allocating adequate project resources and time, and by using a systematic approach to performance testing, you can avoid most of a system’s performance pitfalls. However experienced you are, and whatever knowledge you have gained about the technology and the system, you can never guess where the next performance issue will come from. The wise approach is always to test and retest the application with every important build.
