Performance Testing

Performance testing is a form of software testing that evaluates how an application performs in a defined environment under a specified load. It is conducted to check the compliance of a system or component with specified performance requirements. It examines the response time, reliability, resource usage, and scalability of a software program under its expected workload, and it is also used for capacity planning. The purpose of performance testing is not to find functional defects but to eliminate performance bottlenecks in the software or device.

Types of Performance Testing

Load testing – Measures the application's performance under different loads. The target is to check the application's ability to perform under the load anticipated in production. The objective is to identify performance bottlenecks before the software application goes live.
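As a rough illustration, the sketch below simulates a fixed number of concurrent users hitting one endpoint and collects response times. The URL, user count, and request count are invented values, and real load tests would normally use a dedicated tool (JMeter, Gatling, etc.) rather than a hand-rolled script.

```python
# Minimal load-test sketch: N concurrent "users" repeatedly hit one URL and
# response times are collected. All values below are illustrative assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

TARGET_URL = "http://localhost:8080/"   # hypothetical endpoint under test
USERS = 50                              # anticipated concurrent users
REQUESTS_PER_USER = 20

def user_session(_) -> list[float]:
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=USERS) as pool:
    all_timings = [t for session in pool.map(user_session, range(USERS)) for t in session]

print(f"requests: {len(all_timings)}, avg: {mean(all_timings):.3f}s, max: {max(all_timings):.3f}s")
```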

Stress testing – Measures system performance outside the parameters of normal working conditions. It involves testing an application under extreme workloads to see how it handles very high traffic or data processing. The objective is to identify the breaking point of the application.
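One simple way to approximate this is to ramp the concurrency up in steps until the error rate crosses a threshold. The endpoint, step sizes, and 5% threshold below are assumptions chosen only for illustration.

```python
# Stress-test sketch: concurrency is ramped up in steps until the error rate
# crosses a threshold, approximating the application's breaking point.
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"   # hypothetical endpoint under test

def one_request(_) -> bool:
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:                      # timeouts, refused connections, etc.
        return False

for concurrency in (50, 100, 200, 400, 800):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(one_request, range(concurrency * 10)))
    error_rate = 1 - sum(results) / len(results)
    print(f"{concurrency} workers -> error rate {error_rate:.1%}")
    if error_rate > 0.05:                # 5% errors treated as the breaking point
        print(f"breaking point reached near {concurrency} concurrent workers")
        break
```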

Soak testing (endurance testing) – Makes sure the software can handle the expected load over a long period of time under otherwise normal conditions. This exposes slow-building issues such as memory leaks.
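A soak run is mostly about duration: a steady load runs for hours while memory is sampled so that a slow upward drift becomes visible. The sketch below assumes a Unix system (the `resource` module) and only watches the load generator itself; the application server's memory would be watched with OS tooling or an APM agent.

```python
# Soak-test sketch: steady light load over a long window with periodic
# memory samples. URL and duration are illustrative assumptions.
import resource
import time
import urllib.request

TARGET_URL = "http://localhost:8080/"   # hypothetical endpoint under test
DURATION_SECONDS = 2 * 60 * 60          # e.g. a 2-hour soak window

end = time.time() + DURATION_SECONDS
requests_sent = 0
while time.time() < end:
    urllib.request.urlopen(TARGET_URL, timeout=10).read()
    requests_sent += 1
    if requests_sent % 60 == 0:
        # ru_maxrss is this process's peak resident memory (Unix only); the
        # server's own memory trend is what a real soak test would chart.
        rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print(f"{requests_sent} requests sent, load-generator peak RSS: {rss_kb} KB")
    time.sleep(1)
```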

Spike testing – Evaluates software performance when the workload is increased substantially and repeatedly over short periods of time, beyond normal expectations. It also verifies that once the spike subsides and the load decreases, application performance returns to normal.
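A spike pattern can be sketched by alternating short bursts of heavy concurrency with quiet baseline periods and comparing the post-spike response time with the baseline. The endpoint and numbers below are illustrative assumptions, not recommended values.

```python
# Spike-test sketch: bursts of heavy concurrency alternate with quiet periods;
# recovery is checked by comparing post-spike timings with the baseline.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

TARGET_URL = "http://localhost:8080/"   # hypothetical endpoint under test

def timed_request(_) -> float:
    start = time.perf_counter()
    urllib.request.urlopen(TARGET_URL, timeout=10).read()
    return time.perf_counter() - start

def run(concurrency: int, requests: int) -> float:
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return mean(pool.map(timed_request, range(requests)))

for cycle in range(3):
    baseline = run(concurrency=10, requests=50)     # normal load
    spike = run(concurrency=500, requests=1000)     # sudden spike
    recovery = run(concurrency=10, requests=50)     # back to normal
    print(f"cycle {cycle}: baseline {baseline:.3f}s, spike {spike:.3f}s, recovery {recovery:.3f}s")
```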

Volume testing – Determines how efficiently the software performs with a large, projected amount of data. In this type of testing a large volume of data is populated and the overall system's behavior is monitored. The objective is to check the application's performance under varying volumes of test data.
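The idea can be shown by populating a table at progressively larger volumes and timing the same query at each size. SQLite stands in for the real database purely for illustration; the schema, row counts, and query are all assumptions.

```python
# Volume-test sketch: the same query is timed against increasing data volumes.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")

for volume in (10_000, 100_000, 1_000_000):
    conn.execute("DELETE FROM orders")
    conn.executemany(
        "INSERT INTO orders (customer, amount) VALUES (?, ?)",
        ((f"customer-{i}", i * 0.5) for i in range(volume)),
    )
    conn.commit()
    start = time.perf_counter()
    conn.execute("SELECT customer, SUM(amount) FROM orders GROUP BY customer LIMIT 10").fetchall()
    print(f"{volume:>9} rows -> query took {time.perf_counter() - start:.3f}s")
```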

Scalability testing – Determines the software application's effectiveness in "scaling up" to support an increase in user load. It helps plan capacity additions to your software system.
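In practice this often means stepping the user load up and recording throughput at each step; the point where throughput stops growing shows how far the current setup scales. The endpoint and step sizes below are illustrative assumptions.

```python
# Scalability-test sketch: user load is stepped up and requests-per-second
# throughput is recorded at each step.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"   # hypothetical endpoint under test

def one_request(_) -> None:
    urllib.request.urlopen(TARGET_URL, timeout=10).read()

for users in (10, 25, 50, 100, 200):
    total_requests = users * 20
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(one_request, range(total_requests)))
    elapsed = time.perf_counter() - start
    print(f"{users:>3} users -> throughput {total_requests / elapsed:.1f} req/s")
```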

General Test scenarios:

  • To determine the performance, stability and scalability of an application under different load conditions.
  • To determine if the current architecture can support the application at peak user levels.
  • To determine which configuration sizing provides the best performance level.
  • To identify application and infrastructure bottlenecks.
  • To determine whether a new version of the software has adversely affected response time.
  • To evaluate product and/or hardware to determine if it can handle projected load volumes.

Here are a few examples:

  • Verify response time is not more than 4 secs when 1000 users access the website simultaneously.
  • Verify that the response time of the application under load is within an acceptable range when network connectivity is slow.
  • Check the maximum number of users that the application can handle before it crashes.
  • Check database execution time when 500 records are read/written simultaneously.
  • Check CPU and memory usage of the application and the database server under peak load conditions (a monitoring sketch follows this list).
  • Verify response time of the application under low, normal, moderate and heavy load conditions.
  • Check if the page load time is within the acceptable range.
  • Check the page load on slow connections.
  • Check the response time for any action under light, normal, moderate, and heavy load conditions.
  • Check the performance of database stored procedures and triggers.
  • Check the database query execution time.
  • Verify number of open DB connections under peak load.
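For the CPU and memory check mentioned above, the application's and database server's processes can be sampled while a load test runs. The sketch below assumes the third-party psutil package and uses made-up process IDs.

```python
# Resource-usage sketch: sample CPU and resident memory of the application and
# database processes during peak load. PIDs below are hypothetical.
import time
import psutil

MONITORED = {"app": 12345, "db": 23456}   # hypothetical process IDs

for _ in range(30):                        # sample for ~30 seconds during peak load
    for name, pid in MONITORED.items():
        proc = psutil.Process(pid)
        cpu = proc.cpu_percent(interval=None)   # meaningful from the second sample on
        rss_mb = proc.memory_info().rss / (1024 * 1024)
        print(f"{name}: cpu {cpu:.1f}%  rss {rss_mb:.1f} MB")
    time.sleep(1)
```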

During actual performance test execution, vague terms like "acceptable range" and "heavy load" are replaced with concrete numbers. Performance engineers set these numbers according to the business requirements and the technical landscape of the application.

Commonly Measured Performance Testing Metrics

  • Response time
  • Wait time/latency
  • Average load time
  • Peak response time
  • Error rate
  • Concurrent users
  • Throughput/requests per second
  • Transactions passed/failed
  • CPU utilization
  • Memory utilization
  • Garbage collection frequency
  • Reads/writes per second
  • Connection pooling information
  • Load balancing across different instances/nodes of the system
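Several of these metrics fall straight out of the raw per-request records (elapsed time plus a success flag) that any load tool collects. A minimal sketch, with invented sample data:

```python
# Metric-calculation sketch: average/peak response time, error rate, and
# throughput computed from raw per-request records. Sample data is made up.
from statistics import mean

samples = [(0.21, True), (0.35, True), (1.90, False), (0.28, True), (0.40, True)]
test_duration_seconds = 2.0

timings = [t for t, _ in samples]
errors = sum(1 for _, ok in samples if not ok)

print(f"average response time: {mean(timings):.3f}s")
print(f"peak response time:    {max(timings):.3f}s")
print(f"error rate:            {errors / len(samples):.1%}")
print(f"throughput:            {len(samples) / test_duration_seconds:.1f} req/s")
```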

Performance Testing Myths or Fallacies:

QA should design the performance test scenarios

Teams often ask QA to prepare the performance test scenarios, which is not the right approach. This should be done by the application architect, who understands the actual technical implementation, in coordination with the solution lead or business analyst, who understands how the application is actually used. The most critical paths should be covered, along with the most common flows.

More hardware can fix performance issues

Adding processors, servers, or memory often simply adds to the cost without solving the underlying problem. In many cases the application's performance problems persist even after additional hardware such as memory has been added.

The Thorough Testing Fallacy

Thinking that one performance test will prevent all problems is itself a problem. When we carry out performance testing, we aim (because of time and resource restrictions) to detect the riskiest problems, the ones with the greatest negative impact. There are two camps: one thinks a single scenario is good enough, while the other thinks every functional scenario should be performance tested. My recommendation is to identify the most critical paths and the most common flows and test those.

Performance testing is the last step in development

As mentioned in the section on performance testing best practices, anticipating and solving performance issues should be an early part of software development. Implementing solutions early is far less costly than making major fixes at the end of software development.

Software developers are too experienced to need performance testing

Lack of experience is not the only reason behind performance issues. Mistakes are made — even by developers who have created issue-free software in the past. Many more variables come into play — especially when multiple concurrent users are in the system.

Open source tools can achieve the goal

Many believe there is no need to spend money on performance testing tools and that open source tools like JMeter can provide everything, which is not correct. No open source tool provides full support for asynchronous calls triggered from the application, and managing those manually, while technically feasible, is not practical.
