
How to Write an Effective Performance Testing Report?


Performance testing is conducted to check an application's performance (its response under different user loads) before it goes live. The activity answers questions such as: how many users can the application handle without performance degrading, which pages or transactions take the longest, how much time the system needs to recover from a crash, and where the performance bottlenecks exist.

Performance testing is a complex activity involving many factors (e.g. response time, throughput, hits per second and system resource utilization) that are not easy for a layperson to understand. So, developing a good performance test report that addresses all of the application's performance parameters and concerns in an easy, understandable manner is a big challenge for performance testers.
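To make these parameters concrete, here is a minimal, tool-agnostic sketch in Python of how response time, throughput and error rate can be derived from raw request samples. The sample data and field layout are hypothetical placeholders, not the output format of any particular tool.

```python
# A minimal sketch of deriving core report metrics from raw samples.
# Each sample is (timestamp in seconds, response time in ms, succeeded?).
from statistics import mean, quantiles

samples = [
    (1000.0, 250, True),
    (1000.2, 310, True),
    (1000.5, 905, False),
    (1001.1, 480, True),
    (1001.9, 300, True),
]

elapsed = [ms for _, ms, _ in samples]
duration_s = max(t for t, _, _ in samples) - min(t for t, _, _ in samples) or 1.0

avg_response_ms = mean(elapsed)
p90_response_ms = quantiles(elapsed, n=10)[-1]       # 90th percentile response time
throughput_rps = len(samples) / duration_s           # requests (hits) per second
error_rate = sum(1 for _, _, ok in samples if not ok) / len(samples)

print(f"Avg response: {avg_response_ms:.0f} ms, 90th pct: {p90_response_ms:.0f} ms")
print(f"Throughput:   {throughput_rps:.2f} req/s, error rate: {error_rate:.1%}")
```

A report built from numbers like these is far easier to defend than one that simply restates a tool's summary screen, because the reader can see exactly how each figure was obtained.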


Before elaborating on the actual topic, there are a few prerequisites for producing a good performance testing report (e.g. understanding the application infrastructure, having domain knowledge, and knowing the success and failure criteria) that need to be understood. This is important because, without information about the application infrastructure and its usage patterns, it becomes very difficult to set up a testing environment that is an almost exact replica of the production environment, and it is always hard to interpret the results when you don't have defined pass and fail criteria. We usually run into this problem while conducting performance testing because clients don't pay attention to it, and it affects the results in the end.
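As an illustration of the "defined pass and fail criteria" point, below is a small sketch of how those criteria can be written down as explicit thresholds before the test and checked against measured values afterwards. The metric names and threshold numbers are hypothetical placeholders; the real values need to be agreed with the client up front.

```python
# A minimal sketch of making pass/fail criteria explicit so the report can
# compare measured values against agreed thresholds (values are illustrative).
PASS_FAIL_CRITERIA = {
    "avg_response_ms": {"max": 2000},   # average response time under 2 s
    "p90_response_ms": {"max": 3000},   # 90th percentile under 3 s
    "error_rate":      {"max": 0.01},   # no more than 1% failed requests
    "throughput_rps":  {"min": 100},    # sustain at least 100 requests/second
}

def evaluate(measured: dict) -> list[str]:
    """Return a PASS/FAIL verdict line for each criterion, ready for the report."""
    lines = []
    for name, limits in PASS_FAIL_CRITERIA.items():
        value = measured[name]
        ok = limits.get("min", 0) <= value <= limits.get("max", float("inf"))
        lines.append(f"{name}: {value} -> {'PASS' if ok else 'FAIL'}")
    return lines

print("\n".join(evaluate({
    "avg_response_ms": 1450, "p90_response_ms": 3400,
    "error_rate": 0.004, "throughput_rps": 120,
})))
```

Writing the criteria down in this form, even informally, removes most of the ambiguity when the results are presented at the end.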

Performance testing is done with different automated tools, and every tool generates its own report. Although these reports display useful graphs and values for different performance parameters, the question is: can we rely on these reports alone? The answer is no, because firstly, these reports give no idea of the test scenario, testing environment or pass/fail criteria, and secondly, you always need to interpret them to draw a conclusion.

Another option is to develop a manual report containing the tool results plus all the relevant information the tool report may have missed. But again, a purely manual report is difficult for clients to understand (it lacks the tool's graphs) and, more importantly, to believe, since the results show practically no visible output from the tool. I faced this situation in one of my performance testing projects where I relied only on the manual report; the client was unable to understand it properly, and we had to show and explain the tool-generated reports afterwards.

So, after experiencing both of the above approaches, a mix of both manual and tool reports is the right choice. It wouldn't be wrong to say that the best way to get the most out of a performance test report is to define the testing environment, test scenarios and pass/fail criteria manually, then add the tool results along with their graphical representation, and finally provide a proper analysis of those results in your own words.
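To show what this mix can look like in practice, here is a minimal sketch that stitches manually written context sections together with tool-derived numbers into a single document. Every environment detail, metric value and analysis sentence below is a made-up placeholder, not a fixed template.

```python
# A minimal sketch of the "mixed" report: manual context sections plus
# tool-derived numbers (and pointers to the tool's graphs) in one document.
manual_sections = {
    "Test Environment": "4x app servers (8 vCPU/16 GB), 1x DB server, staging data set.",
    "Test Scenario": "500 concurrent users, 30-minute steady state, login-search-checkout flow.",
    "Pass/Fail Criteria": "Avg response < 2 s, 90th pct < 3 s, error rate < 1%.",
}
tool_results = {
    "Avg response (ms)": 1450,
    "90th pct response (ms)": 3400,
    "Throughput (req/s)": 120,
    "Error rate": "0.4%",
}

report = ["# Performance Test Report", ""]
for title, body in manual_sections.items():
    report += [f"## {title}", body, ""]
report += ["## Tool Results (see attached graphs)"]
report += [f"- {metric}: {value}" for metric, value in tool_results.items()]
report += ["", "## Analysis",
           "90th percentile exceeded the 3 s target; the checkout pages are the main bottleneck."]

print("\n".join(report))
```

However the document is actually assembled, the point is the same: the manual sections give the tool's graphs their context, and the tool's numbers give the manual analysis its credibility.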