An Eye On Quality: Testing In the Absence Of Performance Benchmarks
- March 20, 2020
- Hiba Sulaiman
Testing without requirements? Is that really possible, when requirements are a cornerstone of quality? Functional testing against documented requirements is the norm: software testing teams need pass/fail criteria when testing various features and functionalities. Performance, however, is subjective by nature, so explicit criteria are required to judge whether a system's performance is good or bad.
Typically, software applications are tested against stated requirements. Sometimes, though, performance requirements never surface: project owners racing to meet deadlines focus on adding features and fixing bugs, and are left with no time for performance requirements. Performance testing companies need solutions for exactly these situations.
Test plans should come with requirements, so make sure standard benchmarks are defined in the agile sprints when conducting performance testing. The plans should include pass/fail criteria based on those requirements. Before beginning performance testing, it is preferable to have the following in place:
- Goals (which can be subjective)
- Requirements (which should be objective)
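Those requirements translate directly into pass/fail checks. A minimal sketch in Python, with hypothetical thresholds chosen purely for illustration (they are not industry standards):

```python
# Minimal sketch of pass/fail criteria for a performance test plan.
# All threshold values are hypothetical examples, not standards.

# Objective requirements agreed before the sprint (assumed values).
REQUIREMENTS = {
    "p95_response_ms": 2000,   # 95th-percentile response time
    "max_error_rate": 0.01,    # at most 1% failed requests
    "min_throughput_rps": 50,  # requests per second sustained
}

def evaluate(results: dict) -> dict:
    """Return a pass/fail verdict for each requirement."""
    return {
        "p95_response_ms": results["p95_response_ms"] <= REQUIREMENTS["p95_response_ms"],
        "max_error_rate": results["error_rate"] <= REQUIREMENTS["max_error_rate"],
        "min_throughput_rps": results["throughput_rps"] >= REQUIREMENTS["min_throughput_rps"],
    }

# Example run results (also hypothetical).
run = {"p95_response_ms": 1800, "error_rate": 0.004, "throughput_rps": 62}
verdict = evaluate(run)
print(verdict)
print("overall:", "PASS" if all(verdict.values()) else "FAIL")
```

The point is less the code than the shape: once each requirement is an explicit number, the test run produces an unambiguous verdict instead of a subjective opinion.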
Quit Or Accept The Challenge
Testers assigned a performance testing project that lacks requirements can simply give up and walk away, though that carries consequences of its own. Alternatively, they can push back and insist that the project manager define success criteria for their testing efforts. If no feedback comes from those responsible, testers still have to perform the task. Their first challenge is identifying who owns the app's performance goals, expectations, and non-functional requirements.
If QA testers still receive no requirements and choose to stay the course, they should document the situation in their test plans to protect themselves later. They should assume they are in exploratory mode, committed to working without formal requirements, and set their own performance benchmarks, requirements, and goals.
Testers should document everything they do during the performance testing process. Once test execution is complete, they should report their findings; deep analysis is better deferred until the results are in. Their task is to determine the performance-related status of the system under test within the constraints of the tools available to them, the test environment, and the state of the code at the time of testing.
Test Reporting Basics
When testers do not have requirements, they are expected to construct their own. Performance testing companies should help their testers set the testing goals, and the conclusion of the testing effort should provide the following basic results for the system under test:
- How many concurrent users could the app under test support?
- At what speed was the work performed, and what were the response times?
- What was the work throughput under the various test conditions?
Testers should ensure their testing goals address size, speed, and rate of change. The test plan should answer the following:
- How heavily was the system loaded during the test compared to its quiet, steady state in the test environment? What was the size of the load?
- How fast did the units of work process?
- How did system behavior change in response to varying the load applied during testing?
- What did the graphs show for system resource consumption and response times over the course of the test?
The rate of change can be a useful metric in the creation of a performance requirement.
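The size, speed, and rate-of-change metrics above can all be derived from raw per-request samples. A hedged sketch, using made-up sample data for illustration:

```python
# Sketch: deriving basic report metrics (throughput, speed, rate of change)
# from raw per-request samples. The sample data below is invented.
import statistics

# (timestamp_s, response_time_ms) samples from a hypothetical test run
samples = [(0, 120), (1, 130), (2, 150), (3, 170), (4, 210),
           (5, 260), (6, 320), (7, 400), (8, 500), (9, 620)]

duration_s = samples[-1][0] - samples[0][0]

# Throughput: units of work completed per second (the "size" of the run)
throughput = len(samples) / duration_s

# Speed: how fast the units of work were processed on average
avg_ms = statistics.mean(rt for _, rt in samples)

# Rate of change: how quickly response times degraded as the test ran
growth_ms_per_s = (samples[-1][1] - samples[0][1]) / duration_s

print(f"throughput: {throughput:.2f} req/s")
print(f"mean response: {avg_ms:.0f} ms")
print(f"response-time growth: {growth_ms_per_s:.0f} ms/s")
```

A steadily rising growth figure like this one is exactly the kind of observation that can be promoted into a requirement (e.g. "response times must not degrade more than X ms per minute under sustained load").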
Do The Math
Do you know the volume of work units? For instance, if 100 insurance policies are created in an hour, can you determine the number of users on the system at peak hours? Testers can formulate a peak load by multiplying the average by an arbitrary coefficient, and should try to determine how long it takes an average user to create one record within a predefined time limit.
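The arithmetic above is simple enough to sketch. Every number here is a hypothetical assumption, including the peak coefficient, which the article itself notes is arbitrary:

```python
# Back-of-envelope peak-load math. All inputs are assumed, not measured.

records_per_hour = 100      # e.g., insurance policies created per hour
minutes_per_record = 5      # assumed time for an average user to create one
peak_coefficient = 2.0      # arbitrary multiplier for peak vs. average load

# Work one user can complete in an hour
records_per_user_per_hour = 60 / minutes_per_record  # 12 records/user/hour

# Users needed to sustain the average rate
avg_concurrent_users = records_per_hour / records_per_user_per_hour

# Peak load: average multiplied by the arbitrary coefficient
peak_users = avg_concurrent_users * peak_coefficient

print(round(avg_concurrent_users, 1))  # 8.3
print(round(peak_users, 1))            # 16.7
```

Roughly eight concurrent users at average load, seventeen at peak: crude, but it turns "we have no requirements" into a concrete load target the team can argue about and refine.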
The expectations for response time and system resource consumption can be derived by researching how other applications are performing. Unfortunately, when it comes to assessing system resource consumption (CPU, etc.) there are no industry standards available. The tester will have to work with system engineers and architects to decide whether the resource consumption during testing is successful.
Setting Response-Time Limits
Industry-standard response-time limits can be useful in establishing performance requirements. The Nielsen Norman Group has published three response-time limits that can help construct benchmarks for a project's application:
- 0.1 second – Instantaneous response. This level of responsiveness is needed to support direct manipulation, one of the key GUI techniques for enhancing user engagement and control.
- 1 second – A seamless flow of thought. Users notice the delay and sense that the system is working on a result, yet they still feel in control and free to act rather than waiting on the computer. This degree of responsiveness supports smooth navigation.
- 10 seconds – The limit for keeping a user's attention. Users will tolerate a delay of 1–10 seconds; beyond that, they grow impatient and turn their minds to other things until the computer responds.
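These three Nielsen Norman Group limits (0.1 s, 1 s, 10 s) lend themselves to a simple bucketing check over measured response times. The bucket names below are my own labels, not standard terminology:

```python
# Bucket measured response times against the three Nielsen Norman Group
# response-time limits (0.1 s, 1 s, 10 s). Labels are illustrative.

def classify(seconds: float) -> str:
    if seconds <= 0.1:
        return "instantaneous"      # supports direct manipulation
    if seconds <= 1.0:
        return "seamless"           # user's flow of thought is unbroken
    if seconds <= 10.0:
        return "attention-keeping"  # user notices the delay but waits
    return "attention-lost"         # user turns to other tasks

for t in (0.05, 0.8, 4.0, 15.0):
    print(f"{t:>5} s -> {classify(t)}")
```

Run against a set of measured response times, a tally of these buckets gives a defensible first draft of a response-time requirement when no one has written one down.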
Testers face multiple challenges when they do not have performance testing requirements, and performance testing companies should be prepared for such situations. This is where QA testers perform their magic and prove their value as creators of technical assets. We hope the steps above help you assure quality by setting performance standards when metrics are missing. Happy performance testing!