
Performance Testing Tutorial – starting point

I’ve noticed that the subject of performance testing is still a somewhat unknown area for most Test Engineers. We tend to focus mainly on the functional aspects of our testing, leaving performance, scaling and tuning in developers’ hands. Isn’t stability a substantial part of software quality? Especially in times of distributed computing, when we’re scaling applications independently and relying on integrations over the HTTP protocol. Another aspect is the ability to scale our systems up. In order to handle traffic growth, we have to be aware of our bandwidth limitations.

There are a few well-known tools among engineers, such as JMeter, Gatling or Tsung. Although these tools are relatively simple to use, what’s often confusing is analysing the test results and drawing conclusions from them. During interviews for a Test Engineer role I often meet candidates claiming to be experienced in the field of performance testing, yet lacking knowledge of any performance-related metrics or elementary concepts. Since the main value of load and performance testing is not the toolset itself, but the knowledge you gain from it – the aim of this article is to gather the core concepts of this area.


Performance Test vs Load Test 

One of the biggest confusions is the difference between performance and load testing. These two terms are often used interchangeably – but they are clearly not the same.

The purpose of performance tests is to find bottlenecks in our system or architecture. There’s a saying that a system is only as fast as its slowest service. Let’s imagine that our system consists of a few different microservices, each with its own response time and expected load. There are even more variables in the equation, like the type of database, the servers or even the data-center location. An application’s users require two things: fast response times and high availability. With performance tests we can find the bottlenecks in our architecture and scale, profile or tune services independently, in order to deliver those fast response times to end users.

What about high availability then? This is where load testing comes in. In simple words, load testing means exercising our system with a large number of concurrent users or connections – the load. We keep increasing this number in order to find the largest amount of work the system can handle. Load tests are especially useful when we’re releasing a new service and want to check whether it meets our traffic estimations. The aim of this kind of test is to check the availability of the whole system, not the performance of single services. Although load and performance tests may look similar in terms of toolset and execution, they differ when it comes to result analysis and the actions that follow.
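To make the idea more tangible, here is a minimal sketch of a load generator written in Python using only the standard library. The endpoint URL, user count and request count are assumptions for illustration – a real load test would normally be driven by a dedicated tool such as JMeter or Gatling:

```python
# Minimal load-test sketch: fire a fixed number of concurrent "users",
# each issuing several requests, and report overall throughput.
# TARGET_URL and the counts below are placeholders, not real values.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"  # hypothetical service under test
CONCURRENT_USERS = 50
REQUESTS_PER_USER = 20

def single_request(url):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            status = response.status
    except Exception:
        status = None  # treat transport errors as failures
    return status, time.perf_counter() - start

def user_session(url, request_count):
    return [single_request(url) for _ in range(request_count)]

if __name__ == "__main__":
    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        sessions = pool.map(user_session,
                            [TARGET_URL] * CONCURRENT_USERS,
                            [REQUESTS_PER_USER] * CONCURRENT_USERS)
        results = [r for session in sessions for r in session]
    elapsed = time.perf_counter() - started
    failures = sum(1 for status, _ in results if status is None or status >= 500)
    print(f"requests: {len(results)}, elapsed: {elapsed:.1f}s, "
          f"throughput: {len(results) / elapsed:.1f} rps, failures: {failures}")
```

Increasing CONCURRENT_USERS step by step and watching when throughput stops growing (or failures appear) is, in essence, what a load test does.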

Latency, Throughput and Bandwidth 

As we’ve said already, the most crucial part of performance and load testing is analysing the results. In order to do so, you have to be aware of the fundamental performance metrics. Especially in the world of network-based communication, it’s important to measure latency, throughput and bandwidth:

  • Latency is the time interval between the request and the response. For example, when your service has a latency of 100 ms, it means that your service needs 100 milliseconds to process a request and generate a response. Lower latency usually means a better user experience.
  • Throughput is the actual number of requests (or users) that your system/application processes in a given period of time. While latency only gives you information about time, throughput tells you about the volume of traffic being received and processed at the moment. It’s important not to look at latency in isolation from throughput, because high latency is often directly connected with growth in throughput. Throughput is usually measured in rps – requests per second.
  • Bandwidth is the maximum number of requests (or users) that can be processed. In contrast to throughput, which reflects the actual rate, bandwidth measures the maximum volume your application can handle.

While bandwidth is usually constant (for a given period in time), it’s very important to analyse latency and throughput together, because only then do those metrics give you a clear view of your application’s performance.
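To see why the two belong together, here is a small sketch that groups a request log by second and prints throughput next to average latency. The log entries are invented purely for illustration (each tuple is a request timestamp in epoch seconds and a latency in milliseconds):

```python
# Derive throughput (requests per second) and average latency per second
# from a request log, so both metrics can be read side by side.
from collections import defaultdict

request_log = [
    (1000, 20), (1000, 22), (1000, 21),
    (1001, 35), (1001, 40), (1001, 38), (1001, 42), (1001, 37),
    (1002, 180), (1002, 210), (1002, 195), (1002, 220), (1002, 205),
    (1002, 190), (1002, 215), (1002, 200), (1002, 185),
]

per_second = defaultdict(list)
for second, latency_ms in request_log:
    per_second[second].append(latency_ms)

for second in sorted(per_second):
    latencies = per_second[second]
    throughput = len(latencies)                      # requests in this second = rps
    avg_latency = sum(latencies) / len(latencies)
    print(f"t={second}s  throughput={throughput} rps  avg latency={avg_latency:.0f} ms")
```

In this invented data the latency climbs as throughput grows – exactly the kind of correlation you would miss by looking at either metric alone.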

Percentiles 

Having measured latency, one of the first use cases that comes to mind is calculating the average latency over a given period of time. The first statistic one could reach for is the arithmetic mean. There’s a problem though: the arithmetic mean is very sensitive to outliers. Since latency plots are usually quite steady with a few notable peaks, percentiles are a better statistic in this case. If you want to estimate the typical latency of your service you can use the median, which is the 50th percentile (p50). Remember though that p50 is still quite susceptible to statistical fluctuations. The most common statistics for measuring response time are the 90th and 99th percentiles (p90 and p99). For example, if the p90 latency is 1 ms, that means that in 90% of cases your service responds within 1 ms.
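A quick sketch makes the difference between the mean and percentiles obvious. The latency samples below are made up, with two deliberate peaks, and the percentile function uses the simple nearest-rank method:

```python
# Compare the arithmetic mean with p50/p90/p99 on a skewed sample.
latencies_ms = [12, 15, 11, 13, 14, 250, 12, 16, 13, 900]

def percentile(values, p):
    """Nearest-rank percentile: the value below which p% of samples fall."""
    ordered = sorted(values)
    index = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[index]

print("mean:", sum(latencies_ms) / len(latencies_ms))  # 125.6 ms, skewed by the peaks
print("p50: ", percentile(latencies_ms, 50))           # 13 ms – the typical request
print("p90: ", percentile(latencies_ms, 90))           # 250 ms
print("p99: ", percentile(latencies_ms, 99))           # 900 ms – the worst outlier
```

The mean suggests a service responding in over 100 ms, while the percentiles show that the typical request finishes in about 13 ms and only the slowest tail is problematic.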

Error rates 

As we’ve already said, throughput lets us measure the volume of traffic our service is receiving, but what can we say about the responses themselves? It does matter whether we’re responding with 2xx, 4xx or 5xx HTTP codes. This is where error rate comes in. The goal of monitoring error rates is to be able to say how many (or what percentage) of our responses are successful. There’s always going to be some fraction of traffic that results in errors (for example 4xx codes caused by client-side validation). However, if we notice sudden peaks in the error rate, it may mean we’re in trouble. You can see an example plot of error rates in the image featured in this article.
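Calculating an error rate from collected responses is straightforward. Here is a short sketch grouping HTTP status codes by class; the sample codes are invented for illustration:

```python
# Group response status codes by class (2xx, 4xx, 5xx) and compute shares.
from collections import Counter

status_codes = [200, 200, 201, 404, 200, 500, 200, 503, 400, 200]

by_class = Counter(code // 100 for code in status_codes)
total = len(status_codes)

for code_class in sorted(by_class):
    share = by_class[code_class] / total * 100
    print(f"{code_class}xx: {by_class[code_class]} responses ({share:.1f}%)")

server_error_rate = by_class.get(5, 0) / total * 100
print(f"server error rate (5xx): {server_error_rate:.1f}%")
```

In practice you would compute this per time window (e.g. per minute) and alert on sudden growth in the 5xx share rather than on its absolute value.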

Summary 

I’ve noticed lately that many Test Engineers master performance testing tools without fundamental knowledge of the performance and load testing domain. In order to work effectively on scaling and profiling our systems, we should first know what to measure – how to measure it comes second.
