What’s the ideal strategy for load testing? Some believe a fast and efficient load test is your best option. Others say a realistic test is a better use of your time.
Most testers lean toward the realistic test, but realism can be incredibly time-consuming.
It not only delays testing but often introduces unnecessary risk.
In Extreme Programming Explained, authors Kent Beck and Cynthia Andres advise catching issues early and fixing them before they get worse, throughout the development lifecycle.
A third option is to go with “good enough” testing. In many cases, this approach leads to better results.
If you spend 20% of your time on configuring the test and still learn 80% of what you needed to know, then you can find problems when they are easy and inexpensive to fix.
The 80/20 Rule For Load Testing
The 80/20 rule is all about the idea that 20% of your efforts lead to 80% of your results.
Creating a load test is a great example of the 80/20 rule, because configuring one is hard.
There are a lot of parameters you have to tweak to obtain the type of user behavior and simulation you need.
A lot of people get into load testing wanting to simulate reality perfectly and learn exactly how many users their production system can handle.
The problem is that simulating reality is genuinely hard. It takes a lot of time and energy, and most of the time those efforts miss their target anyway, leaving you with an unrealistic simulation.
The result is an expensive, time-consuming load test that may still not give you the answers you seek.
Companies and organizations that spend a lot of time and resources on load testing tend to avoid doing one unless they absolutely have to.
That usually means they don’t run a load test until right before launch, when the time to fix anything is gone. Not to mention that fixing problems at that point is very expensive.
Here’s where the 80/20 rule can make all the difference.
If you don’t aim to simulate reality perfectly, the configuration becomes much simpler: you put in 20% of the effort instead of the full 100%.
Your results might not be the exact number of users production can handle, but they will tell you whether performance is lacking and identify your performance bottlenecks.
And because this kind of test is easier to configure and run, you can run many more tests throughout development and discover a lot more than you would from a single test.
Running more tests early in the process also enables you to identify problems and performance issues sooner, which means less money spent and a shorter development phase.
These all sound like good ideas, but one thing you can’t overlook is how to put them into practice.
How exactly do you cut away the complicated elements of your load testing variables in every situation? A good approach is to use a simpler model of user behavior when designing your load test.
The simulated user might be more static than a real one: a set sequence of pages, with no randomization.
Or, perhaps, you could have simulated users request a randomly selected page on your website, ignoring the fact that some pages are visited more often than others.
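Both simplified user models are easy to express in a few lines. Here is a minimal sketch in Python; the page list and function names are hypothetical, and a real test would pull the pages from your own site:

```python
import random

# Hypothetical page list for an example site; in practice these would
# come from your own sitemap or access logs.
PAGES = ["/", "/products", "/products/42", "/cart", "/checkout"]

def fixed_user_session():
    """The deliberately static model: every simulated user walks
    the exact same short sequence of pages, no randomization."""
    return ["/", "/products", "/products/42", "/cart"]

def random_user_session(n_pages=4, seed=None):
    """The random model: pick pages uniformly at random, knowingly
    ignoring that real traffic favors some pages over others."""
    rng = random.Random(seed)
    return [rng.choice(PAGES) for _ in range(n_pages)]
```

Either function can feed a load generator that fetches each page in the returned list; the point is that neither model tries to mimic a real visitor exactly.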
If you’ve ever tested an API-driven application such as a mobile app, you may find it more helpful not to focus too much attention on user flows and to focus instead on API endpoints, individually or as a mix.
If you can work out how many calls per second API endpoint X will receive with 1,000 users, you can run a test that generates exactly that number of requests per second and see how the backend deals with it.
This will reveal where the bottlenecks are in your system, and you can do the same for each individual endpoint. (Exercising many endpoints in the same test is something you could do as well.)
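A fixed-rate, single-endpoint test of this kind is simple enough to sketch directly. The following Python is a minimal, assumed implementation: the actual HTTP call (`send_request`) is injected so the pacing logic stays independent of whichever HTTP library you use, and the rate and duration are example values:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def hammer_endpoint(send_request, rate_per_sec, duration_sec):
    """Fire `send_request` at a fixed rate against one endpoint.

    `send_request` is whatever performs the actual HTTP call, e.g. a
    lambda wrapping requests.get("https://example.com/api/x") (URL is
    hypothetical). Returns the list of responses in send order.
    """
    interval = 1.0 / rate_per_sec
    with ThreadPoolExecutor(max_workers=16) as pool:
        futures = []
        start = time.monotonic()
        for i in range(int(rate_per_sec * duration_sec)):
            # Sleep until this request's scheduled send time, so the
            # rate stays steady even if earlier requests were slow.
            delay = (start + i * interval) - time.monotonic()
            if delay > 0:
                time.sleep(delay)
            futures.append(pool.submit(send_request))
        return [f.result() for f in futures]
```

Because requests are scheduled by wall-clock time rather than back-to-back, the backend sees a steady arrival rate, which is closer to what 1,000 real users would generate than a tight request loop.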
Usually, you won’t learn much more by properly simulating user flows and complicating the load test configuration. The simplest test you can run is hammering a single API endpoint at a time.
It’s a cinch to configure, and if you run that test again and again for every endpoint during the application’s early development phase, you’ll be in good shape, performance-wise, by the release date.
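Repeating that single-endpoint test across every endpoint is just a loop. This sketch assumes a hypothetical endpoint list and a test runner that returns some summary metric (such as median latency) per endpoint:

```python
# Hypothetical endpoint list; a real one would mirror your API surface.
ENDPOINTS = ["/api/users", "/api/orders", "/api/search"]

def sweep(endpoints, run_single_endpoint_test):
    """Hammer one endpoint at a time and collect each result, so a
    regression shows up against a specific endpoint rather than being
    averaged away in an aggregate number."""
    return {ep: run_single_endpoint_test(ep) for ep in endpoints}
```

Run this sweep on every build during early development and you get a per-endpoint performance baseline long before any full-scale realistic test.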
Complicated tests have their place too. But running simple stress tests often is probably the best starting point before you move on to the more expensive tests that simulate reality.
Here is a quick list of the benefits of simple tests.
- Simple tests need less time and energy
- Running more tests is possible due to the simplicity
- More tests enable further opportunities to improve your test setup
- Frequent and smaller tests enable you to run tests earlier in development, catching issues sooner and decreasing the risk of costly problems and prolonged delivery dates
- You’ll also learn what types of traffic put unwanted stress on your backend and what types don’t
- You’ll discover how your backend responds and get a solid idea of what to look for, what logging functionality to rely on, and how to get the most out of your testing
This leaves you better prepared for bigger and more expensive tests. The larger test will feel small after the smaller ones.
Good Is Better Than Perfect
Realism has value, but don’t lose sight of its downsides.
Aiming only for a perfect simulation can mean missing the core issues. Start small, then eventually move to larger-scale testing.
This doesn’t mean problems won’t come up later in development even if you run smaller tests. That said, smaller testing can help you avoid last-minute issues.
If you can get 80% of the results you need with 20% of the effort, you’ll have more time to deal with other projects.
For more information, contact us here.