Posted 6/15/2015 9:01:36 AM by KAUSHAL AMIN, Chief Technology Officer for KMS Technology
Far too many companies still don't test web apps for performance, i.e., response time under load. Time and budget constraints are typically cited to justify putting performance testing off.
“We’ll get to it later” is the common stance. A few months later, as the user base swells, performance issues start to pop up in some of the most business-critical areas. End users begin to uncover issues, frustration with the product grows, and developers are left scrambling for a quick fix instead of addressing the root causes.
It's a destructive cycle, but an avoidable one. As with other types of testing, the earlier you find a problem, the cheaper it is to fix. The trouble with skipping performance testing is that you can reach a point down the line where the only permanent fix is a major rewrite of the software.
For many companies a major rewrite is too costly and takes too long. The quick temporary fix may be to add more hardware or apply a tweak or hack for a short-term performance boost, but that's not a long-lasting solution. A better approach is to understand the problem areas fully and address them in order of business importance.
Here are three steps to help you do exactly that.
1. Test and identify slow performing areas
It's important that you gain a clear understanding of where your problems lie, so you need to go back and do the testing that you skipped the first time around. Before you begin, define clear and measurable requirements for the expected response times. You can break this down by the following key parameters:
- Key web app screens and functionality.
- Number of concurrent users you expect the system to handle. (You may have to plan for the future here if you expect your user base and data to grow quickly.)
- Performance should be captured under four load levels: single user load, average load, peak load, and headroom load.
- Data growth expected over time. Growth in data can change application behavior, such as page refresh times and the latency of common operations. It is worth testing against the data volume you expect five to ten years from now.
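One way to make these requirements concrete is to capture them as data so each test run can be checked against them automatically. The sketch below is a hypothetical illustration: the screen names, thresholds, and user counts are placeholder assumptions, not recommendations, and should be replaced with your own targets.

```python
# Hypothetical requirements table: screens/operations mapped to the maximum
# acceptable response time in seconds. All numbers are illustrative only.
REQUIREMENTS = {
    "login": 1.0,
    "search": 2.0,
    "checkout": 3.0,
}

# The four load levels described above, expressed as concurrent-user counts
# (again, placeholder figures -- size these for your own user base).
LOAD_LEVELS = {
    "single_user": 1,
    "average_load": 100,
    "peak_load": 500,
    "headroom_load": 1000,  # beyond peak, to find the breaking point
}

def check_results(measured):
    """Compare measured response times against the requirements.

    Returns a list of (operation, measured_seconds, limit_seconds) tuples
    for every operation that breached its requirement.
    """
    return [
        (op, t, REQUIREMENTS[op])
        for op, t in measured.items()
        if op in REQUIREMENTS and t > REQUIREMENTS[op]
    ]
```

With requirements expressed this way, a failing run produces an explicit list of breaches (for example, `check_results({"search": 2.5})` flags search as over its 2.0-second limit), which is easier to act on than eyeballing raw timings.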
Now that we have identified testing criteria and requirements, we are ready to start testing.
There are several open source tools on the market, including JMeter, SoapUI, and Selenium, as well as many commercial alternatives if budget allows. Building and executing automation scripts that simulate many users across various operations requires QA automation engineers with deep technical skills.
You will also need to create a dataset of the data volume you expect in the future. Set up an isolated test system that matches production as closely as possible, then execute the test cases and measure response times. Expect plenty of surprises: some operations will fail to scale even with a small number of users, while others degrade only under heavier load.
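Tools like JMeter do this at scale, but the core idea of simulating concurrent users and collecting response-time statistics can be sketched in a few lines. In the hypothetical example below, `fake_request` is a stand-in stub (it just sleeps); in a real test you would replace it with an HTTP call against your staging environment.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real HTTP request (e.g. fetching a key screen).

    This stub just sleeps to simulate server work; swap in a real call
    against an isolated test system in practice.
    """
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server processing time
    return time.perf_counter() - start

def run_load_test(concurrent_users, requests_per_user):
    """Fire requests from N simulated users and summarize response times."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [
            pool.submit(fake_request)
            for _ in range(concurrent_users * requests_per_user)
        ]
        times = sorted(f.result() for f in futures)
    return {
        "count": len(times),
        "median_s": statistics.median(times),
        "p95_s": times[int(len(times) * 0.95) - 1],
        "max_s": times[-1],
    }

stats = run_load_test(concurrent_users=10, requests_per_user=5)
```

Reporting percentiles (median, 95th) rather than averages matters here: a healthy average can hide a long tail of slow responses that your heaviest users will hit.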
2. Prioritize findings
Consider what the key functionality is. What are the things that you can't afford to have issues with? Get the business perspective on this. Your business stakeholders and product owners should identify what the critical functions are, not the developers or testers. Begin to establish a priority list. Fixing search functionality may be the first place to start in triaging.
Make sure you work with the business to prioritize a list of findings so that you spend your limited R&D budget in the right place. In many cases, developers will go after what is quick to fix but that may not be a priority for the business.
3. Triage and fix the problems
It's time to systematically figure out the cause of each issue on your priority list. Broken down by layer, web app performance issues usually fall into one of three tiers: the presentation tier, the business tier, or the data access tier (including the database).
If your application performs most of its operations against a database, you will find that the majority of performance issues trace back to poorly written SQL queries, a lack of database tuning, or, in many cases, both.
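A classic example of this class of problem is a filtered query with no supporting index, which forces a full table scan. The sketch below uses SQLite (chosen only because it is self-contained; the same principle applies to any RDBMS) and its `EXPLAIN QUERY PLAN` statement to show the query plan changing once an index exists. Table and index names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [(f"cust{i % 100}", i * 1.5) for i in range(10_000)],
)

# Without an index on `customer`, this filter scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("cust42",)
).fetchall()

# With an index, the planner can seek directly to the matching rows.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("cust42",)
).fetchall()

print(plan_before)  # plan typically reports a SCAN of the table
print(plan_after)   # plan typically reports a SEARCH using the index
```

On a 10,000-row table the difference is modest, but against the five-to-ten-year data volumes discussed earlier, a scan-versus-seek difference is often exactly the kind of issue that only surfaces once data grows.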
So how do you figure out where the problem lies given so many possibilities? Tracking down issues in a system like this used to be a time-consuming manual process: you would sprinkle debug statements through the code and learn from the timing statistics where it was spending most of its time. Investigations could take weeks in some cases as you pored over log files trying to find the time sink.
But now there are many excellent profiling and tracing tools available, and the right one will help you find where the core problem resides. For the presentation tier, try Firebug, Safari Developer Tools, or Google Speed Tracer. For the business tier, the choice depends on your language; JProfiler and Visual Studio's profiler are two very good tools. Most RDBMSs also ship profiling and monitoring tools that help identify slow-running SQL queries (or even stored procedures) when those are a main cause of poor performance. I suggest getting a DBA involved once you narrow the problem down to the database or a specific SQL query.
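To make the profiling step concrete, here is a minimal sketch using Python's standard-library `cProfile` and `pstats` modules (the article's tools, such as JProfiler, give the same kind of per-function time breakdown for other languages). The two functions are deliberately contrived stand-ins for a fast and a slow code path.

```python
import cProfile
import io
import pstats

def slow_operation():
    # Deliberately inefficient: repeated string concatenation.
    s = ""
    for i in range(20_000):
        s += str(i)
    return s

def fast_operation():
    # Idiomatic alternative: build once with join().
    return "".join(str(i) for i in range(20_000))

profiler = cProfile.Profile()
profiler.enable()
slow_operation()
fast_operation()
profiler.disable()

# Sort the report by cumulative time so the biggest time sinks appear first.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
report = buf.getvalue()
print(report)
```

The report lists each function with its call count and cumulative time, which replaces the old approach of scattering debug statements: the profiler tells you directly which functions dominate the run.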
Take a look at the query handling in the database. Even on the browser side you'll find extensions that show where the time is being spent. You'll often find that the issue is not as isolated as you thought; it may be system-wide, or a deeper architectural problem that requires a major rewrite.
Learn from the experience. Next time around, make sure that you use profiling tools proactively during development, and draw up a comprehensive performance testing plan. If you adopt these steps from the outset you'll have a web app that performs as it should, not just on day one, but as the user base grows.
Read More http://www.kms-technology.com/...