One of the challenges for companies developing mobile applications is ensuring the app offers consistent performance across the entire transaction lifecycle (app server, networks and device). Failure to do so can leave users deeply disappointed.
Equation Research conducted an online study about mobile app usage, expectations and experiences. Of more than 3,500 smartphone users polled, 84% said they consider mobile app performance to be somewhat or very important, and 79% indicated they would retry a mobile app only once or twice if it failed to work the first time.
Perhaps most ominous for app developers, 38% indicated that dissatisfaction with an app would cause them to switch to a competitor's app, and 31% said they would tell others about their poor experience. Admittedly, apps can launch slowly, freeze or crash for many reasons. However, even the most beautifully coded app stands little chance of operating properly if there are problems with connectivity or throughput somewhere in the transaction chain.
App Server Tests Aren’t Enough
The first and most basic level of testing is to verify load capacity at the app server level. In my experience, most developers use one or more simulation and load generation tools to test how well the app server handles various loads of concurrent users. (If you're not doing even this level of testing, I strongly urge you to start, right away.)
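As a starting point, a concurrent-user load test can be sketched in a few lines. This is a minimal illustration, not a substitute for a real load-generation tool: the hit_endpoint function here is a self-contained stand-in (it sleeps instead of issuing a real HTTP request to your app server), and the latency figures it produces are placeholders.

```python
import concurrent.futures
import time

def hit_endpoint(session_id):
    """Stand-in for one simulated user transaction.

    In a real test this would issue a request to the app server;
    here we time a placeholder call so the sketch is self-contained."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for network + server processing time
    return session_id, time.perf_counter() - start

def run_load_test(concurrent_users=50):
    """Fire `concurrent_users` simulated transactions at once and
    report simple latency aggregates."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(hit_endpoint, range(concurrent_users)))
    latencies = [elapsed for _, elapsed in results]
    return {
        "users": concurrent_users,
        "avg_latency": sum(latencies) / len(latencies),
        "max_latency": max(latencies),
    }

print(run_load_test(20))
```

Running the same harness at increasing user counts, and watching how the latency aggregates degrade, is the essence of what commercial load-generation tools automate at much larger scale.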
While app server load tests are a critical first step in fine-tuning the application and the internal architecture, many developers assume that great response from the app server equates to a great experience for the user. In reality, although the app server may respond correctly, if latency is excessive at any other step in the connection, the effect on the user may be a lockup or a crash—resulting in user dissatisfaction or abandonment.
Without testing all the load variables that are unique to mobile, you're like an automotive manufacturer that only tests the responsiveness of its vehicles on smooth pavement in perfect weather conditions. Variables for which you should test as completely as possible to validate the transaction lifecycle include the following.
Operating Systems: Load different operating systems (and different versions) into your performance testing profile. Don't forget about upcoming releases that users may adopt quickly.
Devices and Their Firmware: A user's device and its components—especially the processor—can have a huge effect on performance (and, therefore, the user experience). The firmware versions of that hardware will also affect response time, so test as many hardware and firmware variations as practical.
Networks: Networks are perhaps the biggest variable in transaction-lifecycle testing for mobile apps. When an app user moves from 4G to 3G, well-written apps readjust their parameters for the change in throughput and latency; others freeze or crash. The same is true for performance delays that can occur when data packets move from TCP/IP to cellular networks and back.
Each network provider is unique and has its own nuances, one of which is the variation in time-out tolerances. AT&T's might be 10 milliseconds and Verizon's might be 30 milliseconds (these are examples, not actual figures). Your apps should be designed to resend data in the event that latency causes a network with low time-out tolerances to reject the initial connection request.
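The resend behavior described above can be sketched as a small retry wrapper. This is an illustrative sketch only: send_fn is a hypothetical stand-in for whatever transport call the app actually makes, and it is assumed to raise TimeoutError when the carrier rejects the request.

```python
import time

def send_with_retry(send_fn, timeout_s, max_attempts=3, backoff_s=0.05):
    """Resend a request when a tight carrier time-out rejects the first attempt.

    `send_fn` is a hypothetical transport call assumed to raise
    TimeoutError on rejection."""
    for attempt in range(1, max_attempts + 1):
        try:
            return send_fn(timeout=timeout_s)
        except TimeoutError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(backoff_s * attempt)  # simple linear backoff

# Stub transport that fails once, then succeeds, mimicking a carrier
# with a low time-out tolerance rejecting the initial connection.
calls = {"n": 0}
def flaky_send(timeout):
    calls["n"] += 1
    if calls["n"] == 1:
        raise TimeoutError("simulated carrier rejection")
    return "ack"

print(send_with_retry(flaky_send, timeout_s=0.01))  # succeeds on the retry
```

A well-written app would also adjust timeout_s per network type (3G versus 4G) rather than hard-coding a single value, in line with the readjustment behavior described above.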
Does It Have to Be Real?
Earlier, I alluded to using simulation tools to generate transactions when you load test an app server. Network simulators are also a practical solution if you don't have a legion of beta testers around the globe who will test your app under various network conditions.
In these situations, make sure the network profiles you establish are sufficiently diverse—and that you carry that diversity over to your app server testing, as well. If 100 devices are interacting with your app server with a 50/50 3G-4G network mix, the duration of resource consumption—and therefore the load—will be completely different than if 90% of them are on 4G.
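The arithmetic behind that claim is easy to make concrete. In the sketch below, the per-network transaction durations are invented placeholders (a slower network holds a server connection longer), used only to show how the mix shifts total resource consumption.

```python
# Assumed seconds a server connection is held per transaction; these
# figures are illustrative placeholders, not measured values.
DURATION_S = {"3g": 2.0, "4g": 0.5}

def total_hold_time(devices, mix):
    """Total server resource-hold time for `devices` users.

    `mix` maps network name -> fraction of devices on that network."""
    return sum(devices * share * DURATION_S[net] for net, share in mix.items())

even_mix = total_hold_time(100, {"3g": 0.5, "4g": 0.5})
fast_mix = total_hold_time(100, {"3g": 0.1, "4g": 0.9})
print(even_mix, fast_mix)  # 125.0 vs 65.0 device-seconds of load
```

Under these assumed durations, the 50/50 mix imposes nearly twice the aggregate load of the 90%-4G mix, which is why the network profile must carry over into app server testing.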
However, simulators alone are not the answer. For testing operating systems, devices and firmware, I recommend app developers simulate 95% of the load with a tool and 5% with real phones where the tester can evaluate and record the actual user experience.
Of course, economics come into play as well, and the number of variations you could test for in today's market is nearly infinite. A good rule of thumb is that 80% of your users will primarily operate on 20% of devices and networks. Perform research to determine your target market's preferences and build performance-test profiles for that top 20%. If you don't have the budget to deploy in-house simulation systems, look to vendors that offer cloud-based, real-device and simulated testing in affordable time blocks.
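Selecting that top slice of profiles is a simple greedy calculation once you have market-share data. The device names and share figures below are invented placeholders; substitute your own research numbers.

```python
# Hypothetical market-share data (device, network) -> fraction of users.
shares = {
    ("Device A", "4G"): 0.35,
    ("Device B", "4G"): 0.25,
    ("Device A", "3G"): 0.15,
    ("Device C", "4G"): 0.10,
    ("Device B", "3G"): 0.08,
    ("Device D", "3G"): 0.07,
}

def top_profiles(shares, coverage=0.80):
    """Return the smallest set of profiles (largest shares first)
    whose combined share reaches `coverage`."""
    picked, covered = [], 0.0
    for profile, share in sorted(shares.items(), key=lambda kv: -kv[1]):
        picked.append(profile)
        covered += share
        if covered >= coverage:
            break
    return picked, covered

profiles, covered = top_profiles(shares)
print(len(profiles), "profiles cover", round(covered, 2), "of users")
```

With these placeholder figures, four of the six profiles already cover 85% of users, so the remaining combinations can be deprioritized or handled by a cloud testing vendor.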