Posted Friday, October 02, 2015 by JOE SCHULZ, Contributor
READ MORE: http://www.orasi.com/...
With both mobile device usage and user expectations continuing to escalate, application performance is becoming more important than ever. At the same time, the complexity of the devices themselves, paired with rising security precautions that are locking users out of public Wi-Fi networks, is making it more challenging for developers to ensure consistent performance under all conditions.
Yet, as a specialist in mobile application development and quality assurance, I am continually surprised at the number of organizations that are not fully embracing the potential of application performance testing. Admittedly, the crunch conditions common with today's release cycles can make comprehensive performance testing a challenge.
Nevertheless, current tools and approaches can greatly expedite and streamline best-practice performance testing. Embracing the recommendations I will present here is a good place to start.
Users Take Control
Users are firmly in the driver's seat now, which surprises no one. However, the speed with which both usage and expectations are increasing has taken many aback. Consider these statistics, which illustrate the degree to which mobile usage is impacting our world - and the bottom lines of developing entities:
Recent research underscores the trend towards "smartphone dependence."
- 51% of users now rely on their smartphones as a primary source for consuming digital media (report - Kleiner Perkins Caufield & Byers; 2015), with desktop PCs nearly 10 percentage points behind.
- 44% of users report they have difficulty performing various important tasks without their smartphones (Pew Research Center; 2015).
An HP-sponsored report from Dimensional Research (2015) confirmed that user expectations remain high.
- 61% of users expect apps to start in four seconds or less.
- 49% expect apps to respond in two seconds or less.
- 80% will attempt to use a problematic app three times or less.
- 37% stated mobile app crashes or errors make them think less of a company’s brand.
Most importantly for our discussion, perhaps, 55 percent of users hold apps responsible for performance issues. So important is performance, per the Dimensional Research report, that one of its key findings stated, "The key to loyal customers [with] mobile apps is directly related to mobile app performance, stability, and resource consumption."
Top Considerations for Performance Testing Mobile Applications
Although the considerations for mobile app performance could fill volumes, for this article I'll outline some of the top "best practices" for mobile performance testing. Adopting some (and preferably all) of these will help testers take big leaps towards high-quality, well-performing apps. Some recommendations also simplify testing complexity and reduce time and resource consumption, which is always a benefit when teams ask decision-makers for more funding.
Evaluate network quality across a variety of providers
Latency tends to be higher on mobile networks than on wired or wireless ones. Connection quality is more unpredictable, and connections come and go. Furthermore, users may be roaming onto third-party networks, resulting in significantly reduced network speed. The use of virtualization or third-party network emulators can help facilitate testing of these varied network conditions. They also help reduce testing expense by eliminating the need for costly carrier contracts.
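To make this concrete, here is a minimal sketch of how varied network conditions can be modeled in a test harness. The profile names and latency/bandwidth numbers are illustrative assumptions, not carrier measurements; a real setup would feed these into a network emulator.

```python
# Hypothetical network profiles (illustrative numbers, not real carrier data).
NETWORK_PROFILES = {
    "4g":      {"latency_ms": 50,  "kbps": 12000},
    "3g":      {"latency_ms": 120, "kbps": 2000},
    "roaming": {"latency_ms": 300, "kbps": 500},   # degraded third-party network
}

def simulated_response_ms(profile_name, payload_kb, jitter_ms=0.0):
    """Estimate round-trip time for a payload under a given network profile."""
    p = NETWORK_PROFILES[profile_name]
    transfer_ms = payload_kb * 8 / p["kbps"] * 1000  # kilobytes -> ms at given bandwidth
    return p["latency_ms"] + transfer_ms + jitter_ms

def meets_sla(profile_name, payload_kb, sla_ms=2000):
    """Check the estimate against a response-time budget (e.g. the two-second figure cited earlier)."""
    return simulated_response_ms(profile_name, payload_kb) <= sla_ms
```

Running the same scenario against each profile quickly shows which payloads blow the budget only under roaming conditions.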
Users in different regions of the country - and with different terrain conditions - will experience varied response times and affect your back-end infrastructure differently. It is vital to test for a variety of geographies, which can be accomplished through crowd testing, cloud-based devices, and other means.
Test for low-capacity conditions
Test low-memory, out-of-storage, and low-battery conditions. They occur more frequently than many testers assume.
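One practical way to make sure no constrained-resource combination is skipped is to generate the full test matrix up front. The state names below are illustrative placeholders; substitute whatever your tooling uses to induce each condition.

```python
from itertools import product

# Hypothetical device-state dimensions to cover; names are illustrative.
MEMORY_STATES  = ["normal", "low_memory"]
STORAGE_STATES = ["normal", "out_of_storage"]
BATTERY_STATES = ["normal", "low_battery"]

def capacity_test_matrix():
    """Enumerate every combination of constrained-resource states to run each scenario under."""
    return [
        {"memory": m, "storage": s, "battery": b}
        for m, s, b in product(MEMORY_STATES, STORAGE_STATES, BATTERY_STATES)
    ]
```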
Test entire mobile product families
Performance and available resources vary considerably within product families. For Apple's iOS, testers should consider multiple iOS versions as well as hardware from the iPhone 4 through the iPhone 6s. Android is even more fragmented, with at least four models in the Samsung family alone - S3, S4, S5, and S6 - still widely in use. Tests should be designed to identify the minimum acceptable response for each OS and device. Gaining access to and testing on a wide array of models is both expensive and cumbersome, so device emulation is especially valuable here, as well.
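Capturing a "minimum acceptable response for each device" can be as simple as a per-device threshold table that the test run is checked against. The device names and millisecond values here are assumed for illustration; in practice they would come from baseline measurements.

```python
# Hypothetical minimum acceptable response times (ms) per device model,
# reflecting the idea that older hardware gets a more generous budget.
DEVICE_THRESHOLDS_MS = {
    "iphone_4": 4000, "iphone_5": 3000, "iphone_6s": 2000,
    "galaxy_s3": 4000, "galaxy_s4": 3500, "galaxy_s5": 3000, "galaxy_s6": 2000,
}

def check_device_results(measured_ms, default_ms=2000):
    """Return the sorted list of devices whose measured response exceeds their own threshold."""
    return sorted(
        device for device, ms in measured_ms.items()
        if ms > DEVICE_THRESHOLDS_MS.get(device, default_ms)
    )
```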
Don’t script at the device level
Using emulators from manufacturers, or device-specific scripting languages, will result in endless branches of the same tests, resulting in more script development and maintenance. Instead, use third-party tools with a “Script Once, Run Many” model so that you only have one script to maintain for each use case. Although it’s impossible to have truly device-agnostic scripts because of the inherent differences in devices and operating systems, the goal is to find a scripting environment where as many characteristics as possible are abstracted.
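The "Script Once, Run Many" idea boils down to writing the use-case script against an abstract action layer and letting per-platform drivers translate it. The driver classes below are stand-ins for whatever automation backend you actually use (an Appium-style tool, for example), sketched here only to show the abstraction.

```python
# Minimal "script once" abstraction: one test script calls logical actions,
# and each platform driver translates them into platform-specific commands.
# These driver classes are illustrative stand-ins, not a real automation API.

class AndroidDriver:
    def tap(self, element_id):
        return f"android: tap resource-id={element_id}"

class IOSDriver:
    def tap(self, element_id):
        return f"ios: tap accessibility-id={element_id}"

def login_script(driver):
    """One use case, one script - runnable against any driver that exposes tap()."""
    return [driver.tap("username"), driver.tap("password"), driver.tap("submit")]
```

The payoff is that adding a platform means adding a driver, not forking every script.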
Don’t test for “desktop” behavior
Mobile users not only rely upon their mobile devices more often; they also engage in more short bursts of activity than their desktop counterparts. For example, mobile transactions are usually much shorter than desktop transactions because users perform one function, immediately, while they are thinking about it. Mobile transactions also occur with extreme frequency, so the test harness has to be designed with much higher thresholds than traditional performance tests.
On the processing side, mobile transactions are also more likely to be relegated to the background in the middle of execution, so tests should include randomized waits to simulate calls or texts interrupting the flow. Overall latency must be increased to simulate different network types.
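The randomized-wait idea can be sketched as a small planner that decides, per step, whether a background interruption (an incoming call or text) pauses the flow. The probability and pause range are assumed values to be tuned to your traffic model.

```python
import random

def interruptible_transaction(steps, interrupt_chance=0.3, max_pause_s=5.0, seed=None):
    """Plan a transaction where each step may be preceded by a random background pause,
    simulating a call or text interrupting the flow. Returns (step, pause_seconds) pairs."""
    rng = random.Random(seed)  # seedable so a flaky run can be replayed exactly
    plan = []
    for step in steps:
        pause = rng.uniform(1.0, max_pause_s) if rng.random() < interrupt_chance else 0.0
        plan.append((step, round(pause, 2)))
    return plan
```

Seeding the generator keeps the randomness reproducible, so a failure under a particular interruption pattern can be replayed.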
Use emulation in performance tests
With some 200+ mobile device models in the marketplace, it is impractical to manually test on even half of them. Many companies pick only the top 20 or even 10 models for testing, which is simply not a sound strategy. Device emulation with automated testing, in particular, is a cost-effective way to test a sufficiently broad array of models to accurately reflect the user landscape. It can also reduce testing costs and boost quality compared with manual testing at the user experience level.
Always test end-to-end timings – from device processing to back-end server response
There are so many different performance components to measure, and a good test not only identifies the defect but also points to the solution. So, without testing end-to-end and isolating the component timings, the test does nothing more than raise a flag.
To be at its most helpful, a test should also point out which performance component is out of line with comparable tests - or missed a designated threshold.
At a minimum, end-to-end testing should measure all of the following:
- Content request time
- Content delivery time
- Render time
- User gesture response
- General UI responsiveness
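The component checklist above lends itself to a simple diagnostic: compare each measured timing to its own budget, so a failed end-to-end total points at the component that is out of line rather than merely raising a flag. The budget numbers below are assumptions for illustration.

```python
# Hypothetical per-component budgets (ms), mirroring the end-to-end checklist above.
BUDGETS_MS = {
    "content_request": 500,
    "content_delivery": 1500,
    "render": 1000,
    "gesture_response": 100,
    "ui_responsiveness": 200,
}

def diagnose(timings_ms):
    """Compare measured component timings to their budgets, flagging each overrun."""
    return {
        component: {"measured": t, "budget": BUDGETS_MS[component], "over": t > BUDGETS_MS[component]}
        for component, t in timings_ms.items() if component in BUDGETS_MS
    }

def slow_components(timings_ms):
    """Name the component(s) responsible, not just the fact that the total missed its target."""
    return sorted(c for c, result in diagnose(timings_ms).items() if result["over"])
```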
The "ubiquitization" of mobile devices – and especially the smartphone - should be a wakeup call to developers that robust performance testing is no longer an option. The reality is that many users now spend the majority of their free hours using their smartphones. The instant nature of the Internet is also shortening everyone's patience and raising their expectations.
Finally, thanks to social media, user dissatisfaction has become contagious and can go viral. No one wants to be the focus of #appfail tweets, which can literally damage the brand or corporate reputation. App developers and testers are caught in the crosshairs of this barrage of challenges, and it's an uncomfortable and unavoidable place to be. Taking proactive steps to assure maximum performance is the best way to avoid becoming a casualty.