Using Historical Data to Improve Quality through Analytics
Tuesday, June 28, 2016
Bill Hayden
As user expectations escalate and development and testing costs continue to climb, organizations are seeking additional ways to gain insight earlier and improve product quality. One contributor to this effort is data analytics and visualization.
I am not referring to the test data extraction and analysis that is already a vital component of the testing process. A staggering amount of data from other sources can now be leveraged to improve software outcomes. One of the richest sources is historical failure data extracted from in-house databases - repositories often so large that they have traditionally not been leveraged very effectively. This information can provide powerful input to the quality feedback loop and help teams work together more productively.
The Value of Historical Databases
Historical information is already being used for analytics of all types in many places. For example, cities are using historical traffic and incident information to predict where and when accidents may occur. They then stage more response units in those areas at the appropriate times. For software improvement, historical data can be used to predict problems and reduce the incidence of trouble.
Fault Reduction
I have worked directly with teams that use their historical defect data to predict where and when defects are most likely to be introduced. The point isn't to predict what the defects will be - that information changes from one release to the next.
Rather, the benefit comes from pinpointing where and when defects are likely to appear. For example, assume an organization has 10 years of fault data in its database. By analyzing that history, the firm could detect patterns of failure - specific instances where introducing new code to the codebase, or running particular test cases, has tended to cause failures.
Numerous studies indicate that tests with higher historical fault detection rates are more likely to fail in the current version, as well. With the help of more powerful data analytics, organizations can now identify those instances and take steps to address them proactively.
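As a minimal sketch of how such prioritization might work - the test names, data, and layout below are hypothetical, not drawn from any particular tool - the following Python snippet ranks test cases by their historical failure rate so the most failure-prone tests run first:

```python
from collections import Counter

# Hypothetical historical results: (test_name, passed) pairs pulled
# from an in-house test results database.
history = [
    ("test_checkout", False), ("test_checkout", False), ("test_checkout", True),
    ("test_login", True), ("test_login", True), ("test_login", False),
    ("test_search", True), ("test_search", True), ("test_search", True),
]

runs = Counter(name for name, _ in history)
failures = Counter(name for name, passed in history if not passed)

# Rank tests by historical failure rate: historically failure-prone
# tests are more likely to fail again, so schedule them first.
failure_rate = {name: failures[name] / runs[name] for name in runs}
for name in sorted(runs, key=failure_rate.get, reverse=True):
    print(f"{name}: {failure_rate[name]:.0%} historical failure rate")
```

In practice, the history would come from the organization's own defect or test-results database rather than an in-line list, and the rate could be weighted toward recent releases.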
Companies can also identify breakdowns that occur only periodically. Consider performance testing: many companies already evaluate fault data from one release to the next, especially when a new service is introduced to the transaction chain. Historical analysis, however, provides perspective that such a short timeframe cannot reveal.
With historical analysis, teams might detect that a particular service call causes problems every year. Such a problem might never surface in release-to-release comparisons if it is resolved before the next update ships. Identifying the recurrence through historical analysis would let the organization explore and identify the cause - which could be something external, such as a scheduled update event by the service provider - and then rearrange the release schedule to avoid its impact.
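One simple way to surface such recurring breakdowns - assuming fault records carry timestamps; the service names and dates here are invented - is to bucket historical failures by calendar month and flag services that fail in the same month across several years:

```python
from collections import defaultdict
from datetime import date

# Hypothetical fault log: (service_name, failure_date) records
# extracted from an in-house defect database.
faults = [
    ("billing-api", date(2013, 12, 2)),
    ("billing-api", date(2014, 12, 9)),
    ("billing-api", date(2015, 12, 1)),
    ("search-api", date(2014, 3, 14)),
    ("search-api", date(2015, 7, 21)),
]

# Map each service to the distinct years it failed in, per calendar month.
years_by_month = defaultdict(lambda: defaultdict(set))
for service, day in faults:
    years_by_month[service][day.month].add(day.year)

# Flag any service that fails in the same month in three or more years -
# a hint of an external, scheduled cause such as a provider update.
for service, months in years_by_month.items():
    for month, years in months.items():
        if len(years) >= 3:
            print(f"{service} fails every year around month {month}: {sorted(years)}")
```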
Test Execution History
Organizations can glean valuable information by analyzing test execution history as well. This information is especially valuable in continuous integration environments, where new or changed code is frequently integrated with the main codebase. In these scenarios, the absence of coverage data makes test validation and prioritization more difficult.
Test execution history records whether a test passed or failed (that is, whether a fault was detected). With traditional processes, a test case in the current release would be considered effective if the same test also failed in previous releases. For newly integrated code, however, there may be no history-based pass/fail metric to draw on.
Organizations that analyze their historical test execution data could identify sufficiently similar existing test cases and use their histories for comparison with the new test case instead. To do this, the team would collect test case execution traces - the sequence of method calls (a method being a set of statements that can be invoked through another statement) made during each test run. Two or more test cases with very similar method call sequences - even ones that appear on their face to be very different - can be effectively used for evaluating pass/fail metrics for test case validation and prioritization, as the sketch below illustrates.
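A plausible way to implement that comparison - the traces and the 0.8 threshold below are illustrative assumptions, not a prescribed method - is to score how similar two tests' method call sequences are using a standard sequence-similarity ratio:

```python
from difflib import SequenceMatcher

# Hypothetical execution traces: the ordered method calls recorded
# while each test case ran.
traces = {
    "test_new_cart_flow": ["init", "addItem", "applyDiscount", "checkout"],
    "test_legacy_cart": ["init", "addItem", "applyDiscount", "checkout", "logout"],
    "test_user_profile": ["init", "loadProfile", "saveProfile"],
}

def trace_similarity(a, b):
    """Ratio of matching call subsequences; 1.0 means identical traces."""
    return SequenceMatcher(None, a, b).ratio()

# Compare a new test (with no pass/fail history) against older tests.
new_test = "test_new_cart_flow"
for name, trace in traces.items():
    if name == new_test:
        continue
    score = trace_similarity(traces[new_test], trace)
    # Above the (illustrative) 0.8 threshold, borrow the older test's
    # historical pass/fail record to validate and prioritize the new one.
    verdict = "reuse its history" if score >= 0.8 else "too dissimilar"
    print(f"{name}: similarity {score:.2f} -> {verdict}")
```

Here the legacy cart test's trace overlaps heavily with the new test's, so its pass/fail history can stand in for the missing metrics, while the profile test is ignored.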
Putting the Data to Use
Numerous vendors have debuted entire ecosystems devoted to big data - software platforms for extraction, analytics, and visualization, as well as hardware, governance tools, and more. (One that is currently creating a buzz in Orasi circles is the release of SAP HANA, which will in part enable big data analytics and visualization for SAP users. That's a game-changer for firms that rely on this enterprise resource planning platform, given the amount of data such systems generate.)
These offerings have traditionally not been (and in many cases, are still not) designed for use in software development and testing. I am confident this will soon change. In the meantime, proactive software developers are making productive use of data by creating their own in-house systems (and mining various internal databases).
Organizations such as Orasi, which specialize in software quality assurance, are helping enterprises build and deploy quality frameworks that can leverage tools to better analyze and visualize their data.
We are at the very beginning of the journey to use data analytics - not only historical data but also real-time information - to learn from our mistakes and improve software across the board. The full potential can scarcely be imagined at present, because these technologies aren't merely enabling the transfer of data from one ecosystem to the next. They are also helping companies translate that data into knowledge - and wisdom.
For organizations with the corporate culture, team attitudes, and development and testing processes to accept, evaluate, and address that knowledge promptly, the future looks very, very bright.
Read more: http://orasi.com
This content is made possible by a guest author, or sponsor; it is not written by and does not necessarily reflect the views of App Developer Magazine's editorial staff.