Mobile Marketing Insights with Adobe Target's Drew Burns
Monday, July 20, 2015
Richard Harris
We recently visited with Drew Burns, senior product marketing manager for Adobe Target, who works to evangelize the practice of iterative testing and content targeting in the digital marketing world. The subject of our chat was how to unlock the full potential of A/B testing to help mobile developers with their marketing efforts.
ADM: What exactly is A/B testing? We know most have an idea about what it is – but what does it really look like from a developer’s perspective and why does it really matter?
Burns: A/B testing is essentially a practitioner’s best or most calculated guess (i.e., testing a hypothesis) as to what will improve a visitor’s experience in an app. There are three fundamental areas to focus on from an app perspective:
1) Improving the essential function of the mobile app. The app plays a key role for the customer who downloads it, usually related to customer loyalty (e.g., checking their account or status, getting relevant information on a product, or making a quick purchase or upgrade). Developers should ask, “Does this functionality provide the best ease of use and relevancy for the customer relative to this core function?”
2) Are you positioning and targeting additional app offerings and marketing offers most effectively for your key audience segments within the app?
3) Are you making the best use of contextual, location-based targeting and the rules that govern both in-app and push notifications, based on proximity or iBeacon technology?
A/B testing can provide real-time and ongoing answers to these questions as the mobile app experience changes over time.
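At its simplest, the mechanics on the developer's side come down to stable variant assignment plus conversion tracking. The Swift sketch below is a minimal illustration of that idea, not an Adobe Target API: the experiment name, user ID, and variants are placeholders, and a stable FNV-1a hash keeps each user in the same variant across sessions.

```swift
import Foundation

// Minimal A/B bucketing sketch (illustrative only, not Adobe Target's SDK).
// A stable FNV-1a hash of experiment + user ID keeps each user in the same
// variant across sessions, and lets different experiments split independently.
enum Variant: String {
    case a = "A"   // control, e.g. the current onboarding screen
    case b = "B"   // challenger, e.g. a simplified onboarding screen
}

func fnv1a(_ text: String) -> UInt64 {
    var hash: UInt64 = 0xcbf29ce484222325
    for byte in text.utf8 {
        hash ^= UInt64(byte)
        hash = hash &* 0x100000001b3
    }
    return hash
}

func variant(userID: String, experiment: String) -> Variant {
    return fnv1a("\(experiment):\(userID)") % 2 == 0 ? .a : .b
}

// Usage: assign once, render the matching experience, then log the exposure
// and any later conversion event to your analytics backend.
let assigned = variant(userID: "user-1234", experiment: "onboarding-v2")
print("Show onboarding variant \(assigned.rawValue)")
```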
ADM: In A/B testing, what is the most important change to start with, that differentiates A from B?
Burns: Great question, and one we receive frequently. This requires ongoing assessment, based on which success metrics matter most to the business, the test results and insights you are achieving, and a cost-benefit analysis of execution. This is what we call ongoing test roadmap assessment, and it is something our consultants work on very closely with our practitioners.
In the beginning, they focus on high-value locations; these are most often acquisition points in the web and mobile site space, such as the pop-up or smart banner that prompts the user to download the app, the onboarding process where the user initially registers within the app, the home screen they engage with on an ongoing basis, and the conversion point (a critical point where you have a very valuable, captive audience that is ready to convert).
We then apply a strategic testing methodology to evaluate what to test, related to the assumptions we’ve made (either based on analytics data or intuition around our customers) in how we present and target our content. First, is the presence of this content in this location necessary? Next, does it serve its core function effectively for all audiences? Is its design and presentation optimal for key audiences? And is its message most effective for different audiences (i.e., is targeting effective and do we have the most effective rules in place for how and when our key audiences are targeted?).
ADM: Most developers have limited budgets, so when it comes to marketing, the shotgun approach is usually what sticks (one creative, one landing page, take it or leave it). What is the easiest change to make in the beginning so developers can more easily adopt A/B style marketing later?
Burns: Quick wins can come from hiding or removing extraneous content or elements. We often see big improvements from decluttering the design and making key functionality more prominent. Also, beginning to target messaging to more obvious key audience segments can produce massive lift if you haven’t done this before (e.g., targeting discount pricing offers to audience segments driven by Sales). Basic location-based targeting related to proximity can also lead to big wins out of the gate, by making audience segments aware of what is easily accessible in their surroundings.
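To make the proximity idea concrete, here is a rough Swift sketch using Apple's CoreLocation iBeacon region monitoring (iOS 13+). The beacon UUID, region identifier, and the offer hook are placeholder assumptions; in practice the targeting rules and content would come from your testing and targeting solution.

```swift
import CoreLocation

// Rough sketch: trigger a targeted offer when the user comes into range of a
// store-entrance beacon. The UUID, identifier, and showNearbyStoreOffer() are
// placeholders; background monitoring would also require Always authorization.
final class ProximityOfferTrigger: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
    }

    func startMonitoring() {
        // Placeholder UUID shared by the store-entrance beacons.
        guard let uuid = UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0") else { return }
        let region = CLBeaconRegion(uuid: uuid, identifier: "store-entrance")
        region.notifyOnEntry = true
        manager.startMonitoring(for: region)
    }

    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        guard region.identifier == "store-entrance" else { return }
        showNearbyStoreOffer()
    }

    private func showNearbyStoreOffer() {
        // Hypothetical hook: surface the in-store pickup offer or schedule
        // a local notification for this audience segment.
        print("User is near a store: show the in-store pickup offer")
    }
}
```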
ADM: How long is too long? Time is money, and in today’s marketing, clicks and impressions are money. When A/B testing, how long should a developer wait before deciding to deactivate a non-performing campaign rather than keep losing money?
Burns: Best practice dictates waiting until results reach statistical significance. This most effectively reduces the risk of a false positive, and shortcuts only add risk to the business decisions you make, because the results are less reliable.
There is core statistical methodology we need to adhere to. Some solutions provide purpose-built test calculators that let you gauge, up front, how long a test needs to run to reach significant results. Analytics can also provide the context for which hypotheses are most likely to produce the best results.
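As a rough illustration of what such a calculator does (not Adobe Target's implementation), the sketch below estimates how many visitors per variant, and roughly how many days of traffic, a two-variant test needs before a given lift over a given baseline conversion rate can reach significance. The baseline, expected lift, and daily traffic are invented inputs, using the conventional 95% confidence and 80% power defaults.

```swift
import Foundation

// Back-of-the-envelope sample size for a two-proportion A/B test:
// n per variant ≈ (z_alpha/2 + z_beta)^2 * (p1(1-p1) + p2(1-p2)) / (p2 - p1)^2
func sampleSizePerVariant(baseline p1: Double,
                          expectedLift: Double,
                          zAlpha: Double = 1.96,    // 95% confidence, two-sided
                          zBeta: Double = 0.84) -> Int {   // 80% power
    let p2 = p1 * (1 + expectedLift)
    let variance = p1 * (1 - p1) + p2 * (1 - p2)
    let n = pow(zAlpha + zBeta, 2) * variance / pow(p2 - p1, 2)
    return Int(n.rounded(.up))
}

// Invented inputs: 4% baseline conversion, aiming to detect a 10% relative
// lift, with 2,000 visitors per variant per day.
let perVariant = sampleSizePerVariant(baseline: 0.04, expectedLift: 0.10)
let dailyVisitorsPerVariant = 2_000.0
let days = (Double(perVariant) / dailyVisitorsPerVariant).rounded(.up)
print("~\(perVariant) visitors per variant, roughly \(Int(days)) days of traffic")
```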
Machine-learning algorithms, however, can be a massive win in terms of qualifying a test hypothesis and its potential results. While a test hypothesis is focused on improving one or a handful of elements for particular audiences, feeding content options to an algorithm can help surface which audiences are reacting to particular experiences, through a full evaluation of all of the profile variables (i.e., the customer data available). This can help immensely in refining your strategy down to the most productive and lucrative use of your testing efforts.
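A toy version of that idea, and emphatically not Adobe Target's algorithm, is to scan every profile variable in the exposure data and rank the audience segments whose response to the experiences differs most; the Exposure record and the variable names below are made up for illustration.

```swift
// Illustrative only: rank (profile variable, value) segments by how differently
// they convert on variant B versus variant A.
struct Exposure {
    let variant: String               // "A" or "B"
    let converted: Bool
    let profile: [String: String]     // e.g. ["platform": "iOS", "loyaltyTier": "gold"]
}

func segmentLifts(_ exposures: [Exposure]) -> [(segment: String, lift: Double)] {
    var buckets: [String: (convA: Int, nA: Int, convB: Int, nB: Int)] = [:]
    for e in exposures {
        for (variable, value) in e.profile {
            let key = "\(variable)=\(value)"
            var b = buckets[key] ?? (convA: 0, nA: 0, convB: 0, nB: 0)
            if e.variant == "A" { b.nA += 1; if e.converted { b.convA += 1 } }
            else                { b.nB += 1; if e.converted { b.convB += 1 } }
            buckets[key] = b
        }
    }
    var results: [(segment: String, lift: Double)] = []
    for (segment, b) in buckets where b.nA > 0 && b.nB > 0 {
        let rateA = Double(b.convA) / Double(b.nA)
        let rateB = Double(b.convB) / Double(b.nB)
        results.append((segment: segment, lift: rateB - rateA))
    }
    // Largest absolute differences first: these are the segments worth a
    // closer look, or a follow-up targeted test.
    return results.sorted { abs($0.lift) > abs($1.lift) }
}
```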
ADM: Even the bad is good, so when one campaign outperforms another in an A/B test, what can a developer learn from the losing campaign?
Burns: I love this question. We’re often looking at the wins and not taking into account the value derived from removing a poor-performing variation, or the opportunity cost we avoid by not introducing an experience that is less than satisfactory to a key audience of our visitor population. That is why we work with our customers to evangelize the full ROI of their test results: not just the conversion lift, but also the insights and the time and cost savings achieved by identifying what doesn’t work.
Sometimes the insights within test results are clear; this can often come down to how the test was set up to isolate why a particular group preferred one variation over others. A follow-up multivariate test can also be helpful, as it provides the ability to isolate the relative contribution of each element within a particular experience to the conversion metrics being evaluated.
For instance, we can isolate whether the messaging, imagery or call-to-action is the most effective element to focus on, refine or target based on what success means for our business. This is also why evangelizing a comprehensive evaluation of what our testing practice yields is so necessary for the scalability, success and maturity of an optimization program. Think of regular newsletters, a wiki, or even a bulletin board where results are highlighted along with recommendations on next steps based on the insights gained from those results.
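To make the element-contribution idea concrete, here is a simplified sketch of reading main effects out of a 2x2 full-factorial test (headline crossed with hero image). The cell conversion rates are invented numbers, not real results, and a real analysis would also check significance and interactions.

```swift
// Simplified main-effect readout for a 2x2 multivariate test.
struct Cell {
    let headline: String     // "benefit" or "discount"
    let image: String        // "product" or "lifestyle"
    let conversionRate: Double
}

// Invented results for the four combinations.
let cells = [
    Cell(headline: "benefit",  image: "product",   conversionRate: 0.041),
    Cell(headline: "benefit",  image: "lifestyle", conversionRate: 0.046),
    Cell(headline: "discount", image: "product",   conversionRate: 0.050),
    Cell(headline: "discount", image: "lifestyle", conversionRate: 0.055),
]

// Main effect of an element = average conversion rate at one level minus the
// average at the other level, averaged over the other element's levels.
func mainEffect(of level: String, versus other: String,
                element: KeyPath<Cell, String>) -> Double {
    func avg(_ xs: [Cell]) -> Double {
        xs.map(\.conversionRate).reduce(0, +) / Double(xs.count)
    }
    let atLevel = avg(cells.filter { $0[keyPath: element] == level })
    let atOther = avg(cells.filter { $0[keyPath: element] == other })
    return atLevel - atOther
}

print("Headline effect:", mainEffect(of: "discount", versus: "benefit", element: \Cell.headline))
print("Image effect:", mainEffect(of: "lifestyle", versus: "product", element: \Cell.image))
```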
Read more: http://adobe-target.com/