Parse Now Offers Push Experiments to Allow App Developers to A/B Test Push Notification Marketing Campaigns
11/7/2014 10:10:57 AM
Parse SDKs,Push Messaging,A/B Tests,Push Campaign,Push-To-Local-Time,JSON Push
https://news-cdn.moonbeam.co/Push-Experiment-App-Developer-Magazine_kh8nhakr.jpg
App Developer Magazine
Marketing & Promotion



Friday, November 7, 2014

Stuart Parkerson

Parse is launching Push Experiments, a new feature to help developers evaluate push messaging ideas and create the most effective notifications for an app. With Parse Push Experiments, developers can conduct A/B tests for push campaigns, then use the app's real-time Parse Analytics data to help decide on the best variant to send.

For each push campaign sent through the Parse web push console, developers can allocate a subset of users to two test groups and send a different version of the message to each group. After the campaign is sent, developers can use the push console to see in real time which version resulted in more push opens, along with other metrics such as the statistical confidence interval. The best-performing version can then be sent to all of the remaining targeted users.

In addition to testing content, Parse Push Experiments lets developers A/B test the best time to send push notifications. A test can also be constrained to run only within a specific segment of users, such as those in a particular location.

A/B testing works with Parse’s other push features such as push-to-local-time, notification expiration, and JSON push content to specify advanced properties such as push sound. 
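As a sketch of what such a JSON push payload might look like, the fragment below combines an audience segment, a local-time delivery window, an expiration, and a custom sound. The channel name, timestamps, and values are hypothetical placeholders; the field names follow Parse's documented REST push format.

```json
{
  "where": { "channels": "giants-fans" },
  "push_time": "2014-11-10T12:00:00",
  "expiration_interval": 86400,
  "data": {
    "alert": "Game starts in one hour!",
    "sound": "cheering.caf",
    "badge": "Increment"
  }
}
```

Because `push_time` carries no timezone offset, Parse delivers the notification at noon in each recipient's local time; `expiration_interval` (in seconds) drops the push for devices that stay offline past that window.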

How Push Experiments works:

- For each push campaign sent through the Parse web push console, developers can allocate a subset of devices to be in the experiment's test audience, which Parse will automatically split into two equally-sized experiment groups. For each experiment group, developers can specify a different push message. The remaining devices are held back so that the winning message can be sent to them later. Parse randomly assigns devices to each group to minimize the chance that one test affects another test's results. Parse doesn't recommend running multiple A/B tests over the same devices on the same day.

- After sending the push, the push console shows in real time which version resulted in more push opens, along with other metrics such as the statistical confidence interval. It's normal for the number of recipients in each group to be slightly different because some devices that were originally allocated to an experiment group may have uninstalled the app. It's also possible for the random group assignment to be slightly uneven when the test audience size is small. Since Parse calculates the open rate separately for each group based on recipient count, this should not significantly impact experiment results.

- If positive results are achieved with one message, developers can send that message to the rest of the app's devices (i.e. the “Launch Group”). This step only applies to A/B tests where the message is varied.

- When setting up a push message experiment, Parse provides a recommendation for the minimum size of the test audience. These recommendations are generated through simulations based on an app's historical push open rates. For big push campaigns (e.g. 100k+ devices), this recommendation is usually a small subset of devices. For smaller campaigns - below 5k devices - this recommendation is usually all devices. Using all devices for the test audience will not leave any remaining devices for the launch group; however, developers can still gain valuable insight into what type of messaging works better and apply it to the next push campaign.

- After sending pushes to the experiment groups, Parse will also provide a statistical confidence interval once the experiment has collected enough data to have statistically significant results. This confidence interval is in absolute percentage points of push open rate. For example, if the open rates for groups A and B are 3% and 5%, then the difference is reported as 2 percentage points. The confidence interval is a measure of how much difference one would expect to see between the two groups if the experiment were repeated several times.

- Immediately after a push is sent, when only a small number of users have opened their push notifications, the open rate difference seen between groups A and B could be due to random chance, so it might not be reproducible if the same experiment is run again. After the experiment collects more data over time, there is increasing confidence that the observed difference is a true difference. As this happens, the confidence interval will become narrower, allowing Parse to more accurately estimate the true difference between groups A and B. So it is advised to wait until there is enough data to generate a statistical confidence interval before deciding which group's push is better.
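One standard way to compute such an interval is a two-proportion normal approximation; the announcement does not specify Parse's exact method, so the sketch below is illustrative only. It reproduces the 3% vs. 5% example above and shows how the interval narrows as more data arrives:

```python
import math

def open_rate_ci(opens_a, n_a, opens_b, n_b, z=1.96):
    """95% normal-approximation confidence interval (in percentage
    points) for the difference in push open rates between two groups.
    A standard two-proportion interval; not necessarily the exact
    method Parse uses internally."""
    p_a, p_b = opens_a / n_a, opens_b / n_b
    diff = p_b - p_a
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return 100 * diff, 100 * (diff - z * se), 100 * (diff + z * se)

# The 3% vs. 5% example: a 2-percentage-point observed difference.
diff, lo, hi = open_rate_ci(opens_a=150, n_a=5000, opens_b=250, n_b=5000)
print(f"difference: {diff:.1f} pts, 95% CI: [{lo:.1f}, {hi:.1f}]")

# With ten times more data, the same difference gets a narrower interval.
diff2, lo2, hi2 = open_rate_ci(1500, 50000, 2500, 50000)
print(f"difference: {diff2:.1f} pts, 95% CI: [{lo2:.1f}, {hi2:.1f}]")
```

When the interval excludes zero, the observed difference is unlikely to be random chance, which is the point at which it becomes safe to pick a winner.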

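Parse says its minimum-audience recommendation comes from simulations on an app's historical open rates. A common analytic alternative, shown here purely for illustration, is the standard two-proportion power calculation:

```python
import math

def min_group_size(p_base, p_variant):
    """Approximate devices needed per experiment group to detect a
    lift from p_base to p_variant in open rate, at two-sided 5%
    significance with 80% power. Illustrative only; Parse's actual
    recommendation is generated by simulation on historical data."""
    z_alpha = 1.96  # two-sided 5% significance
    z_beta = 0.84   # 80% power
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    n = (z_alpha + z_beta) ** 2 * variance / (p_base - p_variant) ** 2
    return math.ceil(n)

# Detecting a lift from a 3% to a 5% open rate:
print(min_group_size(0.03, 0.05), "devices per group")
```

Roughly 1,500 devices per group (about 3,000 total) fall out of this calculation for that scenario, which lines up with the observation above that campaigns below 5k devices usually need most or all of their devices in the test audience.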
Push A/B testing works with existing Parse SDKs - iOS v1.2.13+, Android v1.4.0+, and .NET v1.2.7+. To try it out, use the Parse web push console and enable the “Use A/B Testing” option on the push composer page.



Read more: http://blog.parse.com/
