Best Practices: Monetization A/B Testing
A/B testing is a crucial part of any monetization strategy: you test your assumptions against a control to confirm that a new approach maximizes revenue and grows your business. With Unity LevelPlay’s A/B testing tool, you can experiment with different monetization variables to understand how your users engage with ads, and choose a winning strategy.
A/B testing requires smart planning and defined goals to get the most conclusive results. The following best practices will help you conduct clean tests using the Unity LevelPlay A/B testing tool, so you can make the best decisions based on accurate data.
Best practices for setting up an A/B test
- Identify your goal: Take some time to think about the KPIs you want to improve, and how significant you want the improvement to be. If multiple KPIs can be affected, decide which carries the most weight in case they’re impacted differently. This will ensure you examine your results effectively so you make an informed decision on the best strategy.
Make sure you have the answers to these questions before you start a test:
- What is the purpose of the test?
- What are the KPIs I want to test?
- What would be considered a success?
- Test one variable at a time: As you optimize your ad strategy, you’ll probably find that there are multiple variables you want to test. The only way to effectively evaluate the significance of any change is to isolate one variable and measure its impact. This means, for example, that you can’t set a new network live while also changing the reward amount of a video placement.
- Changes should only be made in group B: Once you’ve identified the variable you want to test, leave your control group (group A) unaltered. The settings in group A already exist as your current ad implementation. Changes should only be made in group B, to challenge the current implementation and compare the results.
- Use instance rates in group B for waterfall tests: When setting up the test, use instance rates for the first 3 days. This will allow the instances in group B to “learn” and stabilize their eCPMs. In combination with the “Sort by CPM” function, this will automatically sort the waterfall by descending eCPM from the start of the test (see the sketch after this list). Important: remove the rates at the end of the 3rd day.
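To make the eCPM sort concrete, here is a minimal Python sketch of how temporary rates determine group B’s starting waterfall order. The instance names and rate values are hypothetical, and in practice the sorting happens automatically in the LevelPlay dashboard when “Sort by CPM” is enabled; the snippet only illustrates the ordering logic.

```python
# Hypothetical instance names and rates, for illustration only; real values
# come from your own network setup in the LevelPlay dashboard.
group_b_instances = [
    {"instance": "network_x_high", "rate": 12.0},   # manual rate for days 1-3
    {"instance": "network_y_default", "rate": 7.5},
    {"instance": "network_x_low", "rate": 4.0},
]

# With "Sort by CPM" enabled, the waterfall is ordered by descending eCPM;
# sorting the manual rates the same way shows the order group B starts with.
waterfall = sorted(group_b_instances, key=lambda i: i["rate"], reverse=True)

for position, instance in enumerate(waterfall, start=1):
    print(f"{position}. {instance['instance']} @ ${instance['rate']:.2f}")
```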
Best practices while an A/B test is running
- Monitor the test data: Keep track of your data to make sure that nothing too drastic occurs as a result of the change you made (see the sketch after this list). However, it’s important not to terminate the test during the first 3 days, even if group B performance is lower than group A’s, because the data might be inaccurate at the beginning of the test:
- Ignore data from the test initiation date, since it includes activity reported prior to the test.
- Ignore data from the first 2 days for waterfall tests, since group B includes new instance IDs that need time to learn.
- Leave group B alone: Don’t make changes in group B while the test is running. Every test should address only one assumption and one variable for its entire duration. If you want to test a different assumption, or even make small adjustments to the current test, terminate the test and start a new one. This is crucial for keeping the data organized and clean for analysis.
- Give it enough time: Each test needs sufficient runtime to produce useful data. A good rule of thumb is to analyze the test results after one week. When testing apps with a low number of users or impressions, wait longer than a week to gather enough data.
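As a rough illustration of the monitoring step above, the following Python sketch drops the initiation date and then prints group A against group B day by day. The field names, dates, and revenue figures are all made up for illustration; in practice this data would come from your LevelPlay reports.

```python
from datetime import date

# Hypothetical daily report rows; in practice these would come from a
# LevelPlay reporting export. All field names, dates, and numbers are made up.
daily_rows = [
    {"date": date(2024, 6, 1), "group": "A", "revenue": 512.0},
    {"date": date(2024, 6, 1), "group": "B", "revenue": 448.0},
    {"date": date(2024, 6, 2), "group": "A", "revenue": 505.0},
    {"date": date(2024, 6, 2), "group": "B", "revenue": 470.0},
    {"date": date(2024, 6, 3), "group": "A", "revenue": 498.0},
    {"date": date(2024, 6, 3), "group": "B", "revenue": 503.0},
]

test_start = date(2024, 6, 1)

# Skip the initiation date: it mixes in activity reported before the test began.
# (For waterfall tests, you would skip the first 2 days instead.)
monitorable = [r for r in daily_rows if r["date"] > test_start]

# Print group A vs group B side by side per day, just to spot drastic swings;
# don't terminate the test in the first 3 days based on these numbers alone.
for day in sorted({r["date"] for r in monitorable}):
    revenue = {r["group"]: r["revenue"] for r in monitorable if r["date"] == day}
    print(f"{day}: group A ${revenue['A']:.2f} vs group B ${revenue['B']:.2f}")
```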
Best practices for analyzing the results
- Look at the right data: Exclude the first day of the test from your analysis. When testing the waterfall setup, exclude the first 2 days.
- Ignore daily data: When testing an assumption, you are aiming for long-term improvement rather than daily changes. Since daily performance can be volatile, analyze the data for the entire test as a whole without breaking it into days (see the sketch after this list).
- Close the loop: Compare the data and the performance of the two groups in accordance with the KPIs you set before initiating the test. Make sure to answer the following questions:
- Did the test reach the goals you set?
- What is your winning group?
- What is the next assumption that should be tested?
- Dig into the data: Try to understand the root cause of why group B performed better or worse than group A. The insights will guide you in setting up the next test.
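As a sketch of this kind of whole-test comparison, the Python below aggregates each group over the full (cleaned) test window and computes an example KPI. ARPDAU is used here only as an illustrative assumption; substitute the KPIs and data you defined before starting the test.

```python
# Hypothetical whole-test totals per group, with the first day(s) already
# excluded. ARPDAU is only an example KPI; the figures are made up.
totals = {
    "A": {"revenue": 3410.0, "dau": 84000},
    "B": {"revenue": 3655.0, "dau": 83500},
}

# ARPDAU = ad revenue / daily active users, computed over the whole test
# window rather than per day, since daily performance is volatile.
arpdau = {g: v["revenue"] / v["dau"] for g, v in totals.items()}
uplift_pct = (arpdau["B"] / arpdau["A"] - 1) * 100

print(f"Group A ARPDAU: ${arpdau['A']:.4f}")
print(f"Group B ARPDAU: ${arpdau['B']:.4f}")
print(f"Group B uplift vs. control: {uplift_pct:+.1f}%")
```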
Make A/B testing a routine to keep improving your app’s performance. Small, incremental changes can quickly add up to drive significant revenue growth, and there’s always room for more optimization. Use the insights from previous tests to understand how you can improve further in your next tests.