To measure the impact of bidding relative to a waterfall, run an A/B test that splits your audience into two segments. This ensures the changes you make are measured accurately, rather than relying on pre-post tests, where CPMs can fluctuate due to differences in user counts, app updates, and other factors.
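If you have to implement the split yourself, the key requirement is that assignment is deterministic so a user stays in the same segment across sessions. Below is a minimal sketch of a hash-based 50/50 split by user ID; the group names, the split ratio, and the client-side assignment approach are illustrative assumptions, not part of any SDK.

```kotlin
// Illustrative group names; adjust to your own experiment setup.
enum class TestGroup { WATERFALL_CONTROL, BIDDING_TEST }

fun assignGroup(userId: String): TestGroup {
    // Hash the user ID into a stable bucket (0..99) so the same user
    // always lands in the same group across sessions.
    val bucket = Math.floorMod(userId.hashCode(), 100)
    return if (bucket < 50) TestGroup.WATERFALL_CONTROL else TestGroup.BIDDING_TEST
}

fun main() {
    println(assignGroup("user-12345")) // same output every run for this ID
}
```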
Measuring success by comparing performance before and after implementing bidding is unreliable because other factors are hard to isolate; an A/B test shows bidding's performance gains much more clearly. The value of bidding shouldn't be measured on performance alone, as moving to bidding also brings operational efficiencies.
If possible, run A/B tests through your mediation or analytics platform, as these platforms are designed to control for confounding factors and give accurate results. Refer to your platform's documentation to set up your tests.
If your mediation or analytics platform doesn't support A/B tests, follow the steps and best practices below to help measure the difference between bidding and waterfall.
To attribute revenue uplift to bidding, everything in the experiment needs to be equal except the bidding vs. waterfall setup.
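Once revenue and active users are tracked separately per group, uplift reduces to the relative difference in ARPDAU (ad revenue per daily active user) between the two groups. A minimal sketch follows, assuming you can export those two figures per group from your reporting; the names and the example numbers are illustrative.

```kotlin
// Hypothetical per-group metrics; in practice these come from your
// mediation or analytics reporting.
data class GroupMetrics(val adRevenueUsd: Double, val dailyActiveUsers: Int) {
    val arpdau: Double get() = adRevenueUsd / dailyActiveUsers
}

// Relative revenue uplift of the bidding group over the waterfall group.
fun revenueUplift(waterfall: GroupMetrics, bidding: GroupMetrics): Double =
    (bidding.arpdau - waterfall.arpdau) / waterfall.arpdau

fun main() {
    val waterfall = GroupMetrics(adRevenueUsd = 1_000.0, dailyActiveUsers = 50_000)
    val bidding = GroupMetrics(adRevenueUsd = 1_080.0, dailyActiveUsers = 50_000)
    println("Uplift: %.1f%%".format(revenueUplift(waterfall, bidding) * 100)) // 8.0%
}
```

Comparing ARPDAU rather than raw revenue accounts for any small imbalance in group sizes.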
An A/B test is set up with two groups:
For the control group, set up the waterfall for your placements as you normally would (see the wiring sketch below).
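To keep the two setups separate in the app, each session can be routed to the configuration for the user's assigned group. The sketch below reuses assignGroup from the earlier example; the two init functions are placeholders, not real SDK APIs, standing in for whatever configuration calls your mediation platform provides.

```kotlin
// Placeholder stubs; replace with your mediation SDK's actual configuration calls.
fun initWaterfallMediation() { /* configure the existing waterfall placements */ }
fun initBiddingMediation() { /* configure the same placements with bidding enabled */ }

// Route each session to the setup matching the user's assigned group.
fun initAds(userId: String) {
    when (assignGroup(userId)) {
        TestGroup.WATERFALL_CONTROL -> initWaterfallMediation()
        TestGroup.BIDDING_TEST -> initBiddingMediation()
    }
}
```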