
Case Study: Multivariate testing done with Relevant Yield

  • July 7 2023
  • Thuy Ho

Relevant Digital has provided programmatic management and sales services to over 150 Finnish publishers since 2012, long before we became an AdTech company. Along the way, we have relentlessly identified and combated challenges in programmatic advertising, from technological complexity to operational inefficiencies and lack of transparency.

Achieving advertising revenue growth requires constant innovation and experimentation; it is never just about driving more demand or reaching for quick fixes. Every day, our Relevant Programmatic team looks for creative ways to increase revenue for our publisher clients by continuously testing its hypotheses.


Our testing philosophy

A successful test starts with asking the right question. Whatever you suspect, it may be worth putting to the test. Ultimately, almost everything has a direct impact on revenue, so it takes experience and knowledge of your own stack to set the right priorities.

It's important to test one thing at a time so you know whether that one change is the cause of what you observe. For example, run different floor prices on a small portion of traffic while keeping everything else constant, such as the set of SSPs, whether the test covers a single placement or the entire website.

Another point to keep in mind is how to interpret differences between variants. An important aspect of testing is determining whether the observed differences are statistically significant or simply due to chance. It is also important to consider the practical significance of the observed differences: small differences in performance may not have a meaningful impact on key metrics or user experience, making them less valuable for decision-making or optimisation efforts.
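
As an illustration of the "significant or due to chance" question, the sketch below runs a two-proportion z-test on the fill rates of a control and a variant. The counts are made up and the 1.96 cut-off corresponds to a 5% significance level; this only shows the statistics involved, not how Relevant Yield evaluates results.

```typescript
// Illustrative two-proportion z-test: is the variant's fill rate genuinely
// different from the control's, or could the gap be explained by chance?
interface VariantStats {
  filled: number;   // ad requests that returned an ad
  requests: number; // total ad requests routed to this variant
}

function twoProportionZ(control: VariantStats, variant: VariantStats): number {
  const p1 = control.filled / control.requests;
  const p2 = variant.filled / variant.requests;
  const pooled =
    (control.filled + variant.filled) / (control.requests + variant.requests);
  const se = Math.sqrt(
    pooled * (1 - pooled) * (1 / control.requests + 1 / variant.requests)
  );
  return (p2 - p1) / se;
}

// Made-up counts for illustration only.
const z = twoProportionZ(
  { filled: 41200, requests: 50000 },
  { filled: 41650, requests: 50000 }
);
console.log(`z = ${z.toFixed(2)}, significant at the 5% level: ${Math.abs(z) > 1.96}`);
```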

Conducting tests without the right tool requires a great deal of time, effort and resources. Focusing on small differences that may not yield significant improvements can divert resources from testing more meaningful variants or optimising other aspects of the user experience.


Our realisation

Along the way, we have found that the traditional testing process can be cumbersome and time-consuming, even with Google Ad Manager (GAM). This unfortunately discourages professionals from testing, because the results may or may not justify the human and operational costs involved.

This is why we have taken the initiative to develop an advanced Multivariate Testing feature in Relevant Yield, our one-stop shop for sell-side programmatic operations. A product with purpose, Relevant Yield offers practical features that help sellers solve real problems.

How do we overcome the limitations of GAM in A/B testing with our home-grown Relevant Yield tool? Let us dive into a little monetisation project from our Relevant Programmatic team to learn more about not only Relevant Yield's testing capabilities, but also our testing philosophy. If you agree with us, let's have a chat!


Our monetisation project

Publisher’s Profile & Test Scope:

  • Desktop site A (6 million visits/month). Tested on some placements.
  • Mobile site B (1 million visits/month). Tested on some placements.

Test Period: January - June 2023.


The tests we conducted and how we conducted them:

  1. Floor prices: Let's say your floor price is currently 0.4€; it would be interesting to test how a floor of 0.1€, 1€ or 1.5€ would affect your results.

    This type of test is challenging to conduct in Google Ad Manager, as the platform only allows one rule per experiment. This means that, to cover all the price points, specialists have to set it up like this:

    Experiment 1: Initial price 0.4€ and variation floor of 0.1€ for 10% of the traffic.
    Experiment 2: Initial price 0.4€ and variation floor of 1€ for 10% of the traffic.
    Experiment 3: Initial price 0.4€ and variation floor of 1.5€ for 10% of the traffic.

    Each experiment runs for a period of X, so only after 3X can the results be evaluated and compared to determine which price point delivers the highest viewability and fill rate.

    This works, but it costs A LOT of time, not only to set up but also to monitor. (A sketch of how such a traffic split looks in code follows this list.)

    [Image: Floor price testing. Example of the error GAM shows when you try to set up simultaneous experiments.]


  2. Client-side vs. server-side: Some SSPs perform better on the client side, others do not. We conducted a test on this topic in April; check it out here. Usually, developers are needed for this test as it involves coding. Spoiler: we didn't need one 😊.

    What if I told you it takes 2 minutes to set up? Everything from the variants to the dimensions for monitoring.


    Wait, you are not running server-side because it requires too many resources? I recommend you check out the guide How to get started with Prebid Server.

    To configure an SSP server-side, developers normally need to dig into the site's scripts and change the header bidding code (see the sketch after this list for the kind of change involved). This task is time-consuming and risky, because human error can creep in at any point and affect revenue. Not to mention that developers are extremely busy, so you may have to wait days or weeks to get the task done. Of course, if you have a team of developers at your disposal, time won't be an issue, but costs may be.

    Compared to the floor price test, this one consumes more human resources and carries more risk. Wouldn't you want to avoid that, so that your teammates' time and intelligence can be invested in revenue generation instead?


  3. Prebid timeout: With this experiment, we wanted to find the sweet spot between maximising bid responses and minimising page load times in header bidding.

    The average timeout was 700 ms, so we tested it against 800 ms and 900 ms. Each variant received a third of the traffic, and all three ran simultaneously (see the last sketch after this list).

    By experimenting with different timeout durations, publishers can optimise their bidding process and improve overall performance. Longer timeouts allow more time for bidders to respond, increasing the likelihood of receiving more bids and increasing competition. On the other hand, this can also lead to slower page load times and potentially frustrate users. 

    By testing shorter timeout durations, you can determine if reducing latency improves the user experience without sacrificing revenue. It's essential to consider network conditions and device capabilities when testing to ensure optimal performance in different scenarios. 

    By monitoring performance and troubleshooting, publishers can address any issues that arise and fine-tune their setup for better results. Conveniently, for each variant tested in Relevant Yield, a measuring dimension is created automatically, so the experiment is monitored correctly with minimal effort on your part.
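
For the floor price test in point 1, here is a minimal sketch of the idea behind the traffic split: each page view is assigned to one of the floor variants from the example and tagged so results can be grouped per variant. The applyFloor and reportDimension helpers are hypothetical placeholders, not Relevant Yield or GAM APIs.

```typescript
// Minimal sketch: split traffic across floor-price variants (values from the
// example above) and tag each request so results can be reported per variant.
// applyFloor and reportDimension are hypothetical placeholders.
const floorVariants = [
  { name: "control", floor: 0.4, share: 0.7 },
  { name: "low",     floor: 0.1, share: 0.1 },
  { name: "mid",     floor: 1.0, share: 0.1 },
  { name: "high",    floor: 1.5, share: 0.1 },
];

function pickVariant<T extends { share: number }>(variants: T[]): T {
  let r = Math.random();
  for (const v of variants) {
    if (r < v.share) return v;
    r -= v.share;
  }
  return variants[0]; // guard against floating-point rounding
}

function applyFloor(floorEur: number): void {
  // Placeholder: in practice the floor is passed to the ad server or the
  // header bidding setup (for example via a price floors configuration).
  console.log(`Using floor ${floorEur} EUR`);
}

function reportDimension(key: string, value: string): void {
  // Placeholder: attach the variant label to reporting, e.g. as a key-value.
  console.log(`${key}=${value}`);
}

const chosen = pickVariant(floorVariants);
applyFloor(chosen.floor);
reportDimension("floor_test_variant", chosen.name);
```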
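
For the client-side vs. server-side test in point 2, the snippet below gives a rough idea of the manual change a developer would otherwise make: routing a bidder through Prebid Server via Prebid.js's s2sConfig option. The account ID, bidder name and endpoint are placeholders, and the exact fields vary by Prebid.js version and Prebid Server host, so treat this purely as a sketch.

```typescript
// Rough sketch of moving one SSP to the server side with Prebid.js's s2sConfig.
// All values are placeholders; consult your Prebid.js version's documentation
// and your Prebid Server host for the exact fields.
declare const pbjs: { que: Array<() => void>; setConfig(config: object): void };

pbjs.que.push(() => {
  pbjs.setConfig({
    s2sConfig: {
      accountId: "YOUR_ACCOUNT_ID",   // placeholder
      bidders: ["exampleSSP"],        // bidders routed through Prebid Server
      enabled: true,
      timeout: 500,                   // server-side auction timeout in ms
      endpoint: "https://prebid-server.example.com/openrtb2/auction", // placeholder
    },
  });
});
```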
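
And for the Prebid timeout test in point 3, a similar sketch: each page view is assigned to one of the 700, 800 or 900 ms variants, and the value is applied through Prebid.js's bidderTimeout setting. Again, this only illustrates the mechanics; in Relevant Yield the split and the reporting dimension are handled for you.

```typescript
// Minimal sketch: pick one of three timeout variants per page view and apply
// it via Prebid.js's bidderTimeout setting; keep the label for reporting.
declare const pbjs: { que: Array<() => void>; setConfig(config: object): void };

const timeoutVariants = [700, 800, 900]; // milliseconds
const bidderTimeout = timeoutVariants[Math.floor(Math.random() * timeoutVariants.length)];

pbjs.que.push(() => {
  pbjs.setConfig({ bidderTimeout });
});

console.log(`Timeout variant: ${bidderTimeout} ms`); // e.g. used as a reporting dimension
```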

Our conclusion

Don't let the resource-intensive nature of testing hold you back. At Relevant Digital, we believe that every test, regardless of the outcome, brings valuable insights. We understand that measuring success can be challenging, but even identifying what doesn't work is a significant achievement in itself.

Especially with Relevant Yield by your side, you no longer have to worry about time-consuming and complicated testing procedures. We're here to make it a thrilling adventure, where each hypothesis tested brings you one step closer to advertising greatness.

Get ready to test, optimise and conquer the world of programmatic advertising with Relevant Yield! Contact us today and let the fun begin!
