With digital tracking breaking, consumer brands that relied on multi-touch attribution are now looking for alternatives to measure marketing effectiveness. The two methods that comply with privacy regulations and can get you closer to true incrementality are marketing mix modeling (MMM) and conversion lift studies (CLS).
Quick context on both:
Marketing mix modeling is a statistical modeling technique that marketers use to determine which channels in their marketing mix deserve credit for sales, in order to reallocate budget to the highest-performing areas.
Conversion lift studies are tests run by marketers to validate what performance would look like if you switched a channel off, or scaled spend up or down.
MMM and CLS don’t operate in silos, and we highly recommend using them together to get a clearer picture. They are very complementary when done correctly. So, let’s talk about the three ways lift tests fit in with MMM:
#1 – Use lift tests to calibrate your MMM
Lift tests are a great way to calibrate your MMM – they give the model more information about the ground truth, which helps it settle on the set of plausible parameters that are consistent with the results you got from the test.
This can be tricky because the vast majority of MMMs assume that marketing performance doesn’t change over time. If two lift tests on the same channel return inconsistent results – which is very common, because channel performance does change over time – it’s not clear which of those tests you should use when calibrating your MMM.
For full transparency, we don’t have a great solution for that, because those MMM platforms are effectively making bad assumptions about how marketing performance actually works.
What we do at Recast is different: because we have a Bayesian time-series model, we estimate the incrementality of every marketing channel for every day. That lines up really well with the way lift tests work, because we can incorporate the results directly into the Bayesian statistical model by putting priors on the performance of that channel, but only for the period when the lift test was run.
We’re treating the evidence correctly by considering it a snapshot in time and not applying it to how that channel has performed over all of history.
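To make that idea concrete, here is a minimal Python sketch – not Recast’s actual model – of a conjugate normal update that blends a lift-test readout with the MMM’s daily ROI estimate only on the days the test ran, weighted by how precise each source is. All function names, numbers, and the test window are illustrative assumptions.

```python
import numpy as np

def apply_lift_test_prior(roi_mean, roi_sd, test_days, test_roi, test_sd):
    """Blend the MMM's daily ROI estimate with a lift-test estimate,
    but only on the days covered by the test (conjugate normal update)."""
    roi_mean, roi_sd = roi_mean.copy(), roi_sd.copy()
    for t in test_days:
        prior_prec = 1.0 / roi_sd[t] ** 2   # precision of the MMM's estimate
        test_prec = 1.0 / test_sd ** 2      # precision of the lift test
        post_prec = prior_prec + test_prec
        roi_mean[t] = (roi_mean[t] * prior_prec + test_roi * test_prec) / post_prec
        roi_sd[t] = np.sqrt(1.0 / post_prec)
    return roi_mean, roi_sd

# Illustrative inputs: a year of uncertain daily ROI for one channel from the
# MMM, plus a geo-lift test that ran over days 200-227 and measured ROI ~1.4.
mmm_roi_mean = np.full(365, 2.0)
mmm_roi_sd = np.full(365, 0.8)
calibrated_mean, calibrated_sd = apply_lift_test_prior(
    mmm_roi_mean, mmm_roi_sd, range(200, 228), test_roi=1.4, test_sd=0.3
)
print(calibrated_mean[210], calibrated_sd[210])  # pulled toward 1.4, tighter
print(calibrated_mean[50], calibrated_sd[50])    # untouched outside the test
```

The key point is in the loop bounds: the test evidence only tightens the estimate during the window when it was collected, which is exactly the “snapshot in time” treatment described above.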
#2 – Use lift tests to proactively test your MMM
Another good way to leverage lift tests is to use them to proactively test the MMM. You should be able to prove outside your MMM that the MMM is right – that’s something we have built into Recast to align our incentives with our clients’. After you receive the results from your MMM, run a lift test to see if they line up. If they do, you can be more confident that your MMM has picked up the true underlying causal signal.
If the MMM says Meta is our best channel and radio is our worst, we should be able to test that the following month. A lift test will show whether the model continues to predict accurately and whether your overall marketing efficiency actually improves. Recast’s MMM updates weekly, which gives you the opportunity to be more dynamic and run lift tests more often to audit the model.
A quick note here: when an MMM and a well-run experiment disagree – the well-run part being very important – it’s normally the MMM that’s wrong. The model makes far more assumptions than the experiment, so when you have experimental data that conflicts with your MMM, go take a look at why the model might be wrong.
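One simple way to operationalize this audit – sketched below with made-up numbers, not a Recast feature – is to check whether the lift test’s confidence interval overlaps the MMM’s credible interval for the same channel over the same window.

```python
import numpy as np

def mmm_agrees_with_test(mmm_draws, test_estimate, test_se, level=0.90):
    """True if the lift test's ~90% confidence interval overlaps the MMM's
    credible interval for incremental conversions in the test window."""
    lo_q, hi_q = (1 - level) / 2, 1 - (1 - level) / 2
    mmm_lo, mmm_hi = np.quantile(mmm_draws, [lo_q, hi_q])
    z = 1.645  # normal quantile for a ~90% interval
    test_lo, test_hi = test_estimate - z * test_se, test_estimate + z * test_se
    return max(mmm_lo, test_lo) <= min(mmm_hi, test_hi)

# Illustrative example: posterior draws of the incremental conversions the MMM
# attributes to a channel during the planned test window, versus the readout.
rng = np.random.default_rng(7)
mmm_posterior_draws = rng.normal(1200, 250, size=4000)

if mmm_agrees_with_test(mmm_posterior_draws, test_estimate=950, test_se=180):
    print("Lift test is consistent with the MMM -- more trust in the model.")
else:
    print("Disagreement: investigate the MMM's assumptions first.")
```

Deciding the agreement criterion before the test runs keeps the check honest; otherwise it’s easy to rationalize any result after the fact.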
#3 – Use MMM to get additional insight into incrementality
Another way to think about the relationship between MMM and CLS is to think of MMM as providing additional insight into incrementality for all of the times when you can’t be running lift tests.
Lift tests are a great way to get to incrementality, but they have a couple of drawbacks:
First, they are expensive to run, so brands do them sporadically. Most companies test a few of their channels at discrete points in the year – often only one or two tests a year. Second, they are measured at discrete points in time, so they tell us about the performance of the channel at that moment, but not necessarily about the performance of the channel in general.
Let’s say you ran a lift test six months ago. You got information from it, but the world may have changed in meaningful ways over those six months. MMM can pull all of those different pieces of evidence together into one view based on all of the current data that you have.
Your marketing mix modeling framework should be able to take the evidence from lift tests into account, but only as a snapshot of that point in time. If your marketing mix model estimates a single effectiveness for each channel across all time, you’ll have a problem incorporating those results.
That’s actually a really tricky thing to get right. We have done it at Recast, but it’s something to be careful of when you’re selecting an MMM vendor because it’s not easily solvable.
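As a toy illustration of how discrete snapshots turn into a continuous view, the sketch below treats daily incrementality as a slowly drifting quantity and each lift test as a noisy observation of it. A real MMM also uses spend, adstock, and saturation; this only captures the evidence-stitching intuition, and every number in it is assumed.

```python
import numpy as np

def filter_incrementality(n_days, tests, drift_sd=0.02, init_mean=1.0, init_sd=1.0):
    """Simple one-dimensional filter: `tests` maps day -> (measured_roi, sd)."""
    mean, var = init_mean, init_sd ** 2
    means, sds = [], []
    for day in range(n_days):
        var += drift_sd ** 2                  # uncertainty grows between tests
        if day in tests:
            obs, obs_sd = tests[day]
            gain = var / (var + obs_sd ** 2)  # how much to trust the new test
            mean = mean + gain * (obs - mean)
            var = (1 - gain) * var
        means.append(mean)
        sds.append(np.sqrt(var))
    return np.array(means), np.array(sds)

# Two lift tests on the same channel, six months apart, with different readouts.
tests = {30: (1.8, 0.3), 210: (1.1, 0.25)}
means, sds = filter_incrementality(365, tests)
print(round(means[31], 2), round(sds[31], 2))    # sharp right after the first test
print(round(means[180], 2), round(sds[180], 2))  # same mean, but wider uncertainty
print(round(means[211], 2), round(sds[211], 2))  # pulled toward the newer result
```

Notice that the two tests disagree, and the filter resolves that by leaning on the more recent one as time passes – the same behavior you want from an MMM that treats lift tests as point-in-time evidence.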
How to combine lift tests with Recast’s model
Our models don’t necessarily propose lift tests directly but, for example, our clients can see which channels the MMM says are performing very well while digital tracking says they are not. Instead of arguing over methodology, we can tell them to run a lift test and verify which method is right. They often see that the lift test lines up with the Recast results, and that gives them confidence that Recast is measuring incrementality.
Another situation where this can happen is if Recast’s results are very uncertain – maybe our client has a few small channels or channels that are highly correlated with other channels. We show the uncertainty and, if we want to get more precision, we propose running a lift test.
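A rough sketch of how that kind of prioritization could work is below: flag the channels whose posterior ROI interval is wide relative to its mean as the best candidates for the next lift test. The channel names, draws, and cutoff are hypothetical, not output from Recast.

```python
import numpy as np

def lift_test_candidates(posterior_draws_by_channel, rel_width_cutoff=0.5):
    """Flag channels whose 90% credible interval is wide relative to the mean."""
    candidates = []
    for channel, draws in posterior_draws_by_channel.items():
        lo, hi = np.quantile(draws, [0.05, 0.95])
        rel_width = (hi - lo) / max(abs(np.mean(draws)), 1e-9)
        if rel_width > rel_width_cutoff:
            candidates.append((channel, round(rel_width, 2)))
    return sorted(candidates, key=lambda x: -x[1])

rng = np.random.default_rng(0)
draws = {
    "meta": rng.normal(2.0, 0.15, 4000),     # well identified by the model
    "radio": rng.normal(1.2, 0.6, 4000),     # small channel, wide interval
    "podcasts": rng.normal(0.9, 0.5, 4000),  # correlated with other channels
}
print(lift_test_candidates(draws))  # the small / correlated channels are flagged
```

Spending the next test budget on the channels the model is least sure about is usually where a lift test buys the most additional precision.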
At the end of the day, these marketing measurement methods are very complementary, and when done right, you can use the MMM to pull together the evidence that you have from those lift tests in order to get a holistic, continuous view of incrementality for your business.