Now that digital tracking is breaking down and brands are adopting alternative measurement methods, we will see much more “triangulation”: combining multiple methods to get closer to the ground truth.
One great example of this that we’re seeing gain a lot of traction is the integration of geo-testing and marketing mix modeling (MMM).
Geo-lift tests are pretty straightforward: different geographic regions (in the US, marketers often do this at the DMA or state level) are exposed to varying levels of advertising spend to measure the impact on consumer behavior or sales in those areas.
By comparing the performance of these regions against control groups that do not receive the same marketing efforts, you can isolate the incremental effect of your advertising.
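To make that concrete, here’s a minimal sketch of the core comparison in Python, using a toy dataset of weekly sales per region. The column names, the numbers, and the simple pre-period scaling are all illustrative; real geo-lift tooling builds the counterfactual with more sophisticated methods like synthetic controls.

```python
import pandas as pd

# Toy data: sales by region; 'treated' marks regions that got the extra spend,
# 'period' is "pre" or "post" relative to the test launch. (Illustrative schema.)
df = pd.DataFrame({
    "region":  ["A", "A", "B", "B", "C", "C", "D", "D"],
    "treated": [True, True, True, True, False, False, False, False],
    "period":  ["pre", "post"] * 4,
    "sales":   [100, 130, 90, 115, 80, 88, 110, 121],
})

# Aggregate sales for the treatment and control groups in each period.
totals = df.groupby(["treated", "period"])["sales"].sum().unstack("period")

# Use the pre-period ratio between groups to scale the control group's
# post-period sales into a counterfactual for the treated regions.
scale = totals.loc[True, "pre"] / totals.loc[False, "pre"]
counterfactual = totals.loc[False, "post"] * scale

# Incremental lift = what actually happened minus what we'd expect without the spend.
lift = totals.loc[True, "post"] - counterfactual
print(f"Estimated incremental sales: {lift:.1f}")
```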
Now, what role can they play when combined with marketing mix modeling?
Marketing mix modeling and geo-testing:
Probably the most important question you can ask whoever builds your MMM is: how do we validate an MMM’s results?
There are good and bad answers to this. If they talk about in-sample R-squared, MAPE, or statistical significance and p-values… that’s a big red flag. These metrics don’t actually validate the model.
What you should look for is an answer that validates the vendor’s results outside of the modeling framework. To validate a model like this, you need some amount of external information to know whether you can actually use it to make decisions going forward.
The reason is that MMMs are so flexible that they tend to overfit: they will almost always fit really well in-sample, within the modeling framework itself. What you actually care about is whether the results match up to the real world outside of the model.
And that’s where geo-lift tests come in: they help you proactively test your MMM.
Again, the idea is to prove outside your MMM that the MMM is right. After you receive the results from your MMM, run a lift test to see if they line up. If they do, you can be more confident that your MMM has picked up the true underlying causal signal.
One of the strengths of marketing mix modeling is that it can tie all of these data points together while accounting for the different levels of certainty you get from each test. Additionally, MMM can incorporate observational data to estimate how present-day performance might be drifting from the point-in-time snapshots your experiments provide.
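To illustrate the “different levels of certainty” point, here’s a simple sketch using inverse-variance weighting, where the more certain estimate pulls the combined answer toward itself. This is an illustration of the principle only, not Recast’s actual methodology, and all the numbers are hypothetical.

```python
import numpy as np

def combine(est_a, se_a, est_b, se_b):
    """Combine two noisy estimates of the same quantity by
    inverse-variance weighting: the more certain estimate gets more weight."""
    w_a, w_b = 1 / se_a**2, 1 / se_b**2
    combined = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    combined_se = np.sqrt(1 / (w_a + w_b))
    return combined, combined_se

# Hypothetical numbers: the MMM says brand search ROI is 2.5 (wide uncertainty),
# a geo-lift test says 1.8 (tighter uncertainty). The blend lands closer to the test.
roi, se = combine(2.5, 0.8, 1.8, 0.3)
print(f"Combined ROI estimate: {roi:.2f} +/- {se:.2f}")
```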
Theory aside, let’s walk through a practical example of how this could work:
MMM and geo-testing – a practical application:
Let’s say you want to test whether your paid brand search is incremental (an experiment we often recommend brands run), but you don’t want to do a go-dark experiment and turn off all your spend, in case it is incremental and the opportunity cost becomes too high.
If you don’t want to turn off your brand search spend completely, you can turn it off in just a few geographic regions (states, DMAs, etc.) and measure how much revenue drops (or doesn’t) in those regions compared to the rest.
The correct way to do this analysis is known as “difference-in-differences” and it’s really not that hard to set up. Google allows you to carve out certain states from your campaigns, and you then compare the before-and-after change in the held-out regions against the change in the regions where spend stayed on to get the true incrementality of your brand search dollars.
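Here’s a minimal sketch of that difference-in-differences analysis in Python, assuming a toy panel of weekly revenue per region. The column names and numbers are made up, and a real analysis should also cluster standard errors by region.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: weekly revenue by region, before and after brand search was
# paused in the holdout regions. (Illustrative schema and numbers.)
df = pd.DataFrame({
    "revenue": [120, 118, 125, 90, 92, 95, 122, 121, 128, 70, 72, 74],
    "holdout": [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1],  # 1 = brand search turned off
    "post":    [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],  # 1 = after the pause began
})

# The coefficient on holdout:post is the difference-in-differences estimate:
# how much revenue changed in holdout regions, net of the change everywhere else.
model = smf.ols("revenue ~ holdout + post + holdout:post", data=df).fit()
print(model.summary().tables[1])
```

If the interaction coefficient is close to zero, pausing brand search cost you little revenue, suggesting those clicks would have come through organic search anyway; a large negative coefficient suggests the spend really was incremental.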
You can then compare the results of this experiment to what Recast is telling you about the incrementality of your brand search, and see whether the numbers align and point in the same direction, or whether there are big discrepancies.
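One simple way to formalize “do the numbers align” is to check whether the geo-test’s estimate falls inside the interval your MMM reports. All the numbers below are hypothetical.

```python
# Hypothetical outputs: the MMM's interval for incremental revenue from
# brand search vs. the geo-test's point estimate and confidence interval.
mmm_low, mmm_high = 15.0, 40.0                     # MMM: 90% credible interval
test_est, test_low, test_high = 23.0, 18.0, 28.0   # geo-test estimate and 90% CI

if mmm_low <= test_est <= mmm_high:
    print("Test estimate falls inside the MMM interval: evidence the model is calibrated.")
elif test_high < mmm_low or test_low > mmm_high:
    print("Intervals don't even overlap: investigate the model (or the test design).")
else:
    print("Partial overlap: directionally consistent, but worth monitoring.")
```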
Some final considerations:
While geo-testing as a way to test your MMM is undeniably a powerful tool, it’s no silver bullet. There are a few limitations:
- The gold standard of testing remains the individual-level randomized controlled trial. The issue with geo-tests is that they’re not conducted at that granular level, which means they can lack the sensitivity needed to detect smaller but still meaningful marketing effects.
- We also need to remember that every marketing experiment is a snapshot in time. A test run in February gives us key insights into how a channel performed during that time, but as we move further into the year, that test’s relevance goes down. Everything evolves – your business, the creatives you’re using, the marketing platforms, and even your competitors.
- Geo-holdout tests simply don’t work as well for some channels. Podcasts and influencers, for example, are difficult (impossible?) to test this way, since you can’t control where someone listens to them.
- If your MMM isn’t set up to integrate geo-testing, there is probably a problem of misaligned incentives where the modeler doesn’t want the model to be validated. This is something we value highly at Recast: we want to be tested and to make sure we’re not biasing our clients’ decision-making with bad data.
In general, geo-holdout tests are a great way to validate your MMM, and we strongly recommend that brands get accustomed to using them. Just remember their limitations so you can adjust your measurement approach around them.