We often encounter people who have worked with legacy MMM vendors for 15+ years and are shocked when we tell them that Recast is designed to refresh the MMM model every week. While some people see the obvious value in always having the most up-to-date data to work with, others want to know what the advantages are of updating the model weekly as opposed to quarterly (or even less frequently).
We see a few advantages that come with updating weekly:
- More useful:
  - Avoid surprises
  - Improve actionability
  - In-flight optimization
- More trustworthy:
  - Easier validation
  - Increased transparency
In this blog post we’ll discuss why weekly refreshes would be useful, even if you weren’t planning on making changes to your marketing budget every week. In addition to that, weekly model refreshes lead to a more trustworthy model, giving you more conviction to use it to make larger investment decisions. In the following sections we’ll talk about each of these advantages one-by-one.
Why Weekly Refreshes are More Useful
One of the problems with the once-quarterly MMM refresh cadence is that marketers can get surprised by the results at the end of the quarter. Marketers spend all quarter investing in the channels the MMM vendor said were most effective, and then all of a sudden the quarter ends, the company has missed its revenue targets, and the MMM vendor comes back and says, “oops, it turns out TV wasn’t actually as effective as we thought last quarter.”
Instead of thinking about “weekly” refreshes, one way you might think about it is “on-demand refreshes”. What if you could see the updated results of an MMM any time you wanted? Maybe you only check the results once a month during the summer, but you check it every day during the holiday season.
An MMM that’s refreshed more regularly allows marketers to check how they’re pacing against goals and avoids surprising updates at the end of the period.
When we were interviewing CMOs about their use of MMMs prior to starting Recast, one thing we heard consistently was that CMOs hate that they spend 6 months waiting on their MMM results and then another 6 months trying to explain why the MMM results don’t drive the promised returns when put into practice.
MMM results that are only delivered once a quarter and are three months out of date when delivered are inherently unactionable. The marketing world moves fast these days, and relying on data that’s three months stale means marketers are not going to feel comfortable actioning off of those data.
Weekly MMM results are inherently more actionable because they pick up the most recent changes in the business and in the broader world. If TV performance is improving due to new creative and a better targeting strategy, you want to be able to action off of that immediately, not wait four months to make a change, since by that time the CPMs on TV might have increased.
Many brands have critical times during the year when they’re running big marketing campaigns. This often involves a large multi-channel investment during a fixed period of time. It’s incredibly frustrating for marketers to run the whole campaign for 10 weeks and then wait another 4 weeks for the MMM results to come back to see how they did.
Instead, with weekly refreshes, marketers can 1) evaluate how the campaign is performing in real-time and 2) dynamically optimize the spend during the campaign. A weekly MMM could help you identify that radio spend is under-performing while TV spend is performing extremely well, so by week three of the campaign you can start reallocating budget from the original plan in order to double-down on TV.
When you’re in the middle of a big campaign, you want the real-time results so that you can make adjustments in-flight rather than having to wait months for your scorecard.
Why Weekly Refreshes are More Trustworthy
With a constantly-updating MMM, marketers can verify the results through use of the tool. If the model says that one channel is underperforming, marketers can cut spend to that channel for just a few weeks and see the results. If spend is cut to that channel and revenue doesn’t change much, the model is proved correct. If spend is cut and revenue drops a lot, the model is proved wrong, and marketers will see updated results that reflect that.
The idea of actually using the MMM interactively is a powerful new framework that is only possible when the model is updated sufficiently frequently for marketers to be able to see the feedback loop between their marketing decisions and the model results.
Sadly, lots of MMM vendors do not follow statistical best practices when trying to build a model for their customers. They use practices like data-dredging or p-hacking in order to “find” signal that is not really there in the data. They include lots of irrelevant variables in the model and over-fit the data in order to optimize for fit metrics like R-squared or MAPE. All of these techniques can make model results “look good” in a one-off meeting but they mean that the model results aren’t actually accurate and can cause businesses to waste huge amounts of money based on these misleading results.
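To see why optimizing for fit metrics rewards over-fitting, here is a toy illustration (not any vendor's actual methodology, and the data are pure noise by construction): adding irrelevant regressors to an ordinary least squares model mechanically pushes in-sample R-squared toward 1, even when there is no real signal at all.

```python
# Toy demonstration: in-sample R-squared rises as irrelevant
# variables are added, even though the outcome is pure noise.
import numpy as np

rng = np.random.default_rng(0)
n = 52  # one year of weekly observations
y = rng.normal(size=n)  # "revenue" with no real signal

def r_squared(X, y):
    # OLS fit, then the coefficient of determination.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

r2 = []
X = np.ones((n, 1))  # start with an intercept only
for _ in range(40):
    # Append one more irrelevant (random) "marketing channel".
    X = np.hstack([X, rng.normal(size=(n, 1))])
    r2.append(r_squared(X, y))

# Fit "improves" with every junk variable added.
print(f"R^2 with 1 noise variable:   {r2[0]:.2f}")
print(f"R^2 with 40 noise variables: {r2[-1]:.2f}")
```

The model with 40 noise variables reports a far better R-squared than the one-variable model, yet it has learned nothing that would hold up out of sample. This is why a model that must produce consistent results week after week is harder to game than one judged on a single fit metric in a one-off meeting.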
When the model updates every week, it’s very difficult to cheat on the modeling front simply because it would take too much work. When you have months to prepare for a one-off meeting, you can hide the details in the footnotes of certain slides or have an analyst use data-dredging techniques to get the results you want. If you have to produce new results every week, you simply won’t have time to do that, so you’re effectively forced to use honest modeling techniques that produce consistent results over time.
Transparency and verifiability are a powerful combination for ensuring that your MMM results are actually useful and more than just statistical smoke-and-mirrors.
Weekly refreshes of an MMM are incredibly beneficial. In addition to allowing for in-flight optimization, they make the MMM results much more actionable and verifiable and prevent a lot of statistical shenanigans that lead to bad and misleading results.