Through countless conversations in the media mix modeling space, we’ve seen the same misconceptions hold models back again and again. It’s not that we “disagree” with them philosophically – it’s that they’re ideas that we’ve seen collapse under real-world data and scrutiny.
In this article, we’ll break down the four MMM myths we encounter most often, and share how we approach solving them at Recast.
Myth #1: You can fix multicollinearity in MMM
TRUTH: collinearity reduces the total amount of signal in the model. There is no “fix” – just tradeoffs to be accepted.
Collinearity is one of the most fundamental statistical challenges in media mix modeling, and one of the most widely misunderstood.
At its core, when two channels move together – say, you scale TV and radio at the same time every quarter – the model loses its ability to distinguish which channel actually drove incremental results. The signal from the two channels overlaps, which reduces the total amount of signal available to the model.
There is no real ‘fix’ for this.
You can simplify the model and estimate the combined ROI of both channels, but then you’re giving up granularity. Or you can use automated variable selection techniques like ridge regression or LASSO – but then the model will arbitrarily assign credit to one variable over another, based not on incrementality but on whichever happens to improve the model fit.
The best MMMs expose collinearity, quantify its impact, and help decision-makers make smarter tradeoffs in the face of limited signal.
The one thing you can do, and what we recommend here at Recast when collinearity becomes problematic, is to inject variance into your spend across those channels. Place a bet on one of the channels and give the model more signal.
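To make the tradeoff concrete, here’s a minimal simulation (our own toy example with made-up numbers, not Recast’s model) of what happens when two channels always move together, and what changes when you vary one of them independently:

```python
# Illustrative simulation (toy example, not Recast's model): two channels with known true ROIs.
import numpy as np

rng = np.random.default_rng(0)
n = 104  # two years of weekly data
true_roi = np.array([2.0, 1.0])  # TV, radio

def fit_ols(tv, radio, noise_sd=50.0):
    """Simulate revenue from the true ROIs, fit OLS, return (estimates, standard errors)."""
    X = np.column_stack([np.ones(n), tv, radio])
    y = 1_000 + true_roi[0] * tv + true_roi[1] * radio + rng.normal(0, noise_sd, n)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    cov = np.linalg.inv(X.T @ X) * resid.var(ddof=3)
    return coef[1:], np.sqrt(np.diag(cov))[1:]

# Case 1: TV and radio are always scaled together -> nearly collinear spend.
base = rng.uniform(50, 150, n)
print("collinear:   ", fit_ols(base, 0.5 * base + rng.normal(0, 1, n)))

# Case 2: "place a bet" and vary one channel's spend independently.
print("decorrelated:", fit_ols(base, rng.uniform(25, 75, n)))
# The true ROIs are identical in both cases; only the second gives tight, stable estimates.
```

The point isn’t the specific numbers: with collinear spend, the standard errors on the channel ROIs blow up, and no estimation trick recovers signal that the spend pattern never generated.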
Myth #2: MMMs can measure long-term brand impact
TRUTH: most MMMs can’t measure long-term brand impact – and certainly not through an adstock parameter.
You spent millions of dollars on a brand awareness campaign last quarter – great! But did it work? That’s one of the most challenging questions in media mix modeling.
First, the data on awareness and consideration is noisy. Unlike sales data, which is often precise and high-frequency, brand metrics are typically based on surveys.
Even the best surveys carry a lot of error and are conducted weekly or monthly at most. That makes it very hard to reliably measure small changes in brand awareness driven by advertising.
Second, brand effects happen over extremely long timeframes. Connecting the dots from a TV campaign today to a change in sales driven by awareness years down the road is a nearly impossible task for MMM – or any measurement method, really.
So, what can MMM reliably measure?
At Recast, the modeling horizon is typically around 120 days; beyond that, uncertainty balloons. Rather than stretch the model further, we use context variables – like brand awareness and consideration – to track how shifts in perception influence marketing effectiveness in the medium term.
These contextual variables act as multipliers in the model, lifting or suppressing the performance of all advertising at once. They help us understand how changes in brand metrics influence outcomes like sales or new customer acquisitions.
But even then, we’re not claiming to measure the direct effect of a TV spot on sales a year later. We just focus on measuring the effect of awareness and consideration on sales, not the effect of specific channels on awareness and consideration.
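A stripped-down sketch of that idea, with made-up numbers and a deliberately simple functional form (not Recast’s actual specification): awareness enters as a multiplier on media-driven sales, rather than as a channel with its own long-run ROI.

```python
# Toy sketch (made-up numbers and functional form, not Recast's actual specification):
# a brand-awareness index scales the incremental effect of paid media on sales.
import numpy as np

def expected_sales(baseline, media_effect, awareness, sensitivity=0.5):
    """Awareness acts as a multiplier on media-driven sales, not as its own channel."""
    multiplier = 1.0 + sensitivity * (awareness - awareness.mean())
    return baseline + media_effect * multiplier

weeks = 8
baseline = np.full(weeks, 10_000.0)         # organic demand
media_effect = np.full(weeks, 2_000.0)      # incremental sales from paid media
awareness = np.linspace(0.40, 0.48, weeks)  # survey-based awareness, slowly rising

print(expected_sales(baseline, media_effect, awareness).round(0))
# The same media pressure yields a bit more revenue as awareness rises; that's the
# medium-term effect we can measure, not the link from a specific channel to awareness.
```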
Now, measuring how individual channels, like TV or out-of-home, impact awareness and consideration? That’s a much tougher problem.
While some claim to have solved it, the reality is that this is an area that needs a lot more research and development. We’d rather be transparent about what MMM can and can’t do.
Myth #3: MMMs can measure creative-level performance
TRUTH: the more finely you slice your variables, the less reliable your estimates become.
MMM is a top-down model. It runs on aggregate data, like national or DMA-level spend or impressions and revenue.
And for the model to estimate the relationship between a marketing activity and an outcome like sales or conversions, that input (whether it’s a channel or a creative) has to drive a real, statistically visible change in revenue that’s larger than the random noise in the system.
That is, if daily revenue randomly bounces around by a few thousand or even tens of thousands of dollars a day, then the MMM won’t be able to confidently measure the lift of any marketing activity whose effect is smaller than that noise.
With individual creatives, that kind of real, statistically visible change almost never happens.
Changing your CTA text or creative background probably doesn’t move revenue more than your business’s natural noise, so the model can’t isolate the impact.
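Here’s a rough back-of-the-envelope way to see why (a simplification with hypothetical numbers, not a formal power analysis):

```python
# Rough detectability check (hypothetical numbers, a simplification rather than a formal power analysis).
import math

daily_noise_sd = 20_000   # typical day-to-day random swing in revenue ($)
n_days = 90               # days the activity was live
effect_per_day = 1_500    # true incremental revenue per day we hope to detect

# Standard error of the average daily lift over the window, assuming independent noise.
se = daily_noise_sd / math.sqrt(n_days)
print(f"standard error ~ ${se:,.0f}/day, effect is {effect_per_day / se:.1f} noise units")
# A whole channel moving revenue by $15,000/day stands well clear of the noise;
# a new CTA adding $1,500/day is under one standard error and gets lost in it.
```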
Some vendors try workarounds:
- Dividing channel lift across creatives in proportion to spend. That’s fine, as long as we’re clear: it’s arithmetic, not modeling (see the sketch after this list).
- Making assumptions to fill in gaps. This can also be okay, if you’re on board with those assumptions.
But the model is still not estimating the creative-level effects.
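To be explicit about what that first workaround is, here’s the proportional-allocation arithmetic with hypothetical numbers:

```python
# The "creative-level" numbers some vendors report are just proportional allocation
# of the channel-level lift: arithmetic, not an estimated model (hypothetical numbers).
channel_lift = 120_000  # incremental revenue the MMM attributes to the whole channel

creative_spend = {"video_A": 60_000, "video_B": 30_000, "static_C": 10_000}
total_spend = sum(creative_spend.values())

creative_lift = {name: channel_lift * spend / total_spend
                 for name, spend in creative_spend.items()}
print(creative_lift)
# {'video_A': 72000.0, 'video_B': 36000.0, 'static_C': 12000.0}
# Every creative is assumed to have identical efficiency; the model never tested that.
```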
Myth #4: MMMs shouldn’t be used for prediction or forecasting
TRUTH: if your MMM can’t forecast the future, it shouldn’t support decision-making today.
The only reason to run an MMM is to help us decide how to adjust our marketing budgets in the future. Implicitly, in order to help us do that, the MMM must be able to predict what will happen in the future under different circumstances.
If you’re using your MMM to reallocate budget, then you are, by definition, making a forecast. You’re saying, “based on this model, I believe option A will outperform option B.” That’s a prediction.
To be clear: forecasting doesn’t mean “getting the future exactly right.” It means generating falsifiable hypotheses.
If the model says “cutting Meta by 20% will drop sales by 3%,” you can test that. You can observe what actually happens, and hold the model accountable.
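In practice, holding the model accountable can be as simple as writing down the forecast and comparing it to what actually happened. A sketch with hypothetical numbers (not Recast’s validation tooling):

```python
# Holding the model accountable for a specific, falsifiable forecast (hypothetical numbers).
forecast = {
    "scenario": "cut Meta spend by 20% for 4 weeks",
    "predicted_sales_change": -0.03,   # model says sales drop ~3%
    "interval": (-0.06, -0.01),        # model's uncertainty range
}

baseline_sales = 1_000_000
observed_sales = 955_000
observed_change = observed_sales / baseline_sales - 1  # -4.5%

lo, hi = forecast["interval"]
within_interval = lo <= observed_change <= hi
print(f"observed change: {observed_change:+.1%}, inside forecast interval: {within_interval}")
# Repeat this across many decisions; a model that keeps missing its own
# forecasts shouldn't be trusted to guide the next reallocation.
```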
If an MMM can’t consistently forecast changes in outcomes based on changes in input, it can’t support the kinds of decisions marketing leaders actually need to make – and it becomes just an expensive recap of what already happened.
So where does that leave us?
MMM can be a powerful decision-making tool, but only if it’s grounded in reality. At Recast, we’d rather be honest about what the model can and can’t do than sell a black box that fails you.
So if you’re considering different MMM vendors, we’d recommend asking them:
- How do you handle collinearity when channels move together?
- How far out can your model reliably measure brand effects?
- Can you measure creative-level impact – and if so, how?
- How accurate are your forecasts when tested against real results?
If they can’t give you clear, data-backed answers, that’s your red flag.