Most media mix models can’t be proven wrong. That’s a problem. You’re supposedly using your model to forecast and allocate millions of dollars in marketing spend… and you can’t tell whether it’s right or wrong?
The issue is that most models don’t make testable claims – and that needs to change if you want to trust their insights.
In this piece, we unpack a principle from the scientific method that marketing desperately needs to borrow: falsifiability. We’ll cover why it matters, where most models fall short, and how to spot whether your vendor is giving you insights you can actually verify.
What Falsifiability Is – And Why Traditional MMM Outputs Often Aren’t Falsifiable
At its core, falsifiability is what makes a claim worth taking seriously. If a statement can’t, even in principle, be proven wrong, it’s just a story and you can’t trust it.
So what does falsifiability look like? Say you hypothesize that “cutting Meta spend by 30% will reduce revenue by $150K” – that’s falsifiable. You can make the change, look at what happens, and see if the forecast held up.
But “Meta’s true ROI is 5.7x” – that’s not falsifiable. You can debate it. You can run experiments that suggest a range, but no one can ever confirm with 100% certainty what the true ROI is. The true incremental ROI of any campaign is unknown and unknowable.
Unfortunately, most MMMs aren’t built to be tested either. A common example: your vendor shows you a graph and says, “Meta drove 40% of last month’s revenue.” That might sound rigorous – maybe it even comes with error bars or a confidence interval – but again, it’s not falsifiable.
You can’t go back in time and re-run last month without Meta in the mix. There’s no counterfactual, no empirical way to know whether the claim is true.
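The spend-cut hypothesis from earlier is different: you make the change, wait, and compare the observed revenue change to the prediction. Here’s a minimal sketch of that check in Python – every figure (and the tolerance band) is invented for illustration, and a real test would also have to account for seasonality and other confounders:

```python
# All figures here are hypothetical, for illustration only.
predicted_drop = 150_000      # the claim: cutting Meta spend 30% costs $150K in revenue
tolerance = 50_000            # error band agreed on *before* running the test

baseline_revenue = 2_400_000  # revenue in the pre-cut period
observed_revenue = 2_290_000  # revenue after the 30% cut
observed_drop = baseline_revenue - observed_revenue

# The claim survives only if the observed drop lands inside the agreed band.
if abs(observed_drop - predicted_drop) <= tolerance:
    print(f"Observed drop of ${observed_drop:,} is within ${tolerance:,} of the prediction: claim survives.")
else:
    print(f"Observed drop of ${observed_drop:,} misses the predicted ${predicted_drop:,}: claim falsified.")
```

The mechanics are trivial; what matters is that the prediction and the tolerance were agreed before the spend change, so the outcome can genuinely prove the model wrong.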
This lack of a testable counterfactual is the central flaw in how many MMMs are built and sold. They produce narratives about the past, but marketers have no ground truth to check those narratives against. And when claims can’t be tested, vendors face no consequences for being wrong – the deeper problem is incentive alignment.
You’re left having to trust their credentials or how beautiful their slides are, instead of evidence about what’s actually going to drive your business forward.
Why Falsifiability Builds Real Trust (While Storytelling Breaks It)
One thing we’re clear about at Recast is that we will never ask our clients to trust our “expertise” or our “credentials.” Please, don’t!
Trust is built on evidence. And the only way to get that evidence is by making falsifiable predictions that reality can either confirm or reject.
When a Recast model predicts $3.2M in revenue next month, and the actuals come in at $2.1M? We missed. There’s no spinning it. Something clearly was not right about that model. We’re going to dig in, fix it, and make sure the model does better on backtests and keeps improving going forward.
We think this creates a fundamentally different power dynamic. It’s not “we’re the experts, trust us” – it’s “here’s what we predict, let’s go verify it together.” Falsifiability doesn’t just improve model quality – it also aligns incentives and creates accountability.
There are no excuses. No grading your own homework. No stories or narratives. The scorecard is public. The forecasts are specific. And the data judges the model. We don’t need to ask for trust. We will fight to earn it – week after week, forecast after forecast.
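In practice, that scorecard can be as simple as a log of forecasts recorded before each period starts, paired with actuals once the period closes. A rough sketch of the bookkeeping (the figures are invented for illustration):

```python
import pandas as pd

# Hypothetical scorecard: each forecast is logged before the month begins,
# then paired with the actual revenue once the month closes.
scorecard = pd.DataFrame({
    "month":    ["2024-01", "2024-02", "2024-03"],
    "forecast": [3_200_000, 2_800_000, 3_000_000],
    "actual":   [2_100_000, 2_750_000, 3_100_000],
})

# Absolute percentage error per forecast, plus the average across all of them.
scorecard["abs_pct_error"] = (scorecard["forecast"] - scorecard["actual"]).abs() / scorecard["actual"]
print(scorecard)
print(f"Mean absolute percentage error: {scorecard['abs_pct_error'].mean():.1%}")
```

The specific error metric matters less than the discipline: each forecast is committed before the period starts, so there’s nothing to revise after the fact.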
What This Means for Marketers: A Practical Diagnostic
If you’re a senior marketer relying on media mix modeling to guide spend, here’s the test: Is your model making claims that can be proven wrong? To pressure-test your current setup, ask your vendor:
- “What predictions is your model making that we can verify in the next 30 days?”
- “How do you validate model accuracy before deployment?”
- “When was the last time your model was wrong? What changed as a result?”
Pay close attention to how they answer. Do they get specific? Or do they retreat?
Red flags: Emphasis on model complexity. ROI estimates with no uncertainty range. A focus on in-sample fit instead of out-of-sample forecasting. If they’re avoiding forecasts altogether, that’s a sign they don’t want their model judged by real-world outcomes.
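The in-sample point is worth making concrete. The toy example below uses synthetic data and a plain regression – not any particular vendor’s MMM – to show why in-sample fit is a weak credential: with more “channels” than the data can support, the fit on the weeks the model has seen looks excellent, while the score on held-out weeks falls apart.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy weekly data: revenue is driven by two real channels, plus 60 columns of
# pure noise that give the regression plenty of room to overfit.
n_weeks = 104
real = rng.uniform(10_000, 100_000, size=(n_weeks, 2))
noise = rng.normal(size=(n_weeks, 60))
X = np.hstack([real, noise])
y = 1.8 * real[:, 0] + 0.6 * real[:, 1] + rng.normal(0, 60_000, size=n_weeks)

# Fit on the first 80 weeks; hold out the last 24 as a true forecast period.
train, test = slice(0, 80), slice(80, n_weeks)
model = LinearRegression().fit(X[train], y[train])

# Same model, two very different numbers.
print("In-sample R^2 (weeks the model saw):", round(model.score(X[train], y[train]), 3))
print("Out-of-sample R^2 (held-out weeks): ", round(model.score(X[test], y[test]), 3))
```

That gap is exactly what out-of-sample backtesting is designed to expose – and why a beautiful in-sample fit, on its own, tells you very little about whether the model can forecast.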
TLDR:
- Most MMMs fail a basic test of scientific rigor: their claims can’t be proven right or wrong.
- Explanations about past performance sound precise, but aren’t testable – forecasts are the only way to validate a model.
- Recast builds falsifiability into its process through forward-looking predictions, backtests, and targeted experiments.
- If your vendor isn’t making falsifiable predictions, you’re buying a story.



