Why Forecast Accuracy Matters More Than Attribution Accuracy in MMM 

There are two types of “accuracy” in media mix modeling, and it’s easy to confuse them – a confusion that leaves many marketing teams without an aligned measurement program.

The first is attribution accuracy: how the model assigns credit across channels in historical data. This is the world of decompositions and contribution charts, and it gives you the model’s best explanation of “what happened.” 

It’s where you get statements like: “Facebook’s true incremental impact is 5.7x.” You can argue that’s wrong. Someone else can defend it. Entire teams can debate methodology, time windows, and assumptions – and still walk away without a decision.

The second is forecast accuracy: how well the model predicts future outcomes given specific spend inputs. This is dollars in (ground truth) and revenue out (ground truth). 

It’s the question executives actually need answered in planning: If we spend this way, what happens next? Can we generate $50M in revenue next quarter? Are we going to hit our revenue goals with this budget and mix?

These two outputs support very different decisions: 

Attribution can help you form hypotheses about where you might be over-invested, what’s saturating, and what’s showing diminishing returns. Those hypotheses can be useful, but they’re also easy to over-trust because the numbers feel precise.

Forecasting is what turns MMM into a planning tool: “What’s the most efficient budget that will hit our revenue goals next quarter?” “How do we allocate dollars across channels to acquire the customers we need?” “Are we on track or do we need to course-correct?”
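
To make that planning question concrete, here’s a minimal sketch – not Recast’s actual optimizer – of allocating a budget against diminishing-returns curves. The channel names, curve shapes, and parameters are hypothetical, and the greedy loop simply puts each incremental dollar wherever the assumed marginal return is highest.

```python
import numpy as np

# Hypothetical fitted response curves: revenue(spend) = beta * log1p(spend / scale).
CHANNELS = {
    "search": {"beta": 90_000, "scale": 40_000},
    "social": {"beta": 70_000, "scale": 60_000},
    "video":  {"beta": 50_000, "scale": 80_000},
}

def revenue(spend: float, beta: float, scale: float) -> float:
    """Diminishing-returns curve: each extra dollar drives less incremental revenue."""
    return beta * np.log1p(spend / scale)

def allocate(total_budget: float, step: float = 1_000.0) -> dict:
    """Greedy allocation: spend each increment where the marginal return is highest."""
    spend = {name: 0.0 for name in CHANNELS}
    for _ in range(int(total_budget // step)):
        marginal = {
            name: revenue(spend[name] + step, **params) - revenue(spend[name], **params)
            for name, params in CHANNELS.items()
        }
        best = max(marginal, key=marginal.get)
        spend[best] += step
    return spend

plan = allocate(total_budget=500_000)
forecast = sum(revenue(s, **CHANNELS[name]) for name, s in plan.items())
print(plan)
print(f"Forecasted revenue: ${forecast:,.0f}")
```

A plan like this is only as trustworthy as the model’s ability to forecast what that spend mix will actually produce – which is exactly what forecast accuracy measures.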

The problem comes when you use attribution accuracy to make forward-looking decisions. If you’re trying to set next quarter’s budget, you can’t settle that by debating attribution – you settle it through forecasting.

Falsifiability: the difference between debatable answers and provable ones

One problem in marketing science is that people make claims that can’t be proven or disproven. They sound rigorous, but they’re structured in a way that makes it impossible to hold them accountable. You can always debate methodology, experiment design, and time windows. You can always say the test didn’t run long enough, didn’t capture long-term effects, or didn’t control for the right thing. Claims like that don’t help anyone.

This is the opposite of how science works. Science progresses through falsifiability – making claims that can be proven true or false, so someone can test your hypothesis, disprove it, and put out a new one. That’s how you get a flywheel: hypothesis → test → update → better hypothesis.

And that’s why forecasting matters so much in MMM.

Forecasting is falsifiable. If you say, “We’ll generate $50M in revenue next quarter,” there will be a realized number that proves that claim true or false. You can evaluate very precisely whether the forecast was accurate or not. It forces a feedback loop. Predicted vs. realized. Over-predicting vs. under-predicting. Too confident vs. too wide. And it creates real accountability for the system you’re trusting to guide million-dollar decisions.

If the result can’t be falsified with ground truth, it’s not a reliable basis for reallocating real budgets.

What “forecast accuracy” actually measures (and why calibration matters)

A good forecast has to do two things at once: get close to reality and be appropriately confident – not too wide, not too narrow. Otherwise, you end up with the worst of both worlds: a forecast that feels precise enough to act on, but isn’t stable enough to trust.
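
As a toy illustration of “appropriately confident” (made-up numbers, not client data): if a model’s 80% forecast intervals are well calibrated, roughly 80% of realized outcomes should land inside them. Far more than that means the intervals are too wide; far fewer means the model is overconfident.

```python
import numpy as np

# Hypothetical weekly 80% forecast intervals and the revenue actually realized.
lower    = np.array([4.1, 3.8, 5.0, 4.6, 4.9, 5.2, 4.4, 4.7]) * 1e6
upper    = np.array([5.3, 5.1, 6.4, 5.9, 6.2, 6.6, 5.7, 6.0]) * 1e6
realized = np.array([4.8, 5.2, 5.9, 5.1, 6.4, 5.8, 4.9, 5.5]) * 1e6

coverage = np.mean((realized >= lower) & (realized <= upper))
print(f"80% interval coverage: {coverage:.0%}")  # want this near 80%, not 100% and not 40%
```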

This is why we operationalize forecast accuracy as a continuous validation process:

Every week, we take versions of Recast models from 30, 60, or 90 days ago – before they saw recent data – and ask them to make a forecast: given some amount of marketing spend per channel, how much of a KPI (revenue, acquisitions, app downloads) will be driven? Then we compare those forecasts to what clients actually realized over those time horizons.
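
Schematically, that backtest looks something like the sketch below; load_model_snapshot and forecast_kpi are hypothetical stand-ins rather than a real Recast API.

```python
from datetime import date, timedelta

HORIZON_DAYS = [30, 60, 90]

def load_model_snapshot(as_of: date):
    """Stub: would return the model version fit only on data available by `as_of`."""
    ...

def forecast_kpi(model, spend_per_channel: dict):
    """Stub: would return the model's predicted KPI given the planned spend per channel."""
    ...

def backtest(today: date, spend_plans: dict, realized: dict) -> list:
    """Line up what each frozen model predicted against what was actually realized."""
    results = []
    for horizon in HORIZON_DAYS:
        cutoff = today - timedelta(days=horizon)
        model = load_model_snapshot(as_of=cutoff)  # this model never saw data after `cutoff`
        results.append({
            "horizon_days": horizon,
            "predicted": forecast_kpi(model, spend_plans[horizon]),  # forecast from planned spend alone
            "actual": realized[horizon],                             # ground-truth KPI over the window
        })
    return results
```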

We score this with CRPS (Continuous Ranked Probability Score), which captures both accuracy and uncertainty. Did the forecast get close to reality? And was it appropriately confident?
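
For a sample-based forecast (e.g., posterior draws), the empirical CRPS is the average distance between forecast draws and the realized value, minus half the average distance between pairs of draws – a minimal sketch, assuming the forecast is represented as samples:

```python
import numpy as np

def crps_from_samples(forecast_samples, realized: float) -> float:
    """Empirical CRPS = E|X - y| - 0.5 * E|X - X'|; lower is better."""
    x = np.asarray(forecast_samples, dtype=float)
    accuracy_term = np.mean(np.abs(x - realized))                  # how far the forecast is from reality
    spread_term = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))   # adjustment for the forecast's own spread
    return accuracy_term - spread_term

# Toy example: a forecast distribution centered at $48M vs. realized revenue of $50M.
rng = np.random.default_rng(42)
samples = rng.normal(loc=48e6, scale=3e6, size=1000)
print(f"CRPS: {crps_from_samples(samples, realized=50e6):,.0f}")   # same units as the KPI
```

A tight forecast centered on the realized value scores near zero; being far off or needlessly wide pushes the score up.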

One nuance that matters a lot is leakage control. We intentionally exclude variables that can hint at performance but that the model couldn’t have known in advance (e.g., branded search spend). If you let the model peek at signals you won’t have when you’re making decisions, you’re inflating its apparent accuracy.
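
A minimal sketch of that rule – the column names are hypothetical, but the idea is simply to drop anything the model couldn’t have known at planning time:

```python
# Variables that hint at the outcome but wouldn't be known when the plan is made.
OUTCOME_CORRELATED_LEAKS = {"branded_search_spend", "site_traffic", "discount_redemptions"}

def forecasting_features(all_columns):
    """Keep only inputs available before the forecast window starts."""
    return [c for c in all_columns if c not in OUTCOME_CORRELATED_LEAKS]

print(forecasting_features(
    ["tv_spend", "paid_social_spend", "branded_search_spend", "site_traffic", "promo_calendar"]
))
# -> ['tv_spend', 'paid_social_spend', 'promo_calendar']
```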

A balanced measurement system: where attribution still helps, and how forecasting keeps you honest

Attribution still has a job. It’s a useful way to form hypotheses: where might we be misallocating, what looks saturated, what’s plausibly under-funded. But it’s rarely the final decision maker. A mature MMM program ends up with a simple hierarchy:

  • Attribution informs questions (where to look, what to pressure-test).
  • Forecasting validates usefulness (does the system reliably translate spend plans into outcomes).
  • Experiments/interventions adjudicate the biggest bets (the places where you can’t afford to be wrong, or where attribution can’t settle the argument).

TLDR:

  • Attribution “accuracy” is easy to debate; forecast accuracy is falsifiable – dollars in and revenue out are ground truth, so you can prove the model right or wrong.
  • If your MMM can’t forecast next month’s performance on unseen data, it’s not trustworthy for today’s budget moves.
  • A good forecast is appropriately confident, with uncertainty that’s not too wide or too narrow.
  • Use a hierarchy that keeps teams honest: attribution generates hypotheses, forecasting validates usefulness, and experiments/interventions settle the biggest bets.