The MMM Worked. So Why Didn’t Anything Change?

Media mix modeling is a waste of money unless you do something with it. A great model sitting on a shelf is a model that has failed. And this happens too often – the leadership team signs the contract, the analysts get the recommendations, they share them in a meeting, and… nothing happens. 

Budgets don’t change. Channels stay untouched. No experiments are run. Everyone debates the model, but no one owns the decision – and all you get are pretty (but useless) graphs and charts. A model can be perfect on paper and irrelevant in practice.

When it’s well done, MMM is a decision system more than it is a product. The output only matters if it changes what you do next: budget, mix, timing, testing cadence. It should come with a monitoring loop where you can see whether actuals are pacing against forecasts and where you’re deviating from what you expected. And it needs a shared language for uncertainty, so decisions can happen even without certainty.

This article will share how the sharpest teams we work with leverage MMM to its full capacity, and the changes – cultural, operational, and tactical – that it requires.

Reframe MMM as forecasting infrastructure

The way most teams talk about MMM is backward-looking by default. It’s a post-mortem, a scorecard that gives you a narrative for the last quarter, and all it really answers is: “What happened?”

But the companies that get leverage from MMM treat it as forecasting infrastructure. The center of gravity shifts to: “What are we going to do, what do we expect, and how will we know if we’re wrong?” 

In practice, that means the model has to answer real planning questions, which are the ones that create tension between marketing and finance:

  • “What’s the most efficient budget that will hit our revenue goals next quarter?”
  • “How do we allocate dollars across channels to acquire the customers we need?”
  • “Are we on track to hit that plan, or do we need to course-correct?”

Those questions are hard because they force commitment. Someone has to put a number on the table, own the assumptions, and accept that outcomes will deviate. The best teams we work with have both marketing and finance using the model in the same meeting. MMM becomes the connective tissue between them so they can plan, act, and evaluate together. Marketing owns the plan. Finance signs off on risk. 

Senior teams don’t need more dashboards. They need a usable forecasting workflow: a plan, set goals, explicit uncertainty, defined budget ranges, and progress monitored in real time, so they can see how actuals are pacing against forecasts and where results are deviating from expectations.
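As a sketch of what that monitoring loop can look like in its simplest form: compare each week’s actuals against the forecast interval the model committed to, and flag deviations. The numbers and function below are purely illustrative, not output from any real model.

```python
def pacing_report(forecast, actuals):
    """Compare weekly actuals against forecast intervals.

    forecast: list of (low, mid, high) revenue bounds per week
    actuals:  observed revenue per week (may be shorter mid-quarter;
              zip() simply stops at the last observed week)
    Returns a list of (week, status), where status is
    'below', 'on_track', or 'above'.
    """
    report = []
    for week, (low, mid, high), actual in zip(
        range(1, len(actuals) + 1), forecast, actuals
    ):
        if actual < low:
            status = "below"
        elif actual > high:
            status = "above"
        else:
            status = "on_track"
        report.append((week, status))
    return report

# Illustrative numbers only: a 4-week forecast with interval bounds,
# three weeks of actuals observed so far.
forecast = [(90, 100, 110), (95, 105, 115), (100, 110, 120), (105, 115, 125)]
actuals = [102, 93, 99]
print(pacing_report(forecast, actuals))
# → [(1, 'on_track'), (2, 'below'), (3, 'below')]
```

The point is not the code but the contract it encodes: the model publishes ranges up front, and every week either confirms the plan or triggers a course-correction conversation.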

“Every plan is a series of bets”: how to build a culture that can act under uncertainty

Every marketing plan is a series of bets. Not just the experiments. Not just the new channels. The whole thing. A marketing bet is putting money on the line for an uncertain outcome. A good MMM makes that risk visible, but that means the organization has to get comfortable making decisions when the answer is a range.

This is simple to say and hard to operate with: you will not find certainty in your mix. When we look at Recast models, we never see a channel with zero uncertainty. Even the most established platforms still have variance. So the goal isn’t to be right every time you move money. The goal is to learn in a way that you win more in aggregate.

And when the results aren’t positive, the test can’t become a career-limiting event. The best teams pre-wire this. We had a Coffee Break episode with René Tingskov, where he shared how his company manages this: “every year, we set aside a percentage of our budget that goes into testing only. This is not something we count on to make business right away. But it helps us make sure we have the right setup at all times.”

When a company gets this framing right, they test more often and get more wins – and they normalize the whole process of experimentation. 

Trust is earned via external validation: how to prove your MMM can guide forward decisions

To be very clear, when we say you should make decisions under uncertainty, we’re not saying you should trust your MMM blindly. You can build thousands of different MMMs where all of them fit your data perfectly, and they’ll still give you 1,000 different answers about what works and what to do next. That’s why measures of in-sample fit like R-squared or MAPE simply aren’t useful for determining if a media mix model is ‘good’ or not.

Before you start following forward-looking recommendations, you need proof that following the model will generally improve outcomes – and you get that through external validation. We run the following tests:

  • Out-of-sample forecasting. Hold out outcome data from future weeks and ask if the model can accurately predict what happens to the business. Treat this as a standing health check and run it continuously.
  • Backtesting decisions. Simulate what would’ve happened if you’d followed past recommendations. Those recommendations are the model’s forward-looking outputs: the channel allocations, budget shifts, and spend changes it suggested at a given point in time. The question you’re asking is simple – if we had actually moved money the way the model told us to, would we have come out ahead?
  • Live intervention tests. Change your budget based on model guidance and see if the model remains stable and accurate. This creates a falsifiable hypothesis that can be proved or disproved in the real world.
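To make the out-of-sample check concrete: unlike in-sample fit, error measured on weeks the model never saw is a meaningful health signal. A minimal sketch, with hypothetical forecast and actual numbers (the `holdout_accuracy` helper is illustrative, not part of any real tooling):

```python
def holdout_accuracy(predicted, actual):
    """Out-of-sample check: mean absolute percentage error (MAPE)
    on held-out future weeks the model never trained on."""
    errors = [abs(p - a) / a for p, a in zip(predicted, actual)]
    return sum(errors) / len(errors)

# Hypothetical numbers: model forecasts for four held-out weeks
# versus what the business actually did.
predicted = [100, 108, 95, 120]
actual = [104, 100, 98, 115]
mape = holdout_accuracy(predicted, actual)
print(f"holdout MAPE: {mape:.1%}")
# → holdout MAPE: 4.8%
```

Run as a standing check, this turns “is the model good?” into a trackable number: if holdout error starts drifting, you stop trusting forward recommendations until the model is refit and re-validated.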

TLDR:

  • MMM doesn’t create value by being “right.” It creates value only when it forces a concrete budget decision (with ranges), a forecast, and a loop to course-correct.
  • Every marketing plan is a series of bets; the winning orgs don’t seek certainty, they institutionalize learning with a dedicated test budget and executives who can absorb “non-positive” results.
  • Trust has to be earned via external validation (holdouts, backtests, and live interventions).
  • The unlock is an operating mode: marketing owns the plan, finance owns the risk, analytics owns integrity.