Why Freezing Coefficients in an MMM Is a Bad Practice

If you’re freezing coefficients in your media mix model, your model is broken.

It’s one of the most common, and damaging, shortcuts in MMM. Something looks off in your results, so instead of investigating the cause, someone manually locks in a coefficient they “trust” and reruns the model until things look more reasonable.
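To make the mechanics concrete, here's a minimal sketch of what "freezing" a coefficient amounts to. It uses plain Python with NumPy, toy data, and made-up channel names (not anyone's production MMM): pin one channel's effect at a value you "trust," subtract it out of the target, and re-fit everything else around it.

```python
import numpy as np

# Toy weekly data for two hypothetical channels plus revenue.
rng = np.random.default_rng(0)
n_weeks = 104
tv = rng.gamma(5, 10, n_weeks)
search = 0.6 * tv + rng.gamma(5, 5, n_weeks)   # correlated with TV
revenue = 2.0 * tv + 3.0 * search + rng.normal(0, 20, n_weeks)
X = np.column_stack([tv, search])

# Honest approach: let the model estimate both coefficients from the data.
beta_free, *_ = np.linalg.lstsq(X, revenue, rcond=None)

# "Frozen" approach: the analyst insists TV's coefficient is 4.0,
# subtracts that fixed contribution, and re-fits only the search channel.
beta_tv_frozen = 4.0
residual = revenue - beta_tv_frozen * tv
beta_search_refit, *_ = np.linalg.lstsq(search.reshape(-1, 1), residual, rcond=None)

print("freely estimated coefficients:", beta_free)   # close to the true (2.0, 3.0)
print("frozen TV coefficient:", beta_tv_frozen)
print("search coefficient after freezing TV:", beta_search_refit[0])
```

Because the channels are correlated, pinning TV's coefficient too high drags the search coefficient well below its true value. The numbers are toy, but the mechanism is the point: the error you lock in doesn't disappear, it just gets pushed into the channels you're still estimating.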

But the whole point of having a model is to let it tell you the results, not the other way around.

In this article, we’ll walk through:

  • Exactly why freezing coefficients undermines the integrity of your MMM.
  • How this practice introduces hidden bias.
  • What you should be doing instead to build stability and trust in your model results.

Frozen Coefficients Are Lies

No one—not your vendor, not your data science team, not even Recast—should be arbitrarily shaping the results of your model.

And yet, we keep hearing about vendors manually overwriting the results of a statistical model just to make it align with a previous version. Or to avoid telling stakeholders that the new results don’t match last quarter’s narrative.

That’s not helpful. That’s lying. Full stop.

Sure, this usually isn’t done maliciously. It’s framed as “helpful.” But freezing coefficients leads to misleading results and breaks your model’s integrity. This will blow up in your face later when you’re making multi-million-dollar decisions with bad statistics.

The bigger problem here is incentives. If your vendor knows you don’t fully understand what’s happening under the hood of your model, they can quietly mess with the results. To smooth out a spike, or to avoid explaining why the results are changing. 

And with a complex enough model, you can make it say just about anything. But that’s not measurement. That’s manipulation.

What You Actually Want: Transparency and Stability

Marketers aren’t wrong to be spooked when MMM results swing wildly. A sudden change in ROI is definitely a red flag. Shifting model forecasts are hard to plan around.

But freezing the coefficients is a cover-up, not a fix.

What you should rely on instead are transparency and model stability. These are how you build trust in your MMM without manipulating it.

At Recast, we’ve built extreme levels of transparency into the platform so that every stakeholder can see the modeling assumptions, predictive performance, and weekly shifts. That way, no one has an incentive to “futz with the results,” because any futzing would be visible and obvious.

Each week, Recast runs a series of backtests that simulate what the model would have predicted if it had only been trained on the data available 5 or 6 weeks ago. Then we measure how much the results change from week to week.

For each channel, we examine the overlap between the uncertainty intervals from those runs. High overlap means the model is resilient to small changes in the data, i.e., it’s trustworthy. The process also weights each channel by its relative impact.
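As a rough illustration (not Recast's actual implementation; the channel names, intervals, and weights below are hypothetical), here's how an interval-overlap stability check could look: compare each channel's uncertainty interval from the current run against the interval from a backtest trained on data truncated several weeks earlier, compute how much the two intervals overlap, and average the overlaps weighted by each channel's share of impact.

```python
from typing import Dict, Tuple

# Hypothetical per-channel ROI uncertainty intervals (low, high) from two runs:
# the current model and a backtest whose training data was cut off six weeks earlier.
current = {"tv": (1.8, 2.6), "search": (2.9, 3.4), "audio": (0.4, 1.1)}
backtest = {"tv": (1.7, 2.5), "search": (2.4, 3.1), "audio": (0.2, 0.7)}

# Hypothetical share of total incremental impact per channel, used to weight
# the stability score toward the channels that actually move the business.
impact_share = {"tv": 0.5, "search": 0.4, "audio": 0.1}


def interval_overlap(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Fraction of the narrower interval covered by the intersection of the two."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    intersection = max(0.0, hi - lo)
    narrower = min(a[1] - a[0], b[1] - b[0])
    return intersection / narrower if narrower > 0 else 0.0


def stability_score(run_a: Dict[str, Tuple[float, float]],
                    run_b: Dict[str, Tuple[float, float]],
                    weights: Dict[str, float]) -> float:
    """Impact-weighted average of per-channel interval overlaps (1.0 = identical)."""
    total = sum(weights.values())
    return sum(weights[ch] * interval_overlap(run_a[ch], run_b[ch])
               for ch in run_a) / total


for ch in current:
    print(f"{ch}: overlap = {interval_overlap(current[ch], backtest[ch]):.2f}")
print(f"weighted stability score: {stability_score(current, backtest, impact_share):.2f}")
```

A score near 1 means the intervals barely moved when the most recent weeks of data were withheld; a low score flags a channel, or the model as a whole, that needs investigating before the refresh ships.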

The important point here is that model stability is tested each week, and if underlying changes to the MMM are needed before subsequent refreshes, they’re done with full transparency.

If you suspect your model’s being “messed with” by your vendor, ask them this: “Is there any pre- or post-processing happening in the pipeline that influences our model results?”

If their answer isn’t clear, you’ve got a problem.

If You’re Freezing Coefficients, Why Bother Modeling?

Your MMM should be a source of truth—not a narrative support tool.

If you’re hard-coding answers, you’re not doing measurement. You’re crafting fiction.

Stability doesn’t come from freezing your model in place. It comes from rigorous model-building and validation, and from full transparency into those processes.

That’s how you build trust in your MMM without needing to manipulate it.
