Don’t Get Misled: How Vendor “Refreshes” Hide Outdated MMMs

One of the most overloaded concepts in media mix modeling right now is “model refresh.”

Most senior marketers hear that word and reasonably assume it means the model has been updated to reflect reality. New data went in. Old assumptions were challenged. Channel performance was re‑estimated.

That’s what a refresh should mean. But in practice, many vendors mean something very different. This article covers the different “definitions” of a model refresh, why they matter in practice, how they affect your MMM, and the questions you should ask your vendors to make sure your model is built with rigor.

What a Real MMM Refresh Is (And What It Isn’t)

Here’s the distinction that actually matters – and the two versions are contrasted in a short code sketch after the lists below. What a real MMM refresh is (what we do at Recast):

  • The model is fully retrained.
  • All historical data is shown to the model again.
  • Every parameter is re‑estimated from scratch.
  • The model has no memory of prior runs.
  • If performance shifts, the model is allowed to learn from that signal.
  • If historical data is corrected, the model learns from the corrected data – not the old, wrong version.

What we’re seeing many vendors call a “refresh”:

  • The model is trained once.
  • New weeks of data are stacked onto the end.
  • Parameters are frozen from six, twelve, or even twenty‑four months ago.
  • Outputs are recalculated using those old assumptions.
  • If performance changes, the model does not learn.
  • If data is corrected, the model does not learn.
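
To make the contrast concrete, here’s a minimal, hypothetical sketch in Python. It uses a toy linear model rather than a real Bayesian MMM (no adstock, saturation, or priors), and every name and number in it is illustrative – but it shows why re-estimating from scratch picks up a performance shift that frozen coefficients never see.

```python
import numpy as np

def full_retrain(spend_history, kpi_history):
    """A real refresh: every coefficient is re-estimated from scratch
    on the complete (and possibly corrected) history."""
    coefs, *_ = np.linalg.lstsq(spend_history, kpi_history, rcond=None)
    return coefs

def frozen_refresh(old_coefs, new_spend):
    """The 'refresh' many vendors ship: coefficients estimated months ago
    stay exactly as they were; only outputs are recomputed for the new
    weeks. Nothing is learned from them."""
    return new_spend @ old_coefs

# Toy data: two years of weekly spend across three channels and a KPI.
rng = np.random.default_rng(0)
spend = rng.uniform(10, 100, size=(104, 3))
true_roi = np.array([2.0, 0.5, 1.2])
kpi = spend @ true_roi + rng.normal(0, 5, size=104)

# Channel 0's performance collapses in the most recent quarter.
kpi[-13:] -= spend[-13:, 0] * 1.5

frozen = full_retrain(spend[:-13], kpi[:-13])  # estimated before the shift, then never touched again
retrained = full_retrain(spend, kpi)           # re-estimated on all 104 weeks

print("frozen ROI estimates:   ", frozen.round(2))     # still believes channel 0 returns ~2.0
print("retrained ROI estimates:", retrained.round(2))  # channel 0's estimate falls with the decline

stale_forecast = frozen_refresh(frozen, spend[-13:])   # new weeks scored with year-old assumptions
```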

Many vendors are intentionally vague about which of these they’re doing. They say “refresh,” knowing most marketers will assume a full retrain. We think this is a horrible and dangerous practice, but why should you care about this at all?

Why This Matters to Your Marketing Budget

This matters because the core job of an MMM is to learn the relative effectiveness of each channel. If the model can’t relearn that as conditions change, it will move further away from the truth while still producing very confident-looking dashboards. 

And we all know that marketing performance changes constantly: channels decay, creative fatigue sets in, pricing shifts, new products launch… If your MMM can’t absorb that signal, you can’t trust it to allocate your budget – and ignoring new signal is exactly what frozen models do.

There’s another, less obvious reason retraining matters: a fully automated, frequent retraining process makes it much harder for analysts to put their thumb on the scale. When a model is re‑estimated end‑to‑end each time, there’s far less room for someone to bias the results toward what they want to see. 

And one last but very practical reason: data corrections. Anyone who’s worked with real marketing data knows this happens all the time. A pipeline error is discovered. Spend was overcounted. A channel was misclassified. When that happens, you want the model to learn from the corrected data – not the bad data. If parameters are frozen, that correction never actually makes it into the model’s understanding of the world.

With frozen models, if your team fixes a known error in the data pipeline – maybe Meta spend was double-counted – the model won’t update. If a new campaign underperforms, but the model still believes the channel delivers a 5x ROI, it won’t tell you to shift budget. If CPMs drop, or new creative performs very well, the model won’t adjust.
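
To see how that plays out in a budget decision, here’s a deliberately simplified sketch: a naive proportional-to-ROI split (real optimizers work off marginal returns and saturation curves), with numbers invented for illustration. An allocator fed frozen estimates keeps recommending last year’s plan no matter what the most recent weeks show.

```python
# Hypothetical example: split a fixed budget in proportion to estimated ROI.
def allocate(budget, roi_estimates):
    total = sum(roi_estimates.values())
    return {ch: round(budget * roi / total) for ch, roi in roi_estimates.items()}

frozen_view  = {"meta": 5.0, "search": 2.0, "tv": 1.5}   # estimated a year ago, never re-learned
current_view = {"meta": 1.2, "search": 2.1, "tv": 1.6}   # what a full retrain would find today

print(allocate(1_000_000, frozen_view))    # keeps pouring most of the budget into Meta
print(allocate(1_000_000, current_view))   # shifts spend toward search and TV
```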

That’s why at Recast we recommend retraining as frequently as possible – typically weekly. It’s the only way to produce recommendations that reflect what’s actually happening in the business right now.

A “Stable” Model Isn’t One That Stays the Same. It’s One That Learns Gradually

Model refreshes are closely tied to model stability. When marketers hear “model stability,” they often assume it means the results don’t change much week to week. But that’s not actually what you want.

True stability means your model can take in new data and adjust to it without swinging wildly. One week of new data shouldn’t cause massive changes in your channel ROI estimates – but it also shouldn’t do nothing.

Unfortunately, what we’ve seen some vendors do to ‘fix’ this is freeze coefficients: they lock in their model’s assumptions and ignore new data. A frozen model is only stable in appearance – underneath, it’s very much broken.

At Recast, we run something we call a “Stability Loop” where we simulate weekly refreshes and check that adding a single new week of data doesn’t create unreasonable swings. If it does, the model doesn’t go into production. (A simplified sketch of this kind of check follows the list below.)

This lets us verify that the model is:

  • Picking up consistent signal (not overfitting to noise)
  • Robust to small changes in the time range
  • Still able to adapt when marketing performance genuinely shifts
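
For the technically curious, here’s a minimal sketch of the kind of check such a loop performs. This is not Recast’s actual implementation: the 25% threshold, the expanding-window structure, and the least-squares stand-in for a full model fit are all assumptions made for illustration.

```python
import numpy as np

def fit_roi(spend, kpi):
    """Stand-in for a full model fit. A real MMM re-estimates a Bayesian
    model with adstock and saturation here, not a least-squares line."""
    coefs, *_ = np.linalg.lstsq(spend, kpi, rcond=None)
    return coefs

def stability_loop(spend, kpi, start_week=52, max_swing=0.25):
    """Simulate weekly refreshes on expanding windows and flag any week
    where one extra week of data moves an ROI estimate by more than
    max_swing (relative). If violations come back, don't ship the model."""
    previous = fit_roi(spend[:start_week], kpi[:start_week])
    violations = []
    for week in range(start_week + 1, len(kpi) + 1):
        current = fit_roi(spend[:week], kpi[:week])
        swing = np.abs(current - previous) / np.maximum(np.abs(previous), 1e-9)
        if np.any(swing > max_swing):
            violations.append((week, swing.round(2)))
        previous = current
    return violations   # empty list: estimates drift gradually, as they should
```

The point of the check isn’t the exact threshold – it’s that fully retrained estimates are compared run over run, so instability can’t hide behind frozen numbers.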

Questions to Ask Your Vendor Today

One of the challenges with fake model refreshes is that you won’t see them unless you ask: the dashboard will still show clean graphs and the forecasts will still be there. The model won’t show instability either (because it literally can’t).

When you’re using these models to allocate millions in media budget, following a confident but drastically wrong model is way worse than not doing MMM at all.

Of course, most vendors won’t admit they’re freezing coefficients – but you can still find out if you ask the right questions:

  • “When you refresh the model, do you retrain all parameters from scratch?” If the answer isn’t an immediate and confident “yes,” that’s a red flag.
  • “What happens to your coefficients when you add one week of new data?”
  • “How do you handle historical data corrections?”
  • “Do your analysts manually adjust or override model outputs?”
  • “How do you validate model stability over time?”

Also, look out for language that avoids the core issue:

  • “We just run data through the model.”
  • “We drop channels that don’t hit p-values.”
  • “We pick the best model from thousands.”
  • “We smooth results to maintain consistency.”

A trustworthy MMM shouldn’t be a black box.
