10 Ways to Build a Bad MMM (Marketing Mix Model)

Building a Marketing Mix Model (MMM) that actually works, and that you can trust, isn’t easy. A great model can radically change your company: how you forecast, where you allocate your budget, how you think about incrementality. But a bad one can lead you to make very biased decisions.

As more companies add MMM to their marketing measurement stack, it’s important to understand the ways a bad model can be built, and how to avoid them.

These are the 10 most common mistakes we’ve seen when building a media mix model for consumer brands:

1. Failing to Validate the Model’s Results

If your model doesn’t have a clear way to validate its results, it’s a bad model. Full stop. This might be the most critical mistake and the biggest red flag you can find.

Crucially, you should be able to validate results outside the modeling framework.

A bad MMM vendor might say, “Trust the statistics” and lean on measures like R-squared or p-values to justify their results. Again, red flag. 

Good modelers know that complex models can fit data well within the modeling framework but fall apart in the real world. 

Without backtesting, experiments, or lift tests to verify the predictions, you are left with a model that can (and probably will) steer you in the wrong direction. 

Always ask your modeler: How will you validate the model’s results in a way that reflects reality?
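One simple validation anyone can run is a holdout backtest: fit the model on earlier periods and check its predictions against held-out recent periods. Below is a minimal sketch with made-up synthetic data and a plain linear model (real MMMs are far more complex); the function name and numbers are purely illustrative:

```python
import numpy as np

def holdout_backtest(X, y, n_holdout):
    """Fit OLS on all but the last n_holdout periods, return MAPE on the holdout."""
    X_train, y_train = X[:-n_holdout], y[:-n_holdout]
    X_test, y_test = X[-n_holdout:], y[-n_holdout:]
    coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    preds = X_test @ coef
    return float(np.mean(np.abs((y_test - preds) / y_test)))

# Synthetic example: 52 weeks of spend in two channels plus an intercept.
rng = np.random.default_rng(0)
spend = rng.uniform(1_000, 10_000, size=(52, 2))
X = np.column_stack([np.ones(52), spend])
y = 5_000 + spend @ np.array([0.8, 1.5]) + rng.normal(0, 500, 52)

mape = holdout_backtest(X, y, n_holdout=8)
print(f"Holdout MAPE: {mape:.1%}")
```

The point isn’t the specific metric; it’s that the check happens on data the model never saw.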

2. Post-Processing Manipulation of Results

Some vendors engage in pre- or post-processing of the model results to make them look more favorable. 

This can involve techniques like “freezing” coefficients so that the model output aligns with past expectations, even if those expectations no longer reflect the current business environment. 

This might seem helpful in the short term, but it leads to biased results that do not reflect ground truth. When modelers are allowed to manipulate results after the fact, it destroys trust and compromises decision-making. 

Your MMM should reflect reality, not what someone thinks the executive team wants to see.

3. Treating All Marketing Channels the Same

One of the worst practices in MMM development is treating every marketing channel as if it operates at the same level of the funnel. 

For example, branded search and affiliate marketing are often misrepresented as being directly responsible for conversions, when in reality they are correlated with conversions that have already happened or are about to happen. 

A bad MMM will give too much credit to these channels without accounting for the true causal structure.

4. Ignoring Carry-Over Effects (or Not Fully Understanding How They Work)

Every marketing activity has lingering effects that can last beyond the period of direct investment. For example, spending on TV ads today might continue to influence consumer behavior for weeks or months. 

A bad MMM ignores these carry-over effects, starting with a clean slate at the beginning of the dataset and assuming that all marketing activity from previous periods has no impact on future sales. This results in an omitted variable problem, leading to biased or inaccurate results. 

At Recast we use a technique called “burn-in period,” which addresses this by allowing past marketing activities to influence future outcomes without distorting the overall model.
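As a sketch of what carry-over looks like, here is a geometric adstock transform with an optional pre-period that seeds the carry-over, loosely in the spirit of a burn-in. The decay rate and spend figures are made up, and this is not Recast’s actual implementation:

```python
import numpy as np

def adstock(spend, decay, burn_in=None):
    """Geometric carry-over: effect[t] = spend[t] + decay * effect[t-1]."""
    if burn_in is not None:
        spend = np.concatenate([burn_in, spend])
    effect = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        effect[t] = carry
    # Drop the pre-period; its only job was to seed the carry-over state.
    return effect[len(burn_in):] if burn_in is not None else effect

spend = np.array([0.0, 0.0, 100.0, 0.0])
no_history = adstock(spend, decay=0.5)
with_history = adstock(spend, decay=0.5, burn_in=np.array([200.0, 200.0]))
print(no_history)    # early weeks show zero effect (clean-slate assumption)
print(with_history)  # early weeks inherit effect from past spend
```

Without the pre-period, the first weeks of the dataset look like marketing had no lingering influence, which is exactly the omitted-variable problem described above.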

5. Relying on Impressions Instead of Spend

Many traditional MMMs use impressions—rather than marketing spend—to represent channel activity. 

This is a flawed approach for several reasons: impressions are not measured consistently across platforms, can change over time, and do not directly reflect business outcomes.

Also, impressions are often a vanity metric: businesses can rack up impressions without driving any real value. 

A good MMM should focus on spend, which is closely tied to the bottom line, rather than intermediate measures like impressions, which only muddy the waters.

6. Incorporating Bias Through Data Transformations

A particularly sneaky way to build a bad MMM is to introduce bias by applying arbitrary data transformations before running the model. 

For example, assuming that display ads only have a short-term effect while TV ads have a long-term impact can create a biased model that aligns with preconceived notions, rather than reality. 

Such assumptions lead to misspecified models, as the model’s structure predetermines which channels appear most effective. 

A good MMM, by contrast, lets the data speak for itself, using rigorous statistical methods to identify true drivers of marketing performance.
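To illustrate the difference, here is a toy example where the carry-over decay rate is treated as a free parameter and recovered from synthetic data by grid search, rather than hardcoded up front. All numbers are made up, and a real MMM would estimate this jointly with everything else:

```python
import numpy as np

def adstock(spend, decay):
    """Geometric carry-over transform."""
    effect, carry = np.zeros(len(spend)), 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        effect[t] = carry
    return effect

# Simulate two years of weekly spend with a known decay of 0.7.
rng = np.random.default_rng(1)
spend = rng.uniform(0, 100, 104)
true_decay = 0.7
sales = 2.0 * adstock(spend, true_decay) + rng.normal(0, 5, 104)

def fit_error(decay):
    """Squared error of a one-variable OLS fit on the transformed spend."""
    x = adstock(spend, decay)
    beta = (x @ sales) / (x @ x)
    return np.sum((sales - beta * x) ** 2)

grid = np.linspace(0.0, 0.95, 96)
best = grid[np.argmin([fit_error(d) for d in grid])]
print(f"estimated decay: {best:.2f}")  # close to the true value of 0.7
```

Here the data, not a modeler’s prior belief, determines how long the channel’s effect lingers.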

7. Over-Reliance on Automated Variable Selection

Many MMM vendors rely on automated variable selection techniques, such as stepwise regression, LASSO, or ridge regression, to decide which variables to include in the model. 

While these techniques can be useful for certain types of analysis, they can also introduce bias by eliminating variables that may not appear statistically significant but are nonetheless important for understanding causal relationships. 

Vendors should be transparent about which variables are included and why, using causal inference methods to justify their decisions.
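A toy example of how this bias can play out: a minimal lasso (coordinate descent with soft-thresholding) applied to two perfectly collinear channels, both of which genuinely drive sales. The penalty keeps one and silently zeroes the other, making a channel that matters look worthless. The scenario and numbers are invented for illustration:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    """Minimal lasso via coordinate descent with soft-thresholding."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            # Partial residual: remove every channel's effect except channel j.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / (X[:, j] @ X[:, j])
    return beta

# Two channels with identical activity patterns (e.g., one campaign logged
# by two trackers). Both contribute to sales.
rng = np.random.default_rng(3)
x = rng.normal(0, 1, 100)
X = np.column_stack([x, x])          # perfectly collinear
y = X[:, 0] + X[:, 1] + rng.normal(0, 0.2, 100)

beta = lasso_cd(X, y, lam=20.0)
print(beta)  # one coefficient carries all the credit, the other is zeroed
```

Automated selection resolved the collinearity arbitrarily; only causal reasoning about the channels can resolve it correctly.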

8. Neglecting Incrementality Tests

Another red flag of a bad MMM is not incorporating incrementality testing in its validation process. 

Without these tests, it’s very hard to know if your model’s predictions are truly accurate or if they are simply capturing correlations that don’t hold up in practice.

A good MMM framework should be designed to incorporate these tests and update results dynamically as new information comes in.

9. Ignoring Non-Linearities and Saturation Effects

Another major flaw in bad MMMs is ignoring the non-linear relationships between marketing spend and sales outcomes. 

For example, spending $1,000 on a channel might produce good returns, but spending $100,000 or $1 million on that same channel could lead to diminishing returns.

Not accounting for this saturation effect leads to inaccurate recommendations: your model might suggest increasing spend in channels that have already hit the point of diminishing returns.

A well-designed model should be able to identify, account for, and adjust to these saturation effects.
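One common way to model saturation is a Hill-style response curve. Here is a small sketch with made-up parameters showing how the marginal value of the next $1,000 shrinks as spend grows:

```python
def hill(spend, max_effect, half_sat, shape=1.0):
    """Response grows with spend but flattens toward max_effect."""
    return max_effect * spend**shape / (half_sat**shape + spend**shape)

# Marginal value of the next $1,000 at three spend levels (illustrative params).
marginals = []
for level in (1_000, 100_000, 1_000_000):
    gain = hill(level + 1_000, max_effect=50_000, half_sat=50_000) - \
           hill(level, max_effect=50_000, half_sat=50_000)
    marginals.append(gain)
    print(f"at ${level:>9,} spend, the next $1,000 adds ~${gain:,.0f}")
```

A model that fits a straight line through this curve will keep recommending more spend long after the channel has flattened out.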

10. Trusting Vendors Without Internal Understanding

Yes, this applies even if you’d like to work with us here at Recast: outsourcing your MMM to a specialized vendor is often a good idea, but you should still maintain some internal understanding of how the model works, its methodology, assumptions, and limitations, and how to read its results.

Vendors can fall into the trap of telling clients what they want to hear, rather than what the model is actually saying. While it’s tempting to hand off all responsibility, this is a recipe for disaster. 

Upskilling internal teams or collaborating with finance and data science departments are some of the things we always recommend to our clients to help get the most out of their MMM.

TLDR:

10 media mix modeling red flags to avoid:

  1. Failing to Validate the Model’s Results
  2. Post-Processing Manipulation of Results
  3. Treating All Marketing Channels the Same
  4. Ignoring Carry-Over Effects (or Not Fully Understanding How They Work)
  5. Relying on Impressions Instead of Spend
  6. Incorporating Bias Through Data Transformations
  7. Over-Reliance on Automated Variable Selection
  8. Neglecting Incrementality Tests
  9. Ignoring Non-Linearities and Saturation Effects
  10. Trusting Vendors Without Internal Understanding

About The Author