*“How do we know if the model is right?”*

That’s one of the most important questions in MMM.

Most MMM models are black boxes, but business decisions can’t be based on faith alone. How do you distinguish credible results from vendor promises? How do you actually know if you’ve built a good model?

MMM is a powerful tool for marketing measurement, planning, and forecasting. But, as we all know: with great power comes great responsibility. It’s our responsibility as marketing scientists not just to build a model, but to build a model that can be trusted and used.

It’s really easy to build a bad marketing mix model, and incredibly difficult to build a good one.

When you’re building an MMM, you have to remember: there are millions of ways for the model to be wrong, and only one way for it to be right.

That’s why the model validation process is so important: it allows us to prove, to ourselves and to others, that our model can be trusted.

**This article will touch on:**

- Why the problem of validation is difficult in the context of MMM
- The problems of over-fitting and misspecification
- Why validation requires multiple different approaches and angles
- Continuous vs static validation
- The most important methods for validating MMMs

So let’s dive in. First:

## Why is the problem of validation so difficult in the context of MMM?

MMM is, fundamentally, a **causal inference** problem.

We want to understand how different changes we might make to our marketing budgets will change our business performance.

This is not simply a “prediction” problem, but rather an attempt to understand the true, causal relationships between our marketing activity and our business outcomes.

Validating causal inference models is much, much more difficult than validating simple prediction-only models, so we need a different toolset and approach to validating these models.

The fundamental problem is that the thing we care about, the true incremental impact of an additional dollar spent on some marketing channel, is *unknown and unknowable.*

No one knows, or can know, the true value of an additional dollar spent on Meta — there is no fundamental law of physics or nature to fall back on, and there’s no way to ask people or track them well enough to learn what that true impact is.

So, our job as modelers is to try to validate what we’ve learned from our model without ever being able to check it against the true answer – that is the fundamental problem of validating MMMs.

Beyond just the basics of doing causal inference, the MMM problem is compounded because things change over time.

What might have been true 6 months ago about marketing performance might no longer be true today. Thus, model validation in MMM is not a problem you solve once; it’s a problem you have to keep solving, over and over again.

## Common MMM Challenges: over-fitting.

MMMs are powerful models and that means they’re subject to what modelers call “over-fitting”. The idea is that you can build a model that fits really, really well to the data that the model is trained on, but that hasn’t found the actual underlying causal relationships in the data.

Over-fitting happens when the model is “too powerful” and fits to noise in the data instead of the signal. The more overfit your model is, the more of its apparently excellent fit comes from noise, and the less it reflects any real signal.

Worse, the normal methods for evaluating “model fit” tend to reward over-fitting rather than catch it.

If you look at your in-sample R-squared, MAPE, or RMSE, these metrics will keep improving as you add more variables and features to your model — whether or not those variables capture anything real.

Unfortunately, these metrics are leading you astray because you’re just overfitting your model to the data. These metrics — MAPE, RMSE, and R-Squared — will all look amazing, but the model will not have found the true underlying relationships in the data, and instead will just be perfectly matched to the random noise in the data we happen to be looking at.
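A toy illustration of this point, using entirely synthetic data (every “spend” column and the “sales” series are pure random noise, so the true explanatory power is exactly zero): in-sample R-squared still climbs toward 1 as we add features.

```python
# Sketch: in-sample fit metrics reward over-fitting.
# Both the outcome and all features are pure noise, so the true
# R-squared is zero, yet in-sample R-squared rises with feature count.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_weeks = 104                      # two years of weekly data
sales = rng.normal(size=n_weeks)   # outcome: pure noise by construction

r2_by_k = {}
for k in (5, 20, 50, 100):
    X = rng.normal(size=(n_weeks, k))  # k "spend" columns, also pure noise
    r2_by_k[k] = LinearRegression().fit(X, sales).score(X, sales)
    print(f"{k:3d} noise features -> in-sample R^2 = {r2_by_k[k]:.2f}")
```

With 100 noise features and only 104 weekly observations, the in-sample R-squared lands near 1.0 — a “great fit” to a dataset that contains no signal at all.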

What that means in practice is that the results we get from the model will be **wrong**. They will not be driven by the true causal signal in the data, and instead just by noise.

Then, when we go to actually **use** the model to make budget changes, we could end up costing our business millions of dollars.

If you ask an MMM vendor about model validation and they tell you about these metrics, that’s a pretty big red flag.

## Common MMM Challenges: model misspecification.

Another related problem is model misspecification.

Many MMM tutorials (and even models built by expensive consulting firms) use a standard linear regression to fit the model.

Linear regression is a great and powerful tool for statistical modeling, but it **implicitly** makes a number of assumptions about how marketing works.

If the way marketing actually works doesn’t match the assumptions of your model, then your model is **misspecified**. That is, the specification of your model doesn’t match the real world.

There are many good examples of this:

Most marketers believe that the effectiveness of a channel can change over time depending on things like creative, targeting strategy, competitor activity, or even global pandemics. But then they use a modeling framework that assumes marketing performance is fixed over time (e.g., a standard linear regression)!

Similarly, most marketers believe that marketing effectiveness is influenced by seasonality (it’s easier to sell sunscreen in the summer than in the winter), but then the modeler assumes that seasonality and marketing effectiveness are totally independent – an assumption that can lead you to exactly the wrong answer when you run your model!
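Here’s a small synthetic sketch of that seasonality problem (all numbers are made up for illustration). The true effect of spend on sales swings with the season, but a standard linear regression with no interaction term reports a single, fixed coefficient — roughly the average effect — and hides the swing entirely:

```python
# Sketch: misspecification from ignoring a seasonality-by-spend interaction.
# True effectiveness varies from 0.2 ("winter") to 1.8 ("summer"), but a
# fixed-coefficient regression reports one number close to the average (1.0).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
weeks = np.arange(104)
season = np.sin(2 * np.pi * weeks / 52)        # +1 in "summer", -1 in "winter"
spend = rng.uniform(0, 10, size=104)

true_effect = 1.0 + 0.8 * season               # effectiveness moves with season
sales = true_effect * spend + rng.normal(0, 0.5, size=104)

# Misspecified model: sales ~ spend + season, with no interaction term
X = np.column_stack([spend, season])
beta_spend = LinearRegression().fit(X, sales).coef_[0]

print(f"true effect ranges from {true_effect.min():.2f} to {true_effect.max():.2f}")
print(f"misspecified model reports one coefficient: {beta_spend:.2f}")
```

An averaged coefficient like this isn’t just imprecise: if you use it to plan a winter campaign, it overstates effectiveness several-fold.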

Both over-fitting and model misspecification are **big** problems, but it’s not always easy to tell if they’re problems for **your** model. The popular open-source packages don’t make it easy to check for these issues and neither do out-of-the-box model-fit statistics you might have learned about in your college statistics classes.

So what can we do about it?

## How to validate your MMM:

**This is where validation comes in**. Validation will help us detect problems with over-fitting, model misspecification, and other common modeling problems.

In order to get model validation right, we’ll need to approach the problem from multiple angles. Remember: the truth is unknown and unknowable, so we need to validate our inferences with multiple strategies in order to home in on the truth.

The most important methods for model validation are:

- holdout forecast accuracy and backtesting
- parameter recovery
- stability / robustness checks
- lift tests and experimentation
- dynamic spend deployment and forecast reconciliation

These methods help us validate our MMM and are critical techniques every modeler should know.
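To preview the first of these methods, here is a minimal sketch of holdout validation on synthetic data (the channel names, coefficients, and split point are all illustrative assumptions). The key idea: split the data *by time*, fit only on the earlier period, and score forecast error on weeks the model never saw — never split a time series randomly.

```python
# Sketch: holdout validation with a time-based split.
# Fit on the first 80 weeks, then measure forecast error on the
# final 24 weeks, which the model never saw during fitting.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_weeks = 104
X = rng.uniform(0, 10, size=(n_weeks, 3))   # three synthetic "spend" channels
sales = X @ np.array([1.0, 0.5, 2.0]) + rng.normal(0, 1.0, size=n_weeks)

split = 80                                   # time-based split, not random
model = LinearRegression().fit(X[:split], sales[:split])
pred = model.predict(X[split:])
rmse = float(np.sqrt(np.mean((pred - sales[split:]) ** 2)))
print(f"holdout RMSE over the last {n_weeks - split} weeks: {rmse:.2f}")
```

Unlike the in-sample metrics criticized above, this out-of-sample error gets *worse*, not better, when the model overfits — which is what makes it useful for validation.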

We’ll dedicate a future article to each of these validation methods so that you can apply them to your own modeling practice.

Our next article will be on holdout accuracy and backtesting so that you can forecast with confidence. You can read it here.