How To Do Stability & Robustness Checks for Marketing Mix Models (MMM)

We hate it when people say that statistical modeling is more art than science. It’s generally what someone says right before they are about to do some very poor science.

A good marketing mix model should be all science: as modelers, it’s our job to deeply understand and communicate the limitations of our statistical modeling efforts. Doing good science is hard, but if we do it well we have a shot at learning fundamental truths about our business. But if we do it poorly, we are likely to lead ourselves and our business astray.

Good science shows itself when it’s time to validate your model and decide whether you can trust it. That’s also the moment when misaligned incentives are most likely to bias your model.

This article will cover two very important techniques for validating MMMs – stability and robustness checks.

In the field of statistical modeling, robustness refers to the idea that the results of an analysis don’t change materially when the inputs or the assumptions are perturbed. When the results we care about are robust, it gives us evidence that our statistical model has found some underlying truth about the world and isn’t just fit to noise in the data.

So how do we check for MMM robustness in practice? 

There are three different types of robustness checks we recommend for MMM:

1 – Varying the underlying dataset being modeled by running the same model on different subsets of the data to see how the results change.

2 – Varying your model structure to see how sensitive the results are to different structures (including which variables are included in the model).

3 – Varying your modeling assumptions to see how different assumptions you make during the modeling process will impact your final results.

So, let’s talk through what each of these looks like in practice:

Varying the underlying dataset

In the first robustness check, we want to vary the underlying data that we’re feeding into the MMM. 

What if we exclude the last month of data? What if we exclude the first month of data?

In the context of MMM, we generally care most about the effectiveness estimates of each marketing channel (i.e., the incrementality of each marketing channel).

So what we want to see is that the estimates for channel effectiveness are about the same even when we remove small amounts of data (like the last one, two, three, or four weeks of data).

However, if the effectiveness estimates from the model jump around substantially, then our results are not robust, which often indicates that the statistical model is misspecified.
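
To make this concrete, here is a minimal sketch of a trailing-window check. It assumes a simple linear MMM fit with statsmodels; the channel names (tv, search) and all of the numbers are hypothetical placeholders, not a definitive implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical weekly MMM dataset: spend per channel plus the outcome.
rng = np.random.default_rng(0)
n_weeks = 104
df = pd.DataFrame({
    "tv": rng.uniform(0, 100, n_weeks),
    "search": rng.uniform(0, 50, n_weeks),
})
df["sales"] = 3.0 * df["tv"] + 5.0 * df["search"] + rng.normal(0, 20, n_weeks)

# Re-fit the same model while dropping the last k weeks of data,
# then compare the channel effectiveness estimates across runs.
for k in [0, 1, 2, 3, 4]:
    subset = df.iloc[: len(df) - k]
    fit = smf.ols("sales ~ tv + search", data=subset).fit()
    print(f"dropped last {k} weeks -> tv={fit.params['tv']:.2f}, "
          f"search={fit.params['search']:.2f}")
```

If the printed coefficients stay close together across runs, that’s evidence of robustness; large swings are the warning sign described above.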

Varying model structure

In the second robustness check, we want to see how robust our results are to changes in model structure, in particular to which variables are controlled for in the model.

One place we’ve seen many MMM modelers go wrong is that they include lots of different “control” variables in the model, and they just randomly add or subtract control variables until they get the answer they want. This is bad practice!

As part of our robustness checks, we want to understand how our model structure impacts the results we see.

In particular, we want to know how our effectiveness results change with different subsets of control variables.

How much does our estimate of TV effectiveness change depending on whether or not we control for inflation? This exercise should be repeated for all of the various control variables that an analyst is choosing to include in the model.
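
Here is a minimal sketch of that exercise, again assuming a simple linear fit with statsmodels; the channel (tv), the controls (inflation, seasonality), and all of the numbers are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from itertools import combinations

# Hypothetical dataset: one media channel (tv) plus candidate controls.
rng = np.random.default_rng(1)
n = 104
df = pd.DataFrame({
    "tv": rng.uniform(0, 100, n),
    "inflation": rng.normal(3, 1, n),
    "seasonality": np.sin(np.arange(n) * 2 * np.pi / 52),
})
df["sales"] = (3.0 * df["tv"] - 10.0 * df["inflation"]
               + 50.0 * df["seasonality"] + rng.normal(0, 20, n))

# Fit the model once per subset of controls and track the tv estimate.
controls = ["inflation", "seasonality"]
for r in range(len(controls) + 1):
    for subset in combinations(controls, r):
        formula = "sales ~ tv" + "".join(f" + {c}" for c in subset)
        fit = smf.ols(formula, data=df).fit()
        print(f"{formula:45s} tv = {fit.params['tv']:.2f}")
```

With many candidate controls, the number of subsets explodes, so in practice you might toggle one control at a time against a fixed baseline rather than fitting every combination.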

Varying modeling assumptions

The last robustness check involves testing how sensitive our results are to assumptions that the modeler might make. In many MMM frameworks, the modeler is choosing values like adstock rates or diminishing marginal returns curves for each channel. This means that you should test how sensitive the final results are to those different assumptions! For example, what if you use an adstock rate of 0.5 instead of 0.8 for the TV variable? How much does that change the results?
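
For example, here is a minimal sketch of an adstock sensitivity check. The geometric adstock transform, the hypothetical tv channel, and the statsmodels linear fit are all illustrative assumptions, not a prescribed method.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def adstock(x, rate):
    """Geometric adstock: each week carries over a fraction of past spend."""
    out = np.zeros(len(x))
    carry = 0.0
    for t, spend in enumerate(x):
        carry = spend + rate * carry
        out[t] = carry
    return out

# Hypothetical weekly data generated with a "true" adstock rate of 0.6.
rng = np.random.default_rng(2)
n = 104
tv = rng.uniform(0, 100, n)
sales = 3.0 * adstock(tv, 0.6) + rng.normal(0, 20, n)

# Re-fit the same model under different assumed adstock rates
# and see how much the tv effectiveness estimate moves.
for rate in [0.3, 0.5, 0.6, 0.8]:
    df = pd.DataFrame({"sales": sales, "tv_adstocked": adstock(tv, rate)})
    fit = smf.ols("sales ~ tv_adstocked", data=df).fit()
    print(f"assumed adstock {rate} -> tv coefficient "
          f"{fit.params['tv_adstocked']:.2f}")
```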

You should report out on all of these different assumptions and how much changing them impacts the final results.

What happens if your MMM is NOT robust?

These robustness checks are very important to doing good science. If your results are not robust to these different assumptions, you shouldn’t hide it! It’s critically important that the true range of uncertainty and instability in the results is accurately communicated. 

Otherwise, the next time you go to refresh the model, the results will change (since they aren’t robust) and you’ll have to come up with an explanation as to why.

This is where it’s so important to foster a culture of transparency and continuous improvement so you and your team can openly communicate these challenges and uncertainties.
