How to reduce bias in your marketing mix modeling (MMM) with better parameter settings

Marketing Mix Models (MMM) are very complex, and the parameters your data scientists choose can heavily influence the results you get.

If the model allows your analyst to guess, check, and set parameters until things look right to them, there’s a high chance the end results are biased. Yes, you will get the results of one model, but you’re not actually seeing the hundreds of other models the analyst tried.

You might see, for example, that TV is a really good channel for your company in the model you’re looking at. But what happened to TV in all the other models? Did TV look like a non-incremental channel in the rest of them?

So, as a marketer, you should be looking for clarity from the modeler about exactly what’s being set by an analyst versus what’s not. 

When you’re working with whoever is running your MMM (whether an internal or external team), you want to understand what parameters in the model are being selected by a human based on their intuition, their knowledge, or outside data, versus what parameters in the model are actually learned from the data and are tailored to the business.

You need to be able to see the actual variation across the candidate models, which model was selected, and, very importantly, why. What you’re looking for in this answer is a discussion of a rigorous, predetermined process for setting these parameters.

Transparency in Parameter Selection and Analysis

It should NOT be a guess-and-check process. There needs to be a systematic, formulaic process for setting these parameters consistently and without bias.

You also want to discuss how your analyst is doing robustness analyses. You need to be able to see how the results change when different inputs are varied. 

For example, if they are going to set an adstock parameter or a specific saturation curve, how different will the results be depending on which version is used?

You should be able to see all the versions of the model with high, medium, and low settings for these different parameters. That’s how you can get a sense of the range of output that comes out of the model given different assumptions. 
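Here’s a minimal sketch of what that kind of sensitivity check can look like, in Python with made-up data and a deliberately simple linear model (not anyone’s production MMM): re-fit the same model under low, medium, and high adstock decay assumptions and compare the channel estimate each one produces.

```python
import numpy as np

# Toy data: two years of weekly TV spend and sales (illustrative, not real).
rng = np.random.default_rng(42)
n_weeks = 104
tv_spend = rng.gamma(2.0, 50.0, size=n_weeks)
sales = 200 + 0.4 * tv_spend + rng.normal(0, 15, n_weeks)

def adstock(x, decay):
    """Geometric adstock: each week carries over `decay` of the prior stock."""
    out = np.zeros_like(x)
    carry = 0.0
    for t, v in enumerate(x):
        carry = v + decay * carry
        out[t] = carry
    return out

# Re-fit the same simple model under three different decay assumptions.
for label, decay in [("low", 0.1), ("medium", 0.5), ("high", 0.9)]:
    X = np.column_stack([np.ones(n_weeks), adstock(tv_spend, decay)])
    coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
    print(f"decay={decay} ({label}): estimated TV coefficient = {coef[1]:.3f}")
```

If the estimated TV coefficient swings substantially across the three decay settings, the headline result is being driven by an assumption rather than by the data, and that’s exactly what you want surfaced.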

Analyst intervention is a real problem because analysts have a lot of power to shape different results from the same raw data. If you conduct a sufficient variety of analyses, you will eventually find something that is statistically significant, which you can then present to the client.
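To make that concrete, here’s a tiny illustrative simulation (pure noise, no real marketing data): test 50 unrelated noise variables against a noise outcome, and a few will clear the conventional p < 0.05 bar by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_weeks, n_candidates = 104, 50
sales = rng.normal(size=n_weeks)  # pure noise: nothing truly drives it

# Count how many unrelated candidate variables look "significant" anyway.
significant = 0
for _ in range(n_candidates):
    candidate = rng.normal(size=n_weeks)
    _, p_value = stats.pearsonr(candidate, sales)
    significant += p_value < 0.05

print(f"{significant} of {n_candidates} noise variables test 'significant' at p < 0.05")
```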

In MMM, the models are very, very flexible. You have many different choices of variables that you can include or not include; there is no shortage of decisions that an analyst, if given enough power, can make one way or another.

But you don’t want to base your MMM on the model the analyst likes best, or is most incentivized to share with their client (or their team, if the work is internal), without considering all the other models that are inconsistent with the one selected.

Bias within your MMM can not only make you waste your budget on channels that are not incremental but, what’s worse, give you a rationale and a justifiable story for why you’re doing it.

We say this half-jokingly, but there are a lot of CMOs who justify why they should have a really big TV budget because they want to go to Cannes and win some awards – and you “can” make your MMM tell that story given enough analyst intervention. 

Reducing Bias In Your MMM: The Recast Approach

Our belief at Recast is that analysts should not intervene at all: you should pre-register your methodology and parameter choices. You should know exactly what decisions will be made upfront, under any circumstances, and get the results at the end of the process without a human making any decision after they’ve seen the data.

You can do this in a Bayesian framework where, effectively, you pull all of those decisions and assumptions into a single statistical model, which allows the analysts to step back. That’s the best way we’ve found to keep bias out of the process as much as possible.
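As a minimal illustration of that idea (a toy single-channel sketch in PyMC with made-up data and illustrative priors, not Recast’s actual model): instead of hand-picking the adstock decay and the saturation point, you place priors on them and let the sampler estimate everything jointly.

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

# Toy data (illustrative only).
rng = np.random.default_rng(0)
n_weeks = 104
tv_spend = rng.gamma(2.0, 50.0, size=n_weeks)
sales = 200 + 120 * (tv_spend / (tv_spend + 100)) + rng.normal(0, 15, n_weeks)

def geometric_adstock(x, decay, max_lag=8):
    # Sum of current and lagged spend, weighted by powers of `decay`.
    lagged = pt.stack([pt.concatenate([pt.zeros(i), x[: n_weeks - i]])
                       for i in range(max_lag)])
    weights = pt.power(decay, pt.arange(max_lag))[:, None]
    return pt.sum(weights * lagged, axis=0)

with pm.Model() as mmm:
    # Assumptions live in priors, written down before seeing results,
    # instead of in hand-picked point values.
    decay = pm.Beta("decay", alpha=2, beta=2)           # adstock carryover
    half_sat = pm.Gamma("half_sat", mu=100, sigma=50)   # saturation half-point
    beta_tv = pm.HalfNormal("beta_tv", sigma=200)       # channel effect size
    intercept = pm.Normal("intercept", mu=0, sigma=200)
    noise = pm.HalfNormal("noise", sigma=50)

    adstocked = geometric_adstock(tv_spend, decay)
    saturated = adstocked / (adstocked + half_sat)      # simple saturation curve
    pm.Normal("sales", mu=intercept + beta_tv * saturated,
              sigma=noise, observed=sales)

    trace = pm.sample()  # joint posterior over decay, half_sat, beta_tv
```

The point of the structure is that every assumption is stated as a prior up front, and the posterior reflects the full uncertainty across them, rather than one hand-tuned combination.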

Any time you have a human intervening, you are injecting potential bias into the whole process. Instead, you want to understand the true uncertainty of the different results that you have, in the most scientific and unbiased way possible.

There are, of course, cases where that’s impossible. Say you get the results from the model and realize it isn’t specified correctly. You might then have to go in and make some changes to fix it.

But our goal at Recast is to reduce that as much as possible so that we don’t accidentally inject any more bias than we absolutely have to.

That’s why it’s so important to know what goal you are setting for the team, and that goal has to be out-of-sample forecast accuracy. It’s very difficult to game, and it helps reduce bias in the cases where an analyst does have to intervene.
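Here’s a sketch of what scoring on out-of-sample accuracy means in practice (a toy linear model with made-up data; the 13-week holdout and the MAPE metric are illustrative choices, not a prescription): fit on everything except the most recent weeks, then grade the model only on the weeks it never saw.

```python
import numpy as np

# Toy data: intercept plus one spend column (illustrative only).
rng = np.random.default_rng(7)
n_weeks, holdout = 104, 13
X = np.column_stack([np.ones(n_weeks), rng.gamma(2.0, 50.0, size=n_weeks)])
y = X @ np.array([200.0, 0.4]) + rng.normal(0, 15, n_weeks)

# Fit on everything except the final `holdout` weeks, forecast those weeks,
# and score only the forecast the analyst never got to peek at.
coef, *_ = np.linalg.lstsq(X[:-holdout], y[:-holdout], rcond=None)
forecast = X[-holdout:] @ coef
mape = np.mean(np.abs(y[-holdout:] - forecast) / np.abs(y[-holdout:]))
print(f"Out-of-sample MAPE over the last {holdout} weeks: {mape:.1%}")
```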

We think about this a lot here at Recast: we have our clients’ best interests in mind, and our product is aligned with that, but it’s very easy to inject bias even when you don’t intend to. We don’t want to sell a cover-your-ass product or bias the model into telling the story our clients want to hear.

We want to give you the actual information that will drive your business forward, so we have a lot of structured rules in place that prevent an analyst from biasing the results.
