Dynamic Spend Deployment and Forecast Checking: How to Validate Your MMM Through Use

If a tree falls in a forest and no one is around to hear it, does it really make a sound? If someone runs a marketing mix model but no one actually uses the results, is it really an MMM at all?

Model validation is one of the most important parts of doing marketing mix modeling right. There are statistical ways to do it, which we’ve discussed in the past, but the simplest way is to… well, to actually use the model.

Believe it or not, it’s not uncommon for companies to do MMM, get some valuable insights, and be unable or unwilling to implement the recommended changes. 

The idea is straightforward: if you start using your marketing mix model to make marketing budget allocation decisions and your overall return on marketing investment starts going up, then you know that the recommendations from the MMM were good!

The reverse holds as well: if you follow the recommendations from the MMM and your revenue tanks, it probably means the MMM isn’t working.

We think that, in general, it’s important to take a test-and-learn attitude toward your MMM project: following the recommendations and seeing what happens is often a great way to get started!

Now, the approach we just described is a bit informal and doesn’t necessarily help us validate specific strategic decisions. Let’s get more concrete and structured in our approach to validation through use:

MMM: Validation through use (with a practical example)

In general, your marketing mix model should be able to make falsifiable predictions about what will happen if you make changes to your marketing budget. 

After all, the whole point of a marketing mix modeling project is to determine the best way to allocate your marketing budget.

We should be able to take the implications of those recommendations and use them to make a series of forecasts about what we expect will happen under different budget scenarios.

Then, in order to validate the model, we can compare what actually happens with what the model predicted. 

Let’s talk through an example: 

Imagine that our marketing mix model indicates that our average channel performance is a 5x ROI but that our branded search channel only has a 0.8x ROI. 

This indicates that branded search is dramatically underperforming: a 0.8x ROI means every dollar spent returns only about 80 cents, so the channel isn’t profitable for us.

Now, let’s imagine a test we can run to validate the model’s read on branded search.

We can develop a planned budget that reduces branded search spend to zero for some time period, say two weeks, while all other marketing channels continue business as usual. Then we can make a forecast using that budget, showing our projected revenue over the test period while accounting for the removal of branded search spend.

Since branded search spend isn’t estimated to be very effective, this forecast should look very similar to a forecast with branded search spend kept on.
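To make that concrete, here’s a minimal sketch of the scenario forecast in Python. Everything in it is hypothetical: the channel names, spend levels, and the toy forecast_revenue() stand-in, which in practice would be a call to your actual fitted MMM’s forecasting method.

```python
import pandas as pd

# Toy stand-in for a fitted MMM: baseline revenue plus per-channel ROI
# multipliers. These numbers mirror the example above and are made up;
# in practice you'd call your real model's forecast method here.
CHANNEL_ROI = {"branded_search": 0.8, "paid_social": 5.0, "tv": 5.0}
BASELINE_WEEKLY_REVENUE = 200_000.0

def forecast_revenue(spend: pd.DataFrame) -> pd.Series:
    """Forecast weekly revenue as baseline + sum of spend * channel ROI."""
    return BASELINE_WEEKLY_REVENUE + (spend * pd.Series(CHANNEL_ROI)).sum(axis=1)

# Planned weekly spend for the two-week test window.
planned = pd.DataFrame(
    {"branded_search": [12_000.0, 12_000.0],
     "paid_social": [30_000.0, 30_000.0],
     "tv": [50_000.0, 50_000.0]},
    index=pd.date_range("2024-06-03", periods=2, freq="W-MON"),
)

# Scenario budget: branded search zeroed out, everything else as usual.
test_plan = planned.assign(branded_search=0.0)

baseline_forecast = forecast_revenue(planned)
test_forecast = forecast_revenue(test_plan)

# With a ~0.8x ROI channel, the two forecasts should be close: the weekly
# gap is roughly 0.8 * 12,000 = 9,600 against ~609,600 in revenue.
print(baseline_forecast - test_forecast)
```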

Then we just follow the plan! We turn off branded search spend for two weeks and see what happens to revenue. If revenue comes in line with the prediction (or above it), then the MMM has been validated: we turned off branded search spend and revenue didn’t drop, indicating that the spend was not in fact incremental.

That’s the basic idea. There’s a lot of complexity that crops up once you start accounting for uncertainty and “statistical significance,” but our recommendation is not to overthink it at the start.

You should be able to take this idea and use it to get “directional” reads on whether these sorts of spend manipulations are generally consistent with what your marketing mix model is telling you.
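As a sketch of what that directional check might look like, continuing the hypothetical numbers above, with a made-up 5% tolerance standing in for a formal significance test:

```python
import pandas as pd

# Hypothetical results for the two-week test: the scenario forecast with
# branded search off (from the sketch above) next to what actually happened.
weeks = pd.date_range("2024-06-03", periods=2, freq="W-MON")
test_forecast = pd.Series([600_000.0, 600_000.0], index=weeks)
actual = pd.Series([602_300.0, 596_100.0], index=weeks)

# A crude tolerance in place of a formal significance test; calibrate it to
# your normal week-to-week revenue noise (the 5% here is an arbitrary choice).
TOLERANCE = 0.05
gap_pct = (actual - test_forecast) / test_forecast

if (gap_pct >= -TOLERANCE).all():
    print("Revenue held up with branded search off: consistent with the MMM.")
else:
    print("Revenue dropped more than forecast: the model may be understating "
          "how incremental branded search actually is.")
```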

TLDR: Dynamic spend deployment and forecast checking

So, the overall strategy can be summed up as follows:

  • If a channel is an under-performer, we should reduce spend on that channel next month.
  • If a channel is a high performer, we should increase spend on that channel next month.

Then we just compare the actual revenue or customers acquired with the holdout predictions made in prior months to see whether the actuals are consistent with the forecasts. That consistency is what helps validate the model!
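Here’s a rough sketch of one month of that loop. The ROI estimates, budgets, and the 20% reallocation step are all hypothetical, and the proportional rule is just one simple way to shift spend from under-performers to high performers:

```python
import pandas as pd

# Hypothetical ROI estimates from the MMM and current monthly budgets.
roi = pd.Series({"branded_search": 0.8, "paid_social": 6.0, "tv": 4.0})
budget = pd.Series({"branded_search": 50_000.0,
                    "paid_social": 120_000.0,
                    "tv": 200_000.0})

STEP = 0.20  # shift 20% of a channel's budget per month (an arbitrary pace)

# Cut under-performers, boost high performers, then rescale so the total
# budget stays the same.
avg_roi = roi.mean()
adjusted = budget * (1.0 + STEP * (roi > avg_roi) - STEP * (roi < avg_roi))
next_month_budget = adjusted * (budget.sum() / adjusted.sum())
print(next_month_budget.round(0))

# Before deploying, record the model's forecast under next_month_budget.
# When the month's actuals arrive, compare them to that holdout forecast,
# exactly as in the branded-search check above.
```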

The great part about this strategy is that it helps you validate the model while also improving your business: you iterate toward more and more efficient budgets while continuously validating the inferences from your marketing mix model.
