Introducing Media Mix Modeling (MMM) into your organization is a decision that requires careful consideration and due diligence. MMM is hard, and it’s worse to do it badly than to not do it at all.
If you’re starting to consider how to do MMM, you might be thinking about whether you should do it in-house or outsource to a vendor. Either way – whether you’re interviewing vendors (like Recast) or talking to your internal marketing science team if you have one – you need to ask the right questions to help you select the best path forward.
Yes, of course, we have a horse in the race – we are one of the vendors that you might be considering (and, admittedly, we think we’re the best MMM platform for consumer brands out there).
However, we firmly believe that these questions should be asked of all vendors, including us. Our aim is to empower you with the knowledge to make an informed choice – whichever one you think is best.
With this in mind, let’s explore the key questions that will help you identify the right MMM partner for your organization.
Question: How can we validate the model’s results?
What you should look for in the answer: A good answer will discuss validating results experimentally with data from sources outside the model. This might include geographic lift tests, in-platform lift tests, other spend-manipulation experiments, and backtesting/holdout forecast accuracy. The very best vendors understand that MMM works best within a continuous optimization and improvement framework.
Answers relying solely on modeled fit statistics, like in-sample R-squared or MAPE, are generally subpar. Any discussion of “significance” is usually a negative indicator.
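To make this concrete, here's a minimal sketch (in Python, with entirely hypothetical numbers) of one such validation: checking whether the incremental revenue the model attributes to a channel is consistent with an independent geo lift test over the same window.

```python
# A sketch of experimental validation: comparing the incremental revenue
# the model attributes to a channel against an independent geo lift test
# over the same window. All numbers and distributions are hypothetical.
import numpy as np

# Posterior draws of the channel's incremental revenue over the test
# window (stand-in for samples from a real Bayesian MMM).
model_posterior = np.random.default_rng(0).normal(120_000, 25_000, 4_000)

# Point estimate and 95% confidence interval from the geo lift test.
lift_test_estimate = 95_000
lift_test_ci = (60_000, 130_000)

# Share of the model's posterior mass inside the experiment's interval:
# a rough consistency check between model and experiment.
agreement = np.mean(
    (model_posterior >= lift_test_ci[0]) & (model_posterior <= lift_test_ci[1])
)
print(f"Model median: {np.median(model_posterior):,.0f}")
print(f"Lift test:    {lift_test_estimate:,.0f} (95% CI {lift_test_ci})")
print(f"Posterior mass inside lift-test CI: {agreement:.0%}")
```

If most of the posterior mass falls well outside the experiment's interval, that's a prompt to investigate the model, not a verdict on its own.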
Question: Which parameters in the model are estimated versus those selected by an analyst?
This question aims to understand what is assumed by the analyst or modeler versus which results are truly data-driven. In some modeling frameworks, analysts choose certain parameters (like an ad-stock rate) before running a regression model. In such cases, the ad-stock rates reflect the analyst's judgment, not your data. Thus, it's important to understand which values are chosen or assumed and which are actually fit to your data (the sketch after the list below illustrates the difference).
What you should look for in an answer:
- Clear distinction between what’s modeled and what’s not.
- Generally, the fewer “chosen” parameters, the better.
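To illustrate the distinction, here's a minimal sketch of a geometric ad-stock transform. This isn't any vendor's actual implementation; it just shows where an analyst-chosen parameter enters the pipeline.

```python
# A sketch of a geometric ad-stock transform. If `rate` is hard-coded by
# an analyst, the carryover dynamics are an assumption; in a fully
# estimated model, `rate` would instead be a free parameter fit to your
# data. The spend series is purely illustrative.
import numpy as np

def geometric_adstock(spend: np.ndarray, rate: float) -> np.ndarray:
    """Carry over a fraction `rate` of each period's effect into the next."""
    out = np.zeros_like(spend)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + rate * carry
        out[t] = carry
    return out

spend = np.array([100.0, 0.0, 0.0, 50.0, 0.0])

# Analyst-chosen: the 0.5 here is an assumption, not a finding.
print(geometric_adstock(spend, rate=0.5))  # [100.  50.  25.  62.5  31.25]
```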
Question: For parameters set by an analyst, what is the process for setting those parameters?
With this question, we aim to understand how much the analyst’s opinion and decisions shape the model’s results. If an analyst sets many values in the model based on their judgment (and not the data), then the model’s results may be highly sensitive to those specific assumptions.
What you should look for in an answer:
- A description of a rigorous, predetermined process for setting these parameters (not the guess-and-check method!).
- Discussion of robustness analyses to demonstrate how results change with different inputs (a sketch of such an analysis follows below).
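As a concrete (and entirely simulated) example of a robustness analysis, the sketch below re-fits a toy one-channel regression under a grid of assumed ad-stock rates and shows how much the estimated channel coefficient moves with the assumption.

```python
# A sketch of a robustness analysis: re-fit a toy one-channel regression
# under a grid of assumed ad-stock rates and see how much the estimated
# channel coefficient moves. All data is simulated for illustration.
import numpy as np

def geometric_adstock(x, rate):
    out, carry = np.zeros_like(x), 0.0
    for t, v in enumerate(x):
        carry = v + rate * carry
        out[t] = carry
    return out

rng = np.random.default_rng(1)
spend = rng.gamma(2.0, 50.0, size=104)  # two years of weekly spend

# Simulate revenue with a "true" ad-stock rate of 0.3 and coefficient 2.0.
revenue = 1_000 + 2.0 * geometric_adstock(spend, 0.3) + rng.normal(0, 50, 104)

for assumed_rate in [0.0, 0.3, 0.6, 0.9]:
    X = np.column_stack([np.ones_like(spend), geometric_adstock(spend, assumed_rate)])
    coef = np.linalg.lstsq(X, revenue, rcond=None)[0][1]
    print(f"assumed ad-stock rate {assumed_rate:.1f} -> estimated coefficient {coef:.2f}")
```

If the headline results swing wildly across plausible assumptions, the "result" is really the assumption.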
Question: Can I easily obtain confidence/credible intervals for all parameters estimated in the model?
Here, we seek to understand how the vendor approaches uncertainty. In any statistical model, uncertainty is inherent, and for effective decision-making, it’s critical to understand this. Terms like “statistical significance” are not particularly helpful, as they can obscure substantial underlying uncertainty.
An ideal answer should clearly express the range of plausible values from the model in reports, allowing us to understand best- and worst-case scenarios. All forecasts should come with an uncertainty range.
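For illustration, here's a minimal sketch of what that can look like when a model exposes posterior draws; the draws here are simulated stand-ins for real model output.

```python
# A sketch of summarizing posterior draws into credible intervals for
# each channel's ROI. The draws are simulated stand-ins for the samples
# a real Bayesian model would produce.
import numpy as np

rng = np.random.default_rng(2)
posterior_roi = {
    "paid_search": rng.normal(2.1, 0.3, 4_000),
    "paid_social": rng.normal(1.4, 0.6, 4_000),
}

for channel, draws in posterior_roi.items():
    lo, mid, hi = np.percentile(draws, [5, 50, 95])
    print(f"{channel}: median ROI {mid:.2f}, 90% credible interval [{lo:.2f}, {hi:.2f}]")
```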
Question: Are we able to see true holdout predictions based on the model? Can we monitor those over time?
Holdout forecasts are vital for validating a model's prediction accuracy on unseen data. A common issue with complex models like MMMs is overfitting, and holdout forecast accuracy helps us assess whether the model is overfit. An MMM that consistently makes accurate predictions on data it has never seen is evidence that it has identified true underlying causal signals rather than merely memorized the training data.
You should expect the vendor to consistently and transparently provide holdout prediction results, updating them automatically over time.
Additionally, ask about their methods to prevent information leakage in holdout predictions. For instance, feeding the model website traffic or branded search data from the holdout period is misleading: those series are themselves driven by marketing activity, so they leak information about the very outcome the model is supposed to predict.
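A minimal sketch of the kind of rolling holdout evaluation to ask for is below; ordinary least squares stands in for the real model, and all data is simulated.

```python
# A sketch of rolling holdout evaluation: repeatedly fit on data up to a
# cutoff, forecast the next 4 weeks of truly unseen data, and track the
# error over time. OLS stands in for the real MMM; data is simulated.
import numpy as np

def fit(X, y):
    # Placeholder for the vendor's actual fitting routine.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def forecast(coefs, X):
    return X @ coefs

def rolling_holdout_mape(X, y, horizon=4, min_train=52):
    """One out-of-sample MAPE per holdout window, training on the past only."""
    errors = []
    for cutoff in range(min_train, len(y) - horizon + 1, horizon):
        coefs = fit(X[:cutoff], y[:cutoff])
        preds = forecast(coefs, X[cutoff:cutoff + horizon])
        actual = y[cutoff:cutoff + horizon]
        errors.append(float(np.mean(np.abs(preds - actual) / actual)))
    return errors

# Two years of simulated weekly data: an intercept plus one channel.
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(104), rng.gamma(2.0, 50.0, 104)])
y = X @ np.array([1_000.0, 2.0]) + rng.normal(0, 50, 104)
print([f"{e:.1%}" for e in rolling_holdout_mape(X, y)])
```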
Question: What is the variable selection process or algorithm you apply?
Variable selection involves choosing which variables to include as “controls” in the model. The results of any statistical model are highly sensitive to the choice of variables, making it imperative that your MMM vendor has a robust process for variable selection.
What you should look for in an answer:
- A discussion of appropriate versus inappropriate controls (not all control variables are beneficial).
- An exploration of causal diagrams and the trade-offs in which quantities the model ends up estimating (the simulation after this list makes one such trade-off concrete).
Red flags in an answer:
- The use of any automated variable-selection or shrinkage technique, such as LASSO or ridge regression, is concerning for MMM and should be viewed with skepticism.
- Choosing variables based on what’s “statistically significant” is a major red flag and indicates poor modeling practice.
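The simulation below (entirely synthetic data) makes the “not all controls are beneficial” point concrete: branded search acts as a mediator of TV's effect, so controlling for it changes what the model estimates, shifting the answer from TV's total effect to only its direct effect.

```python
# A simulation (entirely synthetic) of why not all controls are
# beneficial: branded search is a mediator here (TV drives branded
# search, which drives revenue), so controlling for it changes the
# estimand from TV's total effect to only its direct effect.
import numpy as np

rng = np.random.default_rng(4)
n = 500
tv = rng.gamma(2.0, 50.0, n)
branded_search = 0.5 * tv + rng.normal(0, 10, n)  # downstream of TV
revenue = 3.0 * tv + 4.0 * branded_search + rng.normal(0, 50, n)
# True total effect of TV on revenue: 3.0 + 0.5 * 4.0 = 5.0

def tv_coef(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0][1]  # coefficient on TV

without_control = tv_coef(np.column_stack([np.ones(n), tv]), revenue)
with_control = tv_coef(np.column_stack([np.ones(n), tv, branded_search]), revenue)
print(f"TV effect, no mediator control:  {without_control:.2f}")  # ~5.0 (total)
print(f"TV effect, controlling mediator: {with_control:.2f}")     # ~3.0 (direct)
```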
Question: Is there any pre- or post-processing happening in the pipeline that’s influencing our model results?
Statistical models don’t always yield the results people want. When that happens, the right response is transparency about the methods and outcomes, not concealing or altering results to fit a preferred narrative. Unfortunately, in the MMM industry there are instances where analysts manipulate model results in post-processing to align with their own or the client’s expectations, which amounts to statistical malpractice.
A particularly concerning practice is “freezing coefficients,” where analysts keep outputs consistent with previous results, even when input data change. This is a very bad statistical practice and is intentionally misleading.
Ensure your MMM vendor does not engage in such practices and shares the outcomes of the statistical model (both positive and negative) transparently.
Question: How can we incorporate incrementality tests into our MMM?
Incrementality tests, also known as “lift tests,” are crucial for measuring the true causal impact of marketing channels at specific moments. These tests, being experimental, often provide a more accurate incrementality reading with fewer assumptions than MMM alone.
Effective MMMs should seamlessly integrate the results of multiple lift tests over time. It’s also worth asking how discrepancies are handled: if you run similar lift tests in the same channel at different times and get different results, the vendor should be able to assimilate that information and reflect genuine changes in channel performance over time.
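One common pattern, sketched below with purely illustrative numbers, is to treat the lift-test readout as an informative prior on the channel's ROI and combine it with the model's own estimate; a real Bayesian MMM would do this inside the model rather than as an after-the-fact average.

```python
# A sketch of one way a lift test can inform an MMM: treat the test
# readout as an informative prior on the channel's ROI and combine it
# with the model's own estimate via the conjugate-normal update. A real
# Bayesian MMM would do this inside the model; numbers are illustrative.
import numpy as np

lift_mean, lift_sd = 1.8, 0.4    # ROI implied by the lift test
model_mean, model_sd = 2.6, 0.7  # ROI the MMM estimates from observational data

prior_prec, model_prec = 1 / lift_sd**2, 1 / model_sd**2
post_prec = prior_prec + model_prec
post_mean = (prior_prec * lift_mean + model_prec * model_mean) / post_prec
post_sd = np.sqrt(1 / post_prec)

# The tighter the lift test's interval, the harder it pulls the estimate.
print(f"Calibrated ROI: {post_mean:.2f} +/- {post_sd:.2f}")
```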
Question: What happens if we don’t trust the results?
MMM results may differ from other marketing effectiveness measures you have, and that’s often a good thing: if the results merely mirrored your existing data, the MMM would be redundant. However, results that significantly conflict with your intuition or other data sources can be hard to accept.
In response to this question, look for a vendor who advocates a “test and learn” approach to validate the model’s outcomes. This might include verifying model results through a lift test, geo-holdout test, or designing an experiment to increase spend in a marketing channel temporarily to observe corresponding business KPI movements.
A good MMM vendor should aim to demonstrate the model’s efficacy through solid business results.
Additional Questions to Consider:
- Where can we find detailed documentation on the assumptions made by the model?
- How does the model handle channels like branded search and affiliates?
- Will you provide recommendations, or do we need to interpret the MMM and devise strategies ourselves?
- Our business is highly seasonal. How does the model account for seasonality?
- Our business experiences a lot of random variation (e.g., weather). Can the model still be effective for us?
- What kind of data do you use to build the model?
- What types of questions is your MMM not designed to answer?
- Does your model integrate data from platforms like Meta and Google to enhance its accuracy?
TLDR:
Finding an MMM vendor is a commitment to working together on a new measurement path. It’s not an easy thing to do – but it can be truly transformative and help you eliminate inefficient marketing spend.
Please ask these critical questions of every potential vendor, Recast included:
- How can we validate the model’s results?
- Which parameters in the model are estimated versus those selected by an analyst?
- For parameters set by an analyst, what is the process for setting those parameters?
- Can I easily obtain confidence/credible intervals for all parameters estimated in the model?
- Are we able to see true holdout predictions based on the model? Can we monitor those over time?
- What is the variable selection process or algorithm you apply?
- Is there any pre- or post-processing happening in the pipeline that’s influencing our model results?
- How can we incorporate incrementality tests into our MMM?
- What happens if we don’t trust the results?
Want to put us to the test? Let’s chat.