How Recast Is Building the World’s Most Rigorous MMM Platform

At Recast, we set out with an ambitious goal: to build the world’s most rigorous Marketing Mix Modeling (MMM) platform. This mission influences everything we do, from the way we allocate resources internally to how we obsess over model validation. 

At its core, our approach is about scientific rigor, transparency, and a relentless pursuit of the truth in marketing data. But why focus on rigor? Why is this so important to us and to our customers?

Why Rigor Matters in MMM

It’s trivially easy to run an MMM. Any data scientist can plug in marketing data, apply some basic statistical methods, and produce a model that yields results. You can even do it in Excel in a few minutes. However, running a good MMM and proving that it’s good? That’s the hard part.
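To make the point concrete, here is a minimal sketch of that trivially easy version, assuming Python with statsmodels and simulated stand-in data: a plain regression of revenue on channel spend that spits out plausible-looking per-channel "ROIs" in seconds.

```python
# A minimal sketch of the "trivially easy" MMM: plain linear regression
# of revenue on channel spend. The data is simulated stand-in data; in
# practice you'd load a real weekly spend/revenue table.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
weeks = 104
spend = rng.uniform(10, 100, size=(weeks, 3))   # e.g. tv, search, social
revenue = 150 + spend.sum(axis=1) * 2 + rng.normal(0, 40, weeks)

fit = sm.OLS(revenue, sm.add_constant(spend)).fit()
print(fit.params[1:])  # looks like per-channel "ROI" -- but is it causal?
```

Nothing in that output tells you whether the coefficients reflect causation or mere correlation.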

The real challenge is not just generating results but determining whether those results are correct and actionable. MMM is about causal inference, not simple prediction. The goal is to identify which marketing channels actually cause increases in revenue, not just which ones are correlated with it. Infinitely many models can fit the data equally well, but most of them will be wrong.

Without a rigorous approach to testing and validation, brands risk wasting massive amounts of money by reallocating their marketing budget based on misleading results. That’s why Recast focuses on building the world’s most rigorous MMM platform—because getting the model right is the only thing that actually matters.

A Culture of Transparency, Skepticism, and Continuous Improvement

From the beginning, we wanted to build a platform that wasn’t just about running models but about doing so in a way that aligned with the best scientific practices. 

Both founders, Michael Kaminsky and Tom, are trained statisticians who care deeply about using statistical methods correctly. Our team includes PhD-level researchers and data scientists who come from academic and scientific backgrounds.

We’ve built a culture that prioritizes transparency, continuous improvement, and skepticism. Everyone we hire is trained to poke holes in our models and methods. We want people who ask, ‘Could this be wrong?’ and use that skepticism to improve our models.

This emphasis on skepticism is also why we openly publish our documentation. We love – need – feedback and criticism from the broader community so we can continuously refine and improve our platform.

Tools for Ensuring Accuracy and Reliability

A key part of building the most rigorous MMM platform is having the right tools to test and validate the model. At Recast, we’ve developed a range of tools and checks to make sure that our models produce results that businesses can trust.

1. Parameter Recovery Checks

In parameter recovery, we generate data with known causal relationships and test whether the model can accurately recover those relationships. If a model can’t replicate the known relationships in a controlled environment, how can you trust it to identify causal relationships in real-world data? 
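As an illustration, here is a simplified version of the idea (not Recast’s actual pipeline): simulate data where the true channel effects are known, fit a model, and check whether the estimates land close to the truth.

```python
# Simplified parameter-recovery check: plant known channel effects in
# simulated data, refit, and compare recovered estimates to the truth.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_weeks = 156
true_betas = np.array([3.0, 1.5, 0.5])            # known channel effects
spend = rng.uniform(10, 100, size=(n_weeks, 3))   # simulated spend, 3 channels
revenue = 200 + spend @ true_betas + rng.normal(0, 25, n_weeks)

fit = sm.OLS(revenue, sm.add_constant(spend)).fit()
print("true effects:     ", true_betas)
print("recovered effects:", fit.params[1:].round(2))
# If the recovered effects are far from the truth, the model has failed
# the check -- it can't even find relationships we planted ourselves.
```

A real check would simulate adstock, saturation, and seasonality too, but the logic is the same: a model that can’t recover effects we planted ourselves doesn’t deserve trust on real data.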

2. Robustness Checks

Many statistical models are fragile: slight changes in the data can lead to vastly different results, which tells us the model is not robust. Robustness checks help us make sure that minor tweaks to the data or assumptions don’t drastically alter the results.

A robust model should produce stable results even when the data or assumptions are slightly adjusted, and if it doesn’t, that’s a sign something is wrong.
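One common way to operationalize this, sketched below on simulated data (illustrative only, not Recast’s implementation), is to refit the model on many slightly perturbed versions of the dataset and measure how much the estimates move.

```python
# Toy robustness check: refit on subsets with ~5% of weeks dropped at
# random and measure how much the channel estimates move across refits.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 156
X = rng.uniform(10, 100, size=(n, 3))                  # three channels' spend
y = 200 + X @ np.array([3.0, 1.5, 0.5]) + rng.normal(0, 25, n)

estimates = []
for _ in range(50):
    idx = rng.choice(n, size=int(n * 0.95), replace=False)
    fit = sm.OLS(y[idx], sm.add_constant(X[idx])).fit()
    estimates.append(fit.params[1:])

print("estimate spread across refits:", np.std(estimates, axis=0).round(3))
# A robust model shows a small spread; large swings signal fragility.
```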

3. Out-of-Sample Predictive Accuracy

A good MMM should not just fit the historical data—it should be able to predict future outcomes. To test this, we run out-of-sample predictive accuracy checks, where we hold out a portion of the data during model training and then ask the model to predict what happens in the holdout period. We don’t just want the model to work in the past. We want it to predict the future. 
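A bare-bones version of this check might look like the following sketch (simulated data; a production MMM would use its full model and time-ordered splits, not plain OLS).

```python
# Minimal holdout check: train on the first 80% of weeks, predict the
# remaining 20%, and score the out-of-sample error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 156
X = rng.uniform(10, 100, size=(n, 3))
y = 200 + X @ np.array([3.0, 1.5, 0.5]) + rng.normal(0, 25, n)

split = int(n * 0.8)                       # hold out the most recent weeks
fit = sm.OLS(y[:split], sm.add_constant(X[:split])).fit()
pred = fit.predict(sm.add_constant(X[split:]))

mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"holdout MAPE: {mape:.1f}%")        # out-of-sample, not in-sample
```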

4. Data Quality Checks

Even the best model can’t perform well if the data is flawed. That’s why we’ve built a fully automated data quality checking pipeline to catch errors before they reach the model. From missing data points to inconsistencies in data formats, our system is designed to flag issues early and prevent them from skewing the results.
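The individual checks are often simple; the value comes from running them automatically, every time. A hypothetical slice of such a pipeline (the column names are illustrative, not Recast’s actual schema) might look like this:

```python
# Hypothetical slice of an automated data-quality pipeline: each check
# flags issues before the data ever reaches the model.
import pandas as pd

def data_quality_report(df: pd.DataFrame, spend_cols: list[str]) -> list[str]:
    issues = []
    # Missing values in the outcome or any spend column
    for col in ["revenue", *spend_cols]:
        n_missing = df[col].isna().sum()
        if n_missing:
            issues.append(f"{col}: {n_missing} missing values")
    # Negative spend is almost always a data-entry or join error
    for col in spend_cols:
        if (df[col] < 0).any():
            issues.append(f"{col}: negative spend detected")
    # Duplicate or out-of-order periods in the date column
    dates = pd.to_datetime(df["date"])
    if dates.duplicated().any():
        issues.append("date: duplicate periods")
    if not dates.is_monotonic_increasing:
        issues.append("date: periods out of order")
    return issues

# Usage: issues = data_quality_report(df, spend_cols=["tv_spend", "search_spend"])
```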

Conclusion

In a world where it’s easy to run an MMM but hard to run a good one, we’re focused on building the most rigorous, reliable, and scientifically sound platform for marketing mix modeling.

By incorporating parameter recovery checks, robustness testing, out-of-sample predictive accuracy, and automated data quality pipelines, our goal is to help businesses allocate their marketing budget with confidence.
