The marketing measurement space has seen a surge in open-source marketing mix modeling (MMM) tools like Google’s Meridian and Meta’s Robyn.
As an MMM vendor, we love this. It has encouraged more people to consider MMM as part of their measurement mix and to start thinking about incrementality as a whole.
But here’s the problem: while running an open-source MMM is easy, trusting the results is a completely different challenge.
It’s not the code that’s the problem—it’s how you validate the results.
Many teams run their data through an open-source MMM, get some output, and say, “Yeah, this looks reasonable.” But that’s not validation. That’s confirmation bias.
If you’re making multi-million dollar decisions based on an MMM, “reasonable-looking” isn’t good enough. You need rigorous validation to ensure you’re not being misled. Otherwise, you risk shifting the budget based on faulty insights—burning cash in the process.
The real challenge isn’t running an MMM. It’s proving that the results are actually right—and that’s where most open-source MMMs fall short.
The Core Issue: Validation Gaps in Open-Source MMM Tools
Open-source MMMs like Robyn and Meridian have absolutely been a step forward for accessibility, but they often lack robust, built-in validation methods. And if you’re not validating your MMM results, you are… well, guessing.
Let’s be clear: just because a model produces results doesn’t mean those results are correct.
MMMs rely on a variety of assumptions – about how marketing impacts sales, how channels interact, how diminishing returns work, etc. If those assumptions are wrong (or even slightly off), your results will be misleading.
This is precisely why validation isn’t optional—it’s essential. Based on our experience, open-source MMMs particularly struggle with two critical validation methods:
1. Parameter Recovery Exercises
One of the simplest ways to test an MMM is to see if it can correctly recover known parameters from synthetic data.
How it works:
- You create a dataset where you already know the true ROI for each marketing channel.
- You run this dataset through the model and check: Does the MMM correctly estimate the known parameters?
- If it doesn’t, then the model is fundamentally flawed.
Let’s say you create a synthetic dataset where you know that paid search has a true ROI of 3x, and TV has a true ROI of 1.5x. If your MMM tells you that TV is actually more effective than paid search, that’s a huge problem.
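To make this concrete, here’s a minimal sketch of what a parameter recovery check could look like, using the same made-up numbers (paid search ROI of 3x, TV ROI of 1.5x). It is deliberately simplified: a plain linear model with no adstock or saturation stands in for the MMM, and every spend range and noise level is an assumption for illustration. In practice, you’d feed the synthetic dataset to the MMM you’re actually evaluating.

```python
import numpy as np

rng = np.random.default_rng(42)
n_weeks = 104  # two years of weekly data

# Synthetic spend for two channels (hypothetical ranges).
search_spend = rng.uniform(10_000, 50_000, n_weeks)
tv_spend = rng.uniform(20_000, 100_000, n_weeks)

# Ground truth we want the model to recover:
# paid search ROI = 3.0, TV ROI = 1.5, plus a baseline and noise.
TRUE_ROI = {"paid_search": 3.0, "tv": 1.5}
revenue = (
    200_000                                  # baseline sales
    + TRUE_ROI["paid_search"] * search_spend
    + TRUE_ROI["tv"] * tv_spend
    + rng.normal(0, 25_000, n_weeks)         # week-to-week noise
)

# Stand-in "model": ordinary least squares. Swap in the MMM you're evaluating.
X = np.column_stack([np.ones(n_weeks), search_spend, tv_spend])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
estimated = {"paid_search": coef[1], "tv": coef[2]}

# Did the model recover the parameters we baked in?
for channel, truth in TRUE_ROI.items():
    err = abs(estimated[channel] - truth) / truth
    print(f"{channel}: true ROI {truth:.2f}, estimated {estimated[channel]:.2f} ({err:.0%} off)")
```

A real recovery exercise would also bake adstock and saturation into the synthetic data so the test stresses the same assumptions the model makes, but the pass/fail logic is the same: if the estimates land far from the known ROIs, something is wrong with the model.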
If it struggles to correctly estimate ROI on a dataset where we already know the answer, why would we trust it on real-world data where we don’t?
But here’s the issue: most open-source MMM tools don’t include parameter recovery as a standard test… because many of these tools would likely fail this test.
2. Out-of-Sample Forecast Accuracy
Another essential validation method is testing whether an MMM can accurately predict future results.
How it works:
- You hold out a portion of your historical data (say, the last 3 months of marketing spend and revenue).
- You fit the MMM on only the earlier data and ask it to predict results for the held-out 3 months.
- Then, you compare the MMM’s forecast to what actually happened.
Let’s say your company spent $1M on Facebook Ads last quarter and saw $3M in incremental revenue.
If your MMM correctly predicts $3M in revenue for that period (without seeing the data!), that’s a good sign. But if it predicts only $1.5M or inflates it to $6M, that’s a major red flag.
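Here’s a minimal sketch of that holdout check. The dataset, column names, and effect sizes are invented so the example runs end to end, and a plain linear fit on the training window stands in for whatever MMM you’re actually evaluating.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n_weeks = 104

# Hypothetical weekly dataset; in practice this comes from your own reporting.
df = pd.DataFrame({
    "week": pd.date_range("2023-01-02", periods=n_weeks, freq="W-MON"),
    "facebook_spend": rng.uniform(50_000, 150_000, n_weeks),
    "search_spend": rng.uniform(20_000, 80_000, n_weeks),
})
# Simulated revenue (assumed effects) so the sketch is self-contained.
df["revenue"] = (
    300_000
    + 2.0 * df["facebook_spend"]
    + 2.5 * df["search_spend"]
    + rng.normal(0, 40_000, n_weeks)
)

holdout_weeks = 13  # roughly the last 3 months
train, test = df.iloc[:-holdout_weeks], df.iloc[-holdout_weeks:]

# Stand-in "MMM": fit only on the earlier data.
features = ["facebook_spend", "search_spend"]
X_train = np.column_stack([np.ones(len(train)), train[features].to_numpy()])
coef, *_ = np.linalg.lstsq(X_train, train["revenue"].to_numpy(), rcond=None)

# Forecast the held-out period from spend alone; the model never sees its revenue.
X_test = np.column_stack([np.ones(len(test)), test[features].to_numpy()])
forecast = X_test @ coef

actual = test["revenue"].to_numpy()
mape = np.mean(np.abs(forecast - actual) / actual)
print(f"Holdout MAPE over the last {holdout_weeks} weeks: {mape:.1%}")
```

A forecast that lands wildly off the actuals, whether half or double, is exactly the red flag described above; a simple error metric like MAPE makes the comparison concrete.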
Again, many open-source MMMs don’t include proper forecast validation steps, or they struggle with it: in our experience they tend to overfit historical data, ignore seasonality, and/or be too sensitive to noise.
Other Common Pitfalls in Open-Source MMMs
Even beyond these two core validation methods, there are other red flags to watch for in open-source MMMs:
1. Branded Search Bias
- If you’ve ever run an MMM and seen branded search outperforming every other channel, you’re not alone. Many models over-attribute revenue to branded search because it’s at the bottom of the funnel – but that doesn’t mean it’s actually driving incremental conversions.
2. Lack of Cross-Channel Interactions
- Most MMMs assume each channel works independently, but that’s not how marketing works. For example: TV ads drive branded search queries. If your MMM ignores that connection, it will underestimate TV’s impact and over-credit branded search (see the sketch after this list).
3. Diminishing Returns Are Hard to Model Correctly
- Most MMMs assume that marketing returns follow a predictable curve – every marketer knows that’s just not true. In reality, the relationship is often nonlinear and changes by channel, and a single, simple functional form (like an exponential decay) won’t be enough to capture real-world effects.
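To see how the first two pitfalls play out, here’s a small simulation (every number is invented) in which TV is the real driver of demand but the clicks, and therefore the revenue, flow through branded search. A model that treats the channels as independent hands nearly all the credit to branded search.

```python
import numpy as np

rng = np.random.default_rng(11)
n_weeks = 104

# TV is the true driver of demand (hypothetical spend levels).
tv_spend = rng.uniform(0, 100_000, n_weeks)

# Branded search clicks are mostly *caused* by TV (people search the brand
# they just saw), plus an organic baseline. Branded spend is just CPC * clicks.
branded_clicks = 500 + 0.02 * tv_spend + rng.normal(0, 200, n_weeks)
branded_spend = 1.50 * branded_clicks  # assumed $1.50 CPC

# Revenue is realized through those branded clicks, so it moves with branded
# search even though TV created the demand. A TV dollar is really worth
# 60 * 0.02 = $1.20 of revenue, delivered via the branded clicks it generates.
revenue = 100_000 + 60.0 * branded_clicks + rng.normal(0, 5_000, n_weeks)

# Naive "independent channels" regression: revenue ~ tv_spend + branded_spend.
X = np.column_stack([np.ones(n_weeks), tv_spend, branded_spend])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(f"Estimated revenue per dollar -- TV: {coef[1]:.2f}, branded search: {coef[2]:.2f}")
# Branded search soaks up essentially all the credit (~$40 per dollar) while TV
# looks worthless, even though cutting TV would collapse the branded clicks.
```

The numbers are made up, but the structure is the point: when one channel drives another, a model that ignores the connection will hand the downstream channel the credit the upstream channel earned.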
The Real Challenge: Turning Model Results into Trusted Marketing Decisions
When we started Recast, we underestimated just how difficult it is to do MMM well.
At first, we thought building a solid statistical model would be enough. But we quickly realized that wasn’t the hard part – validation was.
We’ve seen many companies go through the same journey. They start with open-source tools like Robyn or Meridian, and they’re excited by how easy it is to get a model up and running.
But over time, they run into roadblocks: how do you know the results are right? How do you communicate them effectively? How do you ensure decision-makers actually use them?
That’s where things get complicated.
Many of our customers came to us after trying open-source MMMs and realizing they didn’t just need results – they needed confidence in those results.
Building an MMM is easy. Anyone can do it. Making an MMM that is trustworthy, interpretable, and actually useful for making real-world budget decisions is much harder.