Why Every Marketing Mix Model Should Be Treated with Skepticism (Including Recast’s)

Our customers trust us to make $100M budget allocation decisions. We take that very seriously. Decisions will be made. Money will be won or lost.

And we cannot guarantee that we’ll always be right. That is simply not possible in our world. No MMM vendor can – and any vendor who claims otherwise is lying.

But we will always do everything in our power to help our customers put their money into the best possible bet.

We like to say we’re “default skeptical” at Recast and often tell people they should assume every MMM they run into (including ours!) is wrong.

And yes, we know this might seem odd for an MMM vendor… but we think this culture of skepticism is critical to building a truly rigorous platform.

This has become embedded in everything we do – our product, our processes, who we hire, who we partner with… 

Here’s what that looks like:

How Recast Embeds Skepticism, Rigor, and Transparency

We’re skeptic-first

Running data through an MMM is trivially easy. The hard part is building a comprehensive platform that validates every aspect of its underlying model.

So, every MMM should be approached with skepticism – including ours.

We’ve built a team that asks the hard questions:

  • Could this assumption be wrong?
  • How might the data mislead the model?
  • What happens if X scenario occurs?

This core value is our safeguard and drives us to question assumptions, validate results, and ensure that our models hold up to scrutiny.

It’s the difference between trusting your model and blindly relying on it. At Recast, we will never ask for blind faith. Instead, we will work to earn our clients’ trust by making our models as transparent, accurate, and reliable as possible.

We focus on validation

We’ve had people ask us why our configuration and validation process takes 4-6 weeks.

It’s because we’re obsessed with validating the model rigorously – and that takes time. It’s also the part of the process that matters most.

Here’s how we validate our models:

  • Backtesting: Can the model predict outcomes in scenarios it hasn’t seen before? If it only fits historical data, it’s not good enough. (A minimal sketch of this check follows the list.)
  • Alignment with experiments: Does the model match lift tests, geo-holdouts, or other experiments? If discrepancies arise, we investigate and refine until the results align.
  • Iterative refinement: When something looks off, we don’t ignore it. Instead, we ask:
    • Is there missing context, like pricing changes or shifts in seasonality?
    • Are we capturing saturation effects or channel constraints correctly?
    • Does the model structure itself need adjustment?
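
To make the backtesting bullet concrete, here is a minimal sketch of a rolling-origin backtest in Python. It is an illustration, not Recast’s actual pipeline: the toy data, the rolling_backtest helper, and the ordinary-least-squares stand-in for the real model are all hypothetical.

```python
import numpy as np

def rolling_backtest(X, y, initial_periods=104, horizon=4):
    """Rolling-origin backtest: repeatedly fit on data up to a cutoff,
    predict the next `horizon` periods, and score the error on data the
    model has never seen. (A real MMM would replace the least-squares
    stand-in below with the full model fit.)"""
    window_mape = []
    for cutoff in range(initial_periods, len(y) - horizon + 1, horizon):
        X_train, y_train = X[:cutoff], y[:cutoff]
        X_test, y_test = X[cutoff:cutoff + horizon], y[cutoff:cutoff + horizon]

        # Hypothetical stand-in model: ordinary least squares.
        coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
        y_pred = X_test @ coef

        # Mean absolute percentage error on the unseen holdout window.
        window_mape.append(np.mean(np.abs((y_test - y_pred) / y_test)))
    return np.array(window_mape)

# Toy example: 3 years of weekly revenue driven by 3 media channels.
rng = np.random.default_rng(0)
spend = rng.gamma(2.0, 1000.0, size=(156, 3))
revenue = 20_000 + spend @ np.array([0.8, 1.5, 0.3]) + rng.normal(0, 500, 156)

X = np.column_stack([np.ones(156), spend])  # intercept + channel spend
mape = rolling_backtest(X, revenue)
print(f"mean out-of-sample MAPE across windows: {mape.mean():.1%}")
```

A model that fits history beautifully but produces large errors in these holdout windows is exactly the kind of model this check is designed to catch.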

Being skeptical comes down to doing things the right way. It comes down to running check after check after check. And not taking shortcuts. 

We put our money where our mouth is

At Recast, we share everything:

  • Model documentation: Our technical documentation is publicly available.
  • Results: All model outputs, including backtests, are published directly in our platform for clients to review.
  • Discrepancies: If our model doesn’t align with lift tests or experiments, we don’t hide it. Instead, we investigate and iterate until the issue is resolved. (A simple version of that alignment check is sketched below.)
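
As an illustration of that last point, here is one simple way to flag a model/experiment discrepancy: check whether the experiment’s confidence interval overlaps the model’s credible interval for the same quantity. The numbers and the flag_discrepancy helper are hypothetical, and interval overlap is a deliberately crude heuristic – a real comparison would be more careful – but it shows the shape of the check.

```python
import numpy as np

def flag_discrepancy(posterior_roi, lift_point, lift_se, z=1.96):
    """Crude alignment check: does the lift test's ~95% confidence
    interval overlap the model's 95% posterior credible interval for
    the same channel's ROI? If not, flag for investigation."""
    model_lo, model_hi = np.percentile(posterior_roi, [2.5, 97.5])
    test_lo, test_hi = lift_point - z * lift_se, lift_point + z * lift_se
    aligned = (model_lo <= test_hi) and (test_lo <= model_hi)
    return aligned, (model_lo, model_hi), (test_lo, test_hi)

# Hypothetical numbers: 4,000 posterior ROI draws vs. a geo-lift readout.
rng = np.random.default_rng(1)
draws = rng.normal(2.4, 0.3, size=4_000)   # model thinks ROI is ~2.4
aligned, model_ci, test_ci = flag_discrepancy(draws, lift_point=1.2, lift_se=0.2)
print(f"model 95% CI: ({model_ci[0]:.2f}, {model_ci[1]:.2f})")
print(f"test  95% CI: ({test_ci[0]:.2f}, {test_ci[1]:.2f})")
print("aligned" if aligned else "discrepancy: investigate before trusting either number")
```

When this check fires, the next step is the iterative-refinement questions above: look for missing context, re-examine saturation and channel constraints, and revisit the model structure itself.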

The model needs to be inspected and approached with doubt. Stress-tested. Consistently.

Closing Thoughts

We’re statisticians and we’re building a (very complex) statistical model. We obsess over this. And the only way we can trust the models we build with our clients is by embracing this culture of skepticism that goes above and beyond industry standards. 

And that’s why we’re so proud of what we’re building here at Recast.
