What Everyone Gets Wrong About AI in Media Mix Modeling (MMM)

Everyone’s talking about “AI-based” media mix modeling. There’s a lot of hype in the space – but what’s real and what’s not?

From our experience, there’s a fundamental mismatch between what large language models (LLMs) do and what MMMs require. 

LLMs like ChatGPT are optimized for one thing: predicting the next word in a sequence. MMMs, by contrast, estimate causal relationships over time. They’re solving completely different mathematical problems, and expecting one to perform the other’s job is a category error.

LLMs don’t learn from data in the way that statistical models do. When you use an LLM, the weights of the neural network do not change. They are just being pushed through the network to generate tokens. In contrast, an MMM must learn from data by estimating model parameters, and this requires a fundamentally different process.
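To make the distinction concrete, here is a minimal sketch (a toy regression, not a real MMM) of what "learning from data" means for a statistical model: the estimation step recovers parameter values from observations, which is exactly what a frozen-weight LLM forward pass never does. All the numbers below are made up for illustration.

```python
import numpy as np

# Toy illustration (not a real MMM): 52 weeks of spend on two channels,
# with sales generated from known "true" channel effects plus noise.
rng = np.random.default_rng(0)
spend = rng.uniform(0, 100, size=(52, 2))   # weekly spend per channel
true_effects = np.array([0.8, 0.3])         # hypothetical channel ROIs
sales = 50 + spend @ true_effects + rng.normal(0, 5, size=52)

# The "learning" step: estimate an intercept and per-channel effects
# from the data. This parameter estimation is the part an LLM's
# fixed-weight token generation does not perform.
X = np.column_stack([np.ones(52), spend])
params, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(params)  # estimated [intercept, channel_1_effect, channel_2_effect]
```

With enough data, the estimated effects land close to the true values used to generate the sales series; that recovery of parameters from observations is the training step the article is describing.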

We’re not saying that LLMs can’t do MMM. We’re saying LLMs can’t do MMM well right now. You’d have to program into the LLM the ability to learn parameters from data and have a training step – and that’s not something LLMs are built to do yet.

Speed Isn’t the Bottleneck Anymore – Validation Is

One of the problems with the hype around AI-based MMMs is the assumption that the bottleneck in media mix modeling – the thing we actually care about – is speed. AI can absolutely make media mix modeling faster.

But building an MMM was never the hard part. A decent analyst can run thousands of models in a couple of hours. So maybe AI gets you from two hours to one hour and fifteen minutes. That’s just not what actually matters. Running the model isn’t the challenge. 

Knowing whether the results are correct – that’s the hard part.

Media mix modeling is one of those rare domains where generating results is easy, but validating them is extremely difficult. This is why improvements in speed, iteration, and model generation miss the real problem most marketers face. In fact, the ease of generating results with AI can make it even more dangerous because it’s easier to create the illusion of precision without any actual reliability behind it.

Again, if you’re a senior marketer looking to use MMM to guide next quarter’s budget, you don’t care how fast your team ran the model. You care whether the forecast will hold up when spend shifts, channels change, or reality intervenes.

The bottleneck is the trust in the output. So how can you build that trust?

Where AI Can Help: The Services Layer

To be clear, we’re not dogmatic about AI. Just because it isn’t doing the hard statistical modeling today doesn’t mean it’s useless. We’re seeing a growing role for AI in the services layer that surrounds the MMM engine.

Think of tasks like summarizing results, translating outputs into clear recommendations, or generating “what-if” forecasts at scale. These are exactly the kinds of problems that LLMs are well-suited to solve. 

For example, the model may tell you that YouTube drove 40% of incremental sales last quarter. But AI can help frame that insight into a recommendation for your CFO: “consider increasing YouTube spend by 10% to test for further scale.” It’s not doing causal inference, just helping you act on it.

AI also shines in operational automation. Need to run a report that shows spend vs. return by channel over the last 12 months? Or simulate what happens if you shift 15% of the budget from TV to Meta? These are repetitive, structured tasks that LLMs can make easier and faster.
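A what-if simulation like the one above can be sketched in a few lines. The response curves and budget figures below are invented for illustration – a real MMM would estimate these curves from data – but the mechanics of the reallocation are the same.

```python
# Hypothetical what-if: shift 15% of the TV budget to Meta and compare
# predicted total returns under simple diminishing-returns curves.
# Curve parameters and budgets are made up, not from any real model.

def response(spend, alpha, beta):
    # Saturating (Hill-style) curve: returns grow with spend, then flatten.
    return alpha * spend / (spend + beta)

budget = {"TV": 1_000_000.0, "Meta": 400_000.0}
curves = {"TV": (2.0e6, 800_000.0), "Meta": (1.5e6, 300_000.0)}  # assumed

def total_return(b):
    return sum(response(b[ch], *curves[ch]) for ch in b)

# Move 15% of TV spend to Meta, holding total budget fixed.
shifted = dict(budget)
move = 0.15 * shifted["TV"]
shifted["TV"] -= move
shifted["Meta"] += move

print(round(total_return(budget)), round(total_return(shifted)))
```

The point is that the simulation itself is cheap and mechanical; what makes the answer trustworthy is whether the response curves behind it were validated, which is the human-oversight problem the next paragraph describes.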

But even here, human oversight matters. AI doesn’t know what actually happened in your business. It wasn’t in the meeting when the product launch got delayed or when the sales team changed their comp plan – it’s lacking context. So don’t take your hands off the wheel just yet.  

The future of media mix modeling is hybrid. The Recast core model is built on a fully Bayesian statistical framework, estimated using Hamiltonian Monte Carlo (HMC). There is a very real sense in which this algorithm is doing “machine learning” or “AI” – but you won’t find “AI” plastered across Recast’s marketing.

If you’re evaluating MMM tools or teams, ask the hard questions: what part of the process does the AI actually control? Who’s validating the outputs? What happens when the model is wrong? 

TL;DR 

  • Large language models don’t perform causal inference and can’t replace the core modeling that underpins robust media mix models.
  • AI can rapidly build and iterate MMMs, but this speed doesn’t address the hardest part: validating whether the results are actually correct.
  • The real bottleneck in MMM is validation and interpretation.
  • AI that supports, rather than replaces, human expertise is currently the most viable path to trustworthy MMM.