What Are Priors in MMM – And Why They’re Difficult to Get Right (But You Need To)

How do you make sure your model doesn’t spit out negative ROIs on channels that are clearly working? Or attribute three times more revenue than your business actually generated? Or collapse when two correlated channels move together?

These are the kinds of problems priors are meant to solve.

They’re one of the most powerful parts of Bayesian modeling — and one of the easiest to get wrong. They’re also where your marketing intuition meets the math, which makes them both an art and a science.

In this article, we’ll share how we approach priors inside Recast: what makes them tricky, how we structure them, and why they matter more than most marketers realize.

What Are Priors in Marketing Mix Modeling?

In a Bayesian media mix model, priors are simply what you believe before you ever look at the data. Think of them as expressing real-world knowledge — like “it’s very unlikely that my sales go down when I spend more on advertising” — in mathematical language.

Here’s why that matters:

  1. They ground your model in marketing reality. Without constraints, a model can produce negative ROIs or absurd 1000× coefficients — just because the data alone is noisy or channels overlap. Priors act like guardrails and keep results within plausible bounds.
  2. They help when data is weak — for example, when multiple channels are highly correlated. A purely data-driven approach can collapse, but priors guide the model toward realistic estimates.
  3. They make assumptions explicit. Unlike frequentist models that hide their assumptions, Bayesian priors force you to say “ROI is probably positive, maybe between 0 and 10×” so the model doesn’t wander into fantasy territory. (There’s a short sketch of this after the list.)
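
To make that last point concrete, here is a minimal sketch of what “ROI is probably positive, maybe between 0 and 10×” can look like as an actual prior distribution. It’s purely illustrative: the distribution family and every parameter are assumptions for demonstration, not Recast’s production configuration.

    import numpy as np
    from scipy import stats

    # A lognormal prior keeps ROI strictly positive and concentrates
    # mass at plausible values; these parameters are illustrative.
    roi_prior = stats.lognorm(s=0.8, scale=2.0)  # median ROI of 2x

    draws = roi_prior.rvs(size=100_000, random_state=42)
    print(f"Median ROI: {np.median(draws):.2f}x")
    print(f"95% of prior mass below: {np.percentile(draws, 95):.2f}x")
    print(f"P(ROI > 10x): {np.mean(draws > 10):.3f}")  # rare, not impossible

Note that this encodes “positive, probably modest, occasionally large” as a soft statement rather than a hard 0–10 cap, so the data can still pull the estimate higher if the evidence is strong.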

To be clear, priors don’t override your data; they make sure it’s interpreted sensibly.

But while the idea is simple, actually setting good priors — ones that are both plausible and useful — is where things get hard. 

Why Setting Priors in MMM Is So Difficult

In theory, priors are just your beliefs about the world before seeing the data. In practice, getting them right is not so easy. Here’s why:

Challenge 1: Multicollinearity Makes Priors Hard to Specify

Marketing data is noisy and highly correlated — especially in large media budgets. Channels like Facebook and Google often scale together, which makes it difficult to tease apart who’s actually driving impact.

That’s a problem when setting priors.

If the data can’t distinguish signal from noise, your prior beliefs end up carrying more weight. But now you’re not just asking, “What do I believe Facebook’s ROI is?” — you’re being forced to guess what portion of total impact should go to Facebook vs. Google, even though they moved in tandem.

Multicollinearity makes those beliefs harder to specify — and riskier to get wrong.

Set the prior too strong? You might override meaningful signal. Set it too weak? The model might collapse into nonsense (like 1000x ROI for one channel and –999x for another). The key is not just setting a “realistic” range, but understanding where the data needs more guidance — and where it doesn’t.
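
To see the failure mode in miniature, here’s a small simulation (illustrative only, with assumed spend patterns, true ROIs, and prior strength). Two channels move in lockstep; plain least squares produces unstable, offsetting coefficients, while a zero-mean Gaussian prior on the channel coefficients (whose MAP estimate is ridge regression) pulls both back toward plausible values.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 104  # two years of weekly observations

    # Facebook and Google spend scale together almost perfectly.
    facebook = rng.uniform(50, 150, n)
    google = 1.2 * facebook + rng.normal(0, 3, n)

    # True world: each channel returns about 2x, on top of noisy base demand.
    y = 1000 + 2.0 * facebook + 2.0 * google + rng.normal(0, 150, n)
    X = np.column_stack([np.ones(n), facebook, google])  # intercept + channels

    # Unconstrained least squares: the near-duplicate columns allow large,
    # offsetting coefficients that fit the data almost as well as the truth.
    ols = np.linalg.lstsq(X, y, rcond=None)[0]
    print("OLS channel coefficients:    ", np.round(ols[1:], 2))

    # A zero-mean Gaussian prior on the channel coefficients gives the
    # ridge MAP estimate: beta = (X'X + lam*I)^-1 X'y, intercept unpenalized.
    lam = 2000.0
    penalty = lam * np.eye(3)
    penalty[0, 0] = 0.0  # leave the intercept free
    ridge = np.linalg.solve(X.T @ X + penalty, X.T @ y)
    print("With-prior channel estimates:", np.round(ridge[1:], 2))

The prior doesn’t pick a winner between the two channels; it just rules out the wild offsetting solutions that the data alone can’t distinguish from the sensible one.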

Challenge 2: Marketing Environments Change Faster Than Your Priors Do

The marketing environment is highly dynamic. Algorithms shift, creatives rotate, competitors respond. 

There’s a real half-life on the results of the experiments you run.

Priors that assume stationarity — unchanging baseline performance, fixed channel lag effects — can rapidly become outdated. The model may still converge, but the inferences it yields won’t reflect present-day reality.

Challenge 3: Intuition and Math Don’t Always Align

A common modeling failure is internal inconsistency. Say you believe 50% of revenue is organic and that every channel has a 5x ROI. Each belief might feel directionally right — but together they can imply triple the revenue that actually occurred.

Priors must be coherent in aggregate. When intuition is vague or contradictory, a well-built model can flag those conflicts — but only if priors are carefully constructed and stress-tested.
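
Here’s that arithmetic with illustrative numbers, all assumed for demonstration: a brand with $10M in actual revenue and $3M in annual ad spend.

    # Coherence check on stated beliefs; every figure here is assumed.
    actual_revenue = 10_000_000
    ad_spend = 3_000_000

    paid_revenue = 5.0 * ad_spend                # "every channel has a 5x ROI"
    organic_share = 0.5                          # "50% of revenue is organic"
    implied_total = paid_revenue / (1 - organic_share)  # paid is the other 50%

    print(f"Implied revenue: ${implied_total:,.0f}")         # $30,000,000
    print(f"Actual revenue:  ${actual_revenue:,.0f}")        # $10,000,000
    print(f"Overshoot: {implied_total / actual_revenue:.0f}x")  # 3x

Neither belief looks wrong in isolation; the contradiction only appears when you combine them, which is exactly what a model forces you to do.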

Challenge 4: You Can’t Avoid Assumptions, But You Can Make Them Explicit

All models rely on assumptions. The Bayesian framework makes them transparent through priors. 

But the challenge is to be intentional: uninformative priors may seem “safe,” but they still encode assumptions — and biased ones can distort results without warning. Good modeling means documenting and validating every prior before touching the data.

How Recast Builds Priors That Reflect Marketing Reality

Setting priors well is central to bridging marketer intuition and model accuracy. At Recast, we treat them as a core part of model design, grounding them in both statistical rigor and marketing reality:

Step 1: Start with Structured Discovery to Surface Business Beliefs

Every model starts with structured discovery. We don’t expect brands to know their exact incrementality numbers — that’s why they’re working with us. But even directional beliefs make good guardrails.

We ask:

  • How much of your business is organic?
    This isn’t a fixed number — it’s a belief. For some DTC brands, it might be 20–30%. For legacy brands with strong word-of-mouth, it might be 70%+.
  • How long is your purchase journey?
    Timing shapes attribution. A mattress brand has a very different lag than a food delivery app. We use this to build realistic decay curves.
  • What role does each channel play?
    We ask how teams think about channels — not just spend. Is YouTube driving awareness? Is paid search just picking up branded queries?
  • What else impacts performance?
    We look for macro and micro factors — Black Friday spikes, pricing tests, iOS changes. These need to be modeled or explicitly excluded.

The goal is not precision. It’s to codify how the business actually operates — so we don’t let the data override common sense.

Step 2: Encode Those Beliefs into Mathematical Priors

Next, we encode those beliefs into mathematically useful priors (there’s a code sketch after the list below).

  • Base demand:
    This acts like a floor. For mature brands, we assume a higher baseline level of sales. For newer brands, we assume more volatility and dependence on paid media.
  • Channel ROI/CPA:
    Priors help keep ROI estimates plausible, especially when there’s multicollinearity. If Facebook and Google always move together, priors stop the model from assigning absurd results (like 1000x for one, -999x for the other).
  • Time-shift curves:
    Channels don’t work on the same timelines. Display might take weeks to convert. Paid search is immediate. Each is assigned a lag profile based on real-world behavior and the brand’s experience.
  • Lift test anchoring:
    If clients have run clean lift tests, we use them. We can pin a channel’s ROI to that range (e.g., 3.5–4.5x) so the model doesn’t override it with noisy observational data.
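
As an illustration of what “mathematically useful” can mean, here’s a minimal sketch using scipy distributions. Every distribution family and parameter below is an assumption chosen for demonstration, not Recast’s actual configuration.

    from scipy import stats

    # Base demand: a mature brand gets a high, fairly tight floor,
    # expressed here as organic's share of total revenue.
    base_demand_share = stats.beta(a=20, b=10)      # centered near 2/3

    # Channel ROI: strictly positive, plausible center, long right tail.
    facebook_roi = stats.lognorm(s=0.6, scale=1.5)  # median ROI of 1.5x

    # Time-shift: weekly geometric decay rates with Beta priors.
    display_decay = stats.beta(a=4, b=2)  # slow decay, weeks-long lag
    search_decay = stats.beta(a=1, b=8)   # fast decay, near-immediate effect

    # Lift-test anchoring: a clean test said roughly 3.5-4.5x, so this
    # prior is tight around 4x instead of spanning a wide range.
    tested_roi = stats.truncnorm(a=-2.0, b=2.0, loc=4.0, scale=0.25)

    for name, dist in [("facebook_roi", facebook_roi), ("tested_roi", tested_roi)]:
        lo, hi = dist.ppf(0.05), dist.ppf(0.95)
        print(f"{name}: 90% prior interval {lo:.2f}x to {hi:.2f}x")

The particular numbers matter less than the shape of each statement: tight where evidence (like a lift test) is strong, wide where it isn’t.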

It’s important to note that priors aren’t fixed; the data will update them. But they start as explicit, directional beliefs from the people who know the brand best.

Step 3: Run Simulations to Validate and Sanity Check Priors

Before we train any model, we validate that our priors even make sense.

  • Prior predictive checks:
    We simulate data using only the priors. If the model predicts $5M in revenue for a brand that makes $50M, we know something’s off. This catch-before-you-train step helps us prevent wasted modeling cycles. (There’s a rough code sketch of this check after the list.)
  • Parameter recovery checks:
    We then simulate “known” data, feed it through the model, and see if it recovers what we put in. This is how we verify that our model can even answer the types of questions marketers want answered.
  • Configuration compatibility:
    Finally, we gut-check: do these priors fit the reality of the business? If a brand says their base is 80% organic and they have a 5x paid ROI, we sanity check that those assumptions don’t imply more revenue than actually exists.
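
Here’s that rough sketch of the prior predictive idea. The toy model (flat weekly spend, one aggregate ROI, Gaussian weekly base demand) and every number are assumptions for illustration only.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n_sims, n_weeks = 2_000, 52
    weekly_spend = 80_000  # assumed flat weekly spend for the toy example

    # Draw parameters from the priors alone; no observed data is involved.
    weekly_base = stats.norm(600_000, 100_000).rvs(n_sims, random_state=rng)
    roi = stats.lognorm(s=0.6, scale=1.5).rvs(n_sims, random_state=rng)

    # Annual revenue implied purely by the priors.
    implied_annual = n_weeks * (weekly_base + roi * weekly_spend)

    actual_annual = 50_000_000
    lo, hi = np.percentile(implied_annual, [5, 95])
    print(f"Prior predictive 90% interval: ${lo / 1e6:.0f}M to ${hi / 1e6:.0f}M")
    print(f"Actual annual revenue:         ${actual_annual / 1e6:.0f}M")
    # If the actual number sits far outside the interval, fix the priors
    # before fitting anything.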

This process makes the model not just more accurate — but more useful. And it builds trust with the marketers who have to use the results.

Recap: Priors Make or Break a Good MMM Model. Here’s How to Get Them Right.

  • Priors keep media mix modeling outputs grounded in marketing reality
  • But they’re difficult to set, especially with multicollinearity and shifting environments
  • Many MMM failures stem from poor or unvalidated prior assumptions
  • At Recast, we follow a 3-step process: elicit, encode, validate
  • The result: models that reflect the real world, not just noisy data

About The Author