Priors are one of the least understood – and most essential – parts of a Bayesian media mix model.
In simple words, a prior is what you know (or believe) about the world before the model sees any data. In marketing, that might include common-sense expectations like "spending money on advertising probably doesn't reduce my sales," or that impulse-buy channels behave differently than high-consideration ones. In Bayesian statistics, priors are what allow us to translate those assumptions into mathematical form.
So why does this matter? Because real-world marketing data is messy. It’s collinear, noisy, and often too limited to answer every question on its own. If you feed that data into a model without priors, it will still return an answer, but it may be wildly unrealistic.
For example, without any constraints, a regression could spit out a -300% ROI for a channel that everyone agrees has value. Obviously impossible, but you wouldn’t believe how often we see that.
And that’s where priors come in. They don’t (shouldn’t) override the data, but they provide guardrails to help the model produce reasonable results. This doesn’t mean introducing bias or cherry-picking results, but setting up the constraints in a way that represents your knowledge about the world.
The Spectrum: From Uninformative to Informative Priors (And Why Neither Is “Best”)
Marketers often hear the terms “informative” and “uninformative” priors thrown around in MMM conversations. The truth is that there’s no hard definition separating an uninformative set of priors from an informative one. Instead, think of them as two ends of a spectrum.

A truly uninformative prior means the model starts with almost no assumptions, and anything could happen. This can lead to absurd results. Imagine you were building a model of human height and allowed people to be anywhere from -20 to 300 feet tall – that would be an “uninformative” prior: technically correct, but clearly not useful.
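To make that concrete, here's a toy sketch of the height example. The 1–8 ft "plausible" window and the draw count are our own illustrative choices:

```python
import random

random.seed(0)

# A toy "uninformative" prior on human height: uniform from -20 to 300 feet.
# It technically covers the truth, but almost all of its mass is absurd.
draws = [random.uniform(-20, 300) for _ in range(10_000)]

# Fraction of prior draws that are even physically possible
# (using 1-8 feet as an illustrative "plausible" window):
plausible = sum(1 <= h <= 8 for h in draws) / len(draws)
print(f"share of plausible heights under the prior: {plausible:.1%}")
```

Roughly 98% of the prior's probability mass sits on heights no human has ever had, before the model has seen a single data point.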
On the other side of the spectrum, you have informative priors that impose stronger assumptions. The challenge here is not getting so specific that you end up putting your thumb on the scale.
In practice, most MMMs land somewhere in the middle. You want priors that reflect what’s plausible, but not ones that bias your results. What matters most isn’t how “informative” your priors are in the abstract. It’s whether they’re justifiable and whether the people who will be using your results would agree with them.
How Recast Builds Priors: Guardrails, Not Guarantees
At Recast, we use priors to keep the model grounded in reality.
The first step is what’s called a prior predictive check. Before we fit the model to any data, we simulate what the priors alone would imply. If those simulations suggest sales could range from -$20,000 to $800 billion for a mid-sized brand, we know something’s off. What we’re looking for is that the range of possible sales implied by the simulation alone falls within an order of magnitude or so of the actual data.
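A minimal prior predictive check can be sketched in a few lines. This is a toy one-channel model with made-up priors and a hypothetical observed-sales figure, not Recast's actual setup:

```python
import math
import random

random.seed(1)

# Hypothetical priors for a toy one-channel MMM (illustrative numbers only):
def draw_prior_predictive(weekly_spend=50_000):
    base = random.lognormvariate(math.log(200_000), 0.5)  # baseline weekly sales
    roi = random.normalvariate(1.5, 1.0)                  # channel ROI prior
    return base + roi * weekly_spend

sims = sorted(draw_prior_predictive() for _ in range(5_000))
median = sims[len(sims) // 2]
print(f"prior predictive weekly sales: ${sims[0]:,.0f} to ${sims[-1]:,.0f}")

# The check: simulated sales should land within roughly an order of
# magnitude of what the brand actually sells (hypothetical figure here).
observed_weekly_sales = 300_000
assert observed_weekly_sales / 10 < median < observed_weekly_sales * 10
```

No data has been fit yet; we are only asking whether the assumptions, on their own, produce a world that looks vaguely like the brand's.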
The goal is to avoid obviously implausible outcomes, not to force a specific answer. We’re not baking in a belief that Meta always works or that TV always fails. All we’re doing is constraining the model at a high level.
Here’s how we actually set priors:
- Base demand: For established brands, it’s higher. For challengers, it’s lower and more reliant on paid spend.
- ROI/CPA priors: These must align with base demand and total sales, but leave room for variation between channels.
- Time-shift curves: Different for each channel. TV behaves differently from paid social or direct mail.
- Lift test anchoring: If a client says “Facebook ROI was 3.5–4.5x in September,” we can constrain the model to reflect that external evidence.
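One way to picture the lift-test anchoring step: encode the client's reported range as a tighter prior and compare it to a loose default. The numbers below are made up, and treating "3.5–4.5x" as roughly a 95% interval is our own simplifying assumption:

```python
import random

random.seed(2)

# Lift test said "Facebook ROI was 3.5-4.5x in September". Reading that
# range as roughly a 95% interval gives mean 4.0, sd 0.25 (an assumption).
lift_mean, lift_sd = 4.0, 0.25

default_prior = [random.normalvariate(1.5, 1.5) for _ in range(10_000)]
anchored_prior = [random.normalvariate(lift_mean, lift_sd) for _ in range(10_000)]

def mass_in_range(draws, lo=3.5, hi=4.5):
    return sum(lo <= x <= hi for x in draws) / len(draws)

print(f"default prior mass in 3.5-4.5x:  {mass_in_range(default_prior):.1%}")
print(f"anchored prior mass in 3.5-4.5x: {mass_in_range(anchored_prior):.1%}")
```

The anchored prior concentrates most of its mass where the experiment pointed, while the loose default leaves the model free to land almost anywhere.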
The Real Danger: Cooking the Model to Get the “Right” Result
To be clear, it is absolutely possible to bias a Bayesian model using priors. If you tell the model that Facebook ROI is probably 7x, then – surprise – it’s going to pull that way.
And to be honest, the more common issue we see isn’t bad-faith manipulation. It’s overconfidence.
Analysts and vendors often impose priors that reflect what they hope to see, or expect to see, not what the data supports. That might mean biasing all estimates toward positive ROI to make performance look better, or using tight priors to “smooth out” noisy channels and hide uncertainty.
That’s why we encourage a simple test: if you showed your priors to a skeptical CFO, would they agree with them? If not, it’s a red flag. Seriously, go show your finance team – this kind of framing builds credibility with them.
In our experience, CFOs care more about whether your priors are reasonable than whether your intervals are narrow. They want to know the model is playing fair. If your forecast range shifts because of a more conservative prior, that’s fine. What matters is that they understand why.
And when in doubt, run a sensitivity check. Use a tighter prior, then a looser one. What changes? What doesn’t? If the conclusions fall apart under slightly different assumptions, they probably weren’t that solid to begin with.
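Here's what a minimal sensitivity check might look like in a toy conjugate-normal setting (all numbers hypothetical). A loose prior lets the data speak; a tight one visibly pulls the answer:

```python
# Sketch of a prior sensitivity check using the standard normal-normal
# conjugate update. Hypothetical setup: 52 weeks of data suggesting
# ROI around 2.0, against a prior belief centered at 4.0.
def posterior_mean(prior_mean, prior_sd, data_mean=2.0, data_sd=0.5, n=52):
    # Precision-weighted average of prior and data.
    prior_prec = 1 / prior_sd**2
    data_prec = n / data_sd**2
    return (prior_prec * prior_mean + data_prec * data_mean) / (prior_prec + data_prec)

tight = posterior_mean(prior_mean=4.0, prior_sd=0.25)
loose = posterior_mean(prior_mean=4.0, prior_sd=2.0)
print(f"posterior ROI under tight prior: {tight:.2f}")
print(f"posterior ROI under loose prior: {loose:.2f}")
```

If the two answers were far apart, you'd know the conclusion is riding on the prior rather than the data, and that's worth flagging before anyone reallocates budget over it.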
In the end, great priors are defensible and transparent.
TLDR:
- Priors are necessary assumptions in Bayesian MMM that keep models realistic – but they must be carefully chosen to avoid bias.
- “Informative” and “uninformative” priors exist on a spectrum and should reflect plausible, defensible assumptions.
- At Recast, we use prior predictive checks to constrain models without favoring any channel or result, ensuring guardrails without distortion.
- Communicating ranges and priors transparently builds trust with finance because clarity is what actually earns you budget.


