Let’s say you spent millions last quarter on a brand campaign. Where does that show up in your MMM?
Everyone wants a clean causal brand measurement chain: “we spent $1M on TV → awareness increased → revenue went up.”
It’s absolutely not that simple. There are three main challenges in measuring long-term brand effects with media mix modeling:
Challenge 1: How do you connect today’s spend to revenue months later?
Long-term brand effects are tough to measure – because, well, they’re long-term. You might invest in brand marketing today, but the results could take months or years to materialize. And the longer that gap, the harder it becomes to draw a straight line between spend and outcome.
Because of this, we’ve seen a lot of companies invest in brand ads with no plan for measuring whether the campaign was effective. We think about it like this:
There are two types of brand effects, and they’re measured differently: short-term effects and long-term effects.
Short-term effects happen soon after that advertising goes out in the world.
For example, someone hears your radio ad in the car on the way to the grocery store, and that specifically drives them to pick your product over a competitor’s.
Short-term effects can be measured with a variety of experiments or with MMM. The goal is to make the connection between the ad spend and the lift of purchases over the baseline, so we’re able to attribute those additional sales to that extra marketing activity.
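The lift-over-baseline idea can be sketched as a toy holdout comparison. All the numbers here are made up for illustration; a real experiment would use matched regions and proper statistical tests:

```python
# Hypothetical holdout test: compare sales in regions exposed to the ad
# against matched holdout regions that never saw it.
exposed_sales = [120, 130, 125]   # illustrative daily sales with the ad running
holdout_sales = [100, 104, 102]   # illustrative baseline regions without the ad

baseline = sum(holdout_sales) / len(holdout_sales)
lift = sum(exposed_sales) / len(exposed_sales) - baseline
print(f"Incremental daily sales attributable to the ad: {lift:.1f}")
```

The difference between the exposed group and the baseline is what gets attributed to the extra marketing activity.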
At Recast, our models typically measure the impact of marketing spend over a 120-day window.
That’s our horizon: spend a dollar today, and we can confidently track its impact for four months. Beyond that, the signal decays, and it becomes much harder to separate the effect of a brand campaign from other factors.
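As a rough illustration of why the signal fades, here’s a toy geometric adstock curve. The decay rate is a made-up assumption for the sketch, not Recast’s actual parameterization:

```python
import numpy as np

# Hypothetical geometric adstock: each day retains `decay` of the
# previous day's carried-over effect. The rate below is illustrative.
decay = 0.96
days = np.arange(365)
effect = decay ** days          # effect of $1 spent on day 0
effect /= effect.sum()          # normalize to a share of total impact

share_within_120 = effect[:120].sum()
print(f"Share of total effect landing within 120 days: {share_within_120:.1%}")
```

Under a decay like this, the overwhelming majority of the measurable effect lands inside the first four months, and what’s left afterward is too thin to estimate reliably.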
The other type is long-term brand effects, and these are a lot more difficult to measure.
The idea behind them is that your ad spend has built up mental availability across a large swath of the population, and that means people are more likely to purchase from your brand six, twelve months, or even years from now.
Once you get beyond three to four months, the effects tend to be heavily attenuated, so they’re difficult to estimate precisely and robustly with the model.
Challenge 2: Did the awareness bump drive more revenue?
Brand metrics are noisy and low frequency. Unlike daily sales data, brand awareness data typically comes from weekly surveys – and even those are imprecise.
So, even if your TV campaign moved the needle, it’s tough to prove statistically that your spend caused the measured shift in awareness.
But we’ve got a workaround.
The approach we use here is incorporating context variables – like brand awareness, consideration, or price sensitivity. These don’t directly measure the long-term effects of marketing, but they capture how brand strength shows up in performance.
Instead of asking, “Did channel X cause a lift in awareness?” we ask: “When awareness increases, how does that affect overall marketing performance?”
For example, as brand awareness grows, we often see both baseline sales and media effectiveness improve. Even if we can’t say exactly what caused the awareness to rise, we can model what that rise means for business outcomes. If unaided awareness jumps 5 points, maybe all your paid media becomes 5% more efficient. That’s measurable.
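That “awareness as an effect modifier” idea can be sketched in a few lines. The baseline ROI and uplift-per-point numbers are hypothetical, and Recast’s actual model is Bayesian and far more detailed:

```python
# Hypothetical effect modifier: brand awareness scales media effectiveness.
base_roi = 2.0            # assumed revenue per $1 of spend at baseline awareness
uplift_per_point = 0.01   # assumed +1% efficiency per point of unaided awareness

def media_roi(awareness_points_above_baseline: float) -> float:
    """Media ROI after adjusting for a shift in unaided awareness."""
    return base_roi * (1 + uplift_per_point * awareness_points_above_baseline)

print(f"{media_roi(0):.2f}")  # baseline efficiency
print(f"{media_roi(5):.2f}")  # a 5-point awareness jump -> 5% more efficient media
```

The point is that the model estimates the relationship between the context variable and performance, rather than trying to attribute the awareness change itself to a specific channel.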
This lets us capture the essence of brand effects even when tracing the full chain from spend → awareness → conversion isn’t possible.
So no, we don’t promise full attribution from brand spend to revenue. But we can show how brand strength amplifies your marketing system so you can make better decisions.
Challenge 3: Do long-term effects break weekly MMM updates?
You might have heard this:
“Because marketing has long carry-over, you shouldn’t refresh your MMM more than once a quarter.”
That’s just not true.
Every MMM has a “last date” – the most recent point in time the model sees. And yes, some marketing effects from that date will spill into the future. But that’s true whether you update the model weekly or once a year. You can’t avoid it.
At Recast, we’ve solved this by explicitly modeling future outcomes. Our Bayesian models exclude the last 60 days from the likelihood function, which means those data points inform future predictions, but don’t mislead the model about the timing of effects.
This avoids overfitting recent activity and gives our clients more accurate, stable results.
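A stripped-down illustration of the idea: fit on everything except the most recent 60 days, then use those excluded days only to check predictions. This sketch uses a plain least-squares fit on simulated data, not Recast’s Bayesian likelihood:

```python
import numpy as np

# Simulate a year of daily spend and revenue with a known ROI of 1.8.
rng = np.random.default_rng(0)
n_days = 365
spend = rng.uniform(0, 100, n_days)
revenue = 50 + 1.8 * spend + rng.normal(0, 5, n_days)

# Exclude the most recent 60 days from the fit, so in-flight
# carry-over can't mislead the model about the timing of effects.
holdout = 60
slope, intercept = np.polyfit(spend[:-holdout], revenue[:-holdout], 1)

# The excluded recent days still inform forward-looking checks.
pred = intercept + slope * spend[-holdout:]
print(f"Estimated ROI from the mature window: {slope:.2f}")
```

The fit recovers the true ROI from the mature data, while the recent window is reserved for prediction rather than estimation.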
In conclusion: What MMM can (and can’t) do for brand measurement
MMM can’t isolate the impact of individual creatives. Or tie a billboard to a 3-point lift in consideration. The signal just isn’t strong enough.
What it can do is measure how shifts in brand strength – captured through proxies like awareness or price sensitivity – affect system-wide outcomes. When awareness rises, do your paid channels work better? That’s answerable.
It can also tell you when it’s time to test. If the model shows unusual efficiency in a channel that digital attribution ignores, maybe it’s time for a holdout test. If your brand metrics improve but your sales don’t follow, maybe it’s time to dig deeper.
While some claim to have solved long-term brand measurement, the reality is that it’s an area that needs a lot more research and development (and we’re actively working on it). We’d rather be transparent about what MMM can and can’t do, so you can actually trust the model.