If your model is showing you one of the following scenarios, you’re probably dealing with an identifiability issue:
- ROAS estimates swing wildly with small modeling changes – like priors, lag windows, or carryover assumptions.
- Credible intervals for channel-level ROI are wide and overlapping, and everything “looks the same.”
- Fit to total sales looks fine, but the channel attribution keeps changing.
- Re-fitting the model on a slightly different date range flips which channel appears to be driving lift.
Identifiability answers a simple question: Given the data you have, is there only one realistically plausible explanation for how that data was generated?
If the answer is no – if many different stories can explain the same sales pattern – then your model is unidentifiable. It might still return results, but they just won’t be actionable.
Ironically, this is not a technical bug. Your model is doing the right thing. It’s just telling you: “I can’t tell who did it.”
The analogy we tend to use is a crime scene with ten suspects, where the only evidence is that “they were all in the building.” You don’t need more of that evidence – you need separating evidence, data that lets you rule out competing explanations.
For marketing leaders, this matters because your decisions are causal. You’re not asking “what happened?” You’re asking: “If I add $100K to Meta next month, how much revenue will that cause?” And if your model is unidentifiable, it can’t answer that.
Multicollinearity: The Identifiability Killer Hiding in Your Time Series
For marketers running MMMs, identifiability issues almost always show up as multicollinearity.
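A quick way to check whether your own data has this problem is to look at how correlated the channel spend series are before you fit anything. Below is a minimal sketch in Python, assuming a pandas DataFrame named `spend` with one column of weekly spend per channel (the channel names and numbers here are made up); pairwise correlations near 1.0, or variance inflation factors above the usual rule of thumb of roughly 5–10, are the warning sign.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical weekly spend per channel; replace with your own series.
spend = pd.DataFrame({
    "meta":    [10.0, 12, 11, 15, 18, 20, 22, 25],
    "youtube": [ 9.0, 11, 10, 14, 17, 19, 21, 24],
    "search":  [30.0, 29, 31, 30, 28, 32, 31, 30],
})

# Pairwise correlations: values close to 1.0 mean the channels move together.
print(spend.corr().round(2))

# Variance inflation factors, computed with an intercept (column 0 of X),
# which is why the loop starts at 1. A common rule of thumb flags VIF > 5-10.
X = sm.add_constant(spend).to_numpy()
vif = pd.Series(
    [variance_inflation_factor(X, i) for i in range(1, X.shape[1])],
    index=spend.columns,
)
print(vif.round(1))
```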
Multicollinearity is one of the hardest problems in marketing analytics. Say you have two channels in your model – Meta and YouTube. You’re launching a new product, so you increase both budgets by 20%. Sales go up. Great. But which channel drove it?
The model can’t tell you. Both channels moved together, so there’s no way to separate their effects. The model might say Meta drove all the lift. Or YouTube. Or both equally. All those stories are statistically valid given the data.
In practical terms, multicollinearity makes you blind: the range of plausible incrementality for each channel becomes too wide to support confident budget decisions.
Again, this isn’t the model being broken. If the underlying data doesn’t separate channel effects, no statistician can fix that.
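To make this concrete, here is a toy simulation (made-up numbers, plain linear regression, no adstock or saturation, so it is only a sketch of the mechanism). Two channels get nearly identical spend paths, sales are generated from known true effects, and the regression fits the total series well while being unable to pin down the individual contributions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_weeks = 104

# Two channels whose budgets were scaled up together over two years.
base = np.linspace(50, 150, n_weeks)
meta = base + rng.normal(0, 2, n_weeks)
youtube = base + rng.normal(0, 2, n_weeks)

# Known "true" incremental effects the model should recover.
true_meta, true_youtube = 3.0, 1.0
sales = 1000 + true_meta * meta + true_youtube * youtube + rng.normal(0, 30, n_weeks)

X = sm.add_constant(np.column_stack([meta, youtube]))
fit = sm.OLS(sales, X).fit()

print(f"R^2 on total sales: {fit.rsquared:.3f}")      # fit to the total looks fine
print("coefficients:", fit.params[1:].round(2))        # per-channel estimates are noisy
print("95% intervals:", fit.conf_int()[1:].round(2))   # wide relative to the true effects

# Re-fit on a slightly shorter window: the attribution can swing noticeably.
fit_short = sm.OLS(sales[:80], X[:80]).fit()
print("coefficients, first 80 weeks:", fit_short.params[1:].round(2))
```

The total-sales fit is strong, but the per-channel coefficients come back with intervals wide enough to cover very different budget decisions – exactly the symptom list at the top of this piece.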
So… how do you fix it?
Creating Identifiability: How to Give Your Model Something to Learn From
You need to deliberately break the patterns so the model has something to learn from.
Here are four ways this can look in practice (a small simulation after the list shows the difference it makes):
- Staggered budget moves: Don’t raise the budget for all your channels at once. Instead, you could up Meta spend in, say, April, while holding YouTube flat for the same period. Then, reverse it in May.
- Pulses and pauses: Run short on/off or high/low bursts in individual channels – especially powerful when paired with geo splits.
- Out-of-phase timing: Avoid coordinated peaks. Run your YouTube push in weeks 1–2, then lean into Search in weeks 3–4.
- Geo experimentation: Design regional tests with control and treatment groups. Even small differentials create big learning opportunities.
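To see why this works, here is the same toy simulation as above (same caveats: made-up numbers, no adstock or saturation), changed only in the spend pattern: the two channels now take turns getting the extra budget in alternating four-week blocks.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_weeks = 104

# Same overall budget trajectory, but the channels take turns:
# Meta gets the extra budget in one 4-week block, YouTube in the next.
base = np.linspace(50, 150, n_weeks)
block = (np.arange(n_weeks) // 4) % 2
meta = base * np.where(block == 0, 1.3, 0.7) + rng.normal(0, 2, n_weeks)
youtube = base * np.where(block == 0, 0.7, 1.3) + rng.normal(0, 2, n_weeks)

# Same true effects and the same noise as before; only the spend pattern changed.
true_meta, true_youtube = 3.0, 1.0
sales = 1000 + true_meta * meta + true_youtube * youtube + rng.normal(0, 30, n_weeks)

X = sm.add_constant(np.column_stack([meta, youtube]))
fit = sm.OLS(sales, X).fit()

print("coefficients:", fit.params[1:].round(2))        # close to the true 3.0 and 1.0
print("95% intervals:", fit.conf_int()[1:].round(2))   # much tighter, clearly separated
```

The point is not the specific numbers. The same model, the same noise, and the same number of weeks go from “can’t tell” to “clearly separated” purely because the spend pattern changed.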
This takes coordination. Each channel manager wants to optimize independently – that’s what creates the problem in the first place. But accepting short-term friction is the price of getting answers you can actually use from your model.
TLDR:
- Identifiability determines whether your model can tell which channel drove results or whether it’s just guessing between equally plausible stories.
- Multicollinearity is the most common identifiability killer in MMM, especially when channels scale together over time.
- You don’t need more data – you need better variation: staggered budget shifts, channel pulses, geo tests, and clean exogenous controls.
- If your data doesn’t create separation, no model can deliver causal answers.



