Every marketer wants to know what’s working. But in the world of time series analysis, one of the most misleading tools gets used the most: Granger causality.
It sounds like a test for cause and effect – but it’s not.
This article unpacks what Granger causality actually measures, why it’s so often misunderstood, and how to avoid mistaking predictability for impact.
What Granger Causality Actually Measures (and Why It’s Misleading)
Granger causality is a time series technique that asks one very specific question: if I know something about variable X today, does that help me predict variable Y tomorrow?
That’s it. It doesn’t ask whether X caused Y. It just checks whether past values of X help forecast future values of Y. In practice, that makes it a useful — but limited — tool in forecasting.
And yet, the name is misleading. It sounds like a causal test, and it feels like one too — especially because it’s structured around time. Since causality flows forward, it’s easy to assume that if X predicts Y, then X must be causing Y.
But that logic breaks down quickly.
Imagine a time series flagging holidays like Valentine’s Day (1 on the holiday, 0 otherwise) and another showing gift purchases. Because people shop ahead of holidays, the purchase data might “Granger-cause” the holiday series. That is, purchases predict holidays — but buying gifts obviously doesn’t cause Valentine’s Day.
The test doesn’t ask whether changing X will change Y — it asks whether X contains signal about Y’s future. That’s a big difference.
This misunderstanding gets dangerous when marketers start using Granger results to make budget decisions. Because the test doesn’t control for hidden variables or reverse causality, it’s easy to mistake correlation for causation and end up optimizing spend based on noise.
When Granger Causality Can Be Useful (and When It Can’t)
Granger causality can still be useful — if you treat it as a forecasting tool, not a causal one.
Say you’re building a near-term sales forecast and want to know which signals are leading indicators. A Granger test might show that Facebook spend leads next-week revenue, or that organic clicks predict branded search volume. That’s helpful for planning. You can stack-rank predictors and get ahead of short-term shifts.
But that’s where its value ends.
If you try to use it for budget decisions — like “Granger says Google spend causes sales, so let’s double it” — you’ll run into trouble. You haven’t ruled out reverse causality, shared confounders, or spurious correlation.
Want to know whether Meta drives incremental sales? Run a lift test. Want to understand how a channel works over time? Use a causal model with lag structure and priors. Want to predict what happens if you shift dollars from YouTube to TV? You need an MMM with a generative structure and out-of-sample accuracy checks.
Granger can tell you what might happen next. It can’t tell you what to do. Use it accordingly.
Recap: What Granger Causality Really Tells You 🔁
- Granger causality checks predictability, not causality
- It asks: does X help forecast Y? — not: does X cause Y?
- In marketing, this can lead to misattributed budget decisions
- Use it only as a forecasting tool, never for causal inference
- For true incrementality, rely on lift tests or MMM