We’ve seen teams think their TikTok tests had failed… until the model showed those campaigns were driving lift on Amazon and retail.
Or a team that assumed linear TV was too expensive… until they tracked its downstream impact on branded search.
Why does this happen? How was TikTok doing its job everywhere except the dashboard it was being judged in? Why wasn't TV converting in its own channel report while it was simultaneously making lower-funnel activity cheaper and more scalable?
Marketing doesn’t operate in silos, but measurement usually does. And that’s how good campaigns get cut. Most reporting systems are built to grade channels in isolation. They’re great at telling you what’s easy to see and terrible at showing where demand is actually being created. They’re systematically biased toward whatever can claim credit cleanly.
The biggest challenge is that when you cut the channel, nothing really happens, at least not immediately. No KPI turns red and screams that you've cut something you shouldn't have. The effects show up as a slow erosion of demand; by the time you notice it, the cut happened so long ago that you may not even connect the two.
If you don’t understand the second-order effects of a channel, it’s basically impossible to scale it down with confidence. Let’s break that down.
Why it happens: halo effects + last-touch bias create a systematic budget distortion
This shows up in almost every omnichannel brand once you look for it.
First: halo effects are real. A halo effect is when one marketing channel drives results across multiple distribution channels. A customer might see an ad, browse your DTC site, then buy later on Amazon or in a physical store. In an omnichannel world, that spillover is the norm.
Because most teams judge channels inside channel-level views, they miss the spillover. Upper-funnel spend gets cut because it underperforms on last-touch metrics or looks weak in-platform. Simultaneously, lower-funnel channels get overfunded (paid search, retargeting, retail media) because they sit closest to conversion and routinely overstate their impact.
The reframe here is that per-channel ROAS is not a business outcome. It's a slice of the journey, and a slice biased toward whatever sits closest to checkout. If you optimize for that, you'll starve the channels that create demand.
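To make that concrete, here's a toy illustration (every number below is hypothetical): a channel that looks unprofitable on in-platform ROAS can still be strongly incremental once you count the demand it creates on other endpoints.

```python
# Toy illustration only -- every number here is hypothetical.
# One upper-funnel channel, judged two different ways.

spend = 100_000  # monthly spend on the upper-funnel channel

# What the in-platform dashboard sees (last-touch, DTC only)
in_platform_revenue = 70_000
in_platform_roas = in_platform_revenue / spend  # 0.7 -> "cut it"

# What an incrementality read across endpoints sees
incremental_dtc = 60_000
incremental_amazon = 90_000   # halo: demand created here, converted elsewhere
incremental_retail = 50_000
blended_incremental_roas = (
    incremental_dtc + incremental_amazon + incremental_retail
) / spend  # 2.0 -> "scale it"

print(f"in-platform ROAS: {in_platform_roas:.1f}")
print(f"blended incremental ROAS: {blended_incremental_roas:.1f}")
```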
This is the set of questions senior leaders should be asking before a “rational” cut becomes a demand problem:
- Are we cutting demand creation because we can’t measure it cleanly?
- Which channels are actually incremental across DTC + Amazon/3P + retail/in-store?
- Where are we double-counting impact because each dashboard tells a different story?
How to prevent it: incrementality + interaction discipline
If siloed measurement is how you accidentally cut demand creation, the fix is to focus less on per-channel ROAS and more on total business outcomes. You don’t need every line item to “shine on its own” if the portfolio is driving incremental growth across endpoints.
Practically, two methods do most of the work when used together:
- Lift testing (often geo tests vs. matched controls) isolates causal impact in a narrow window. It's one of the cleanest ways to answer the question last-touch can't: did this upper-funnel activity drive incremental conversions in places we can't track well? (A minimal geo-lift read-out is sketched below.)
It's especially useful when you suspect spillover, for example awareness campaigns that don't "convert" in-platform but could be driving incremental lift on Amazon or in-store.
- MMM provides the broader, top-down view. It uses historical data to estimate incrementality by channel while factoring in promotions, seasonality, and other external forces. Crucially for omnichannel brands, MMM can also estimate interaction effects, for example linear TV expanding how much you can profitably spend on brand search.
But this only works if you're disciplined about interactions. Don't try to model every interaction across every channel; that creates too many parameters, too little signal, and an unstable model you can't trust. Instead, focus on well-studied interactions: top-of-funnel channels driving bottom-of-funnel activity (TV → brand search, awareness → assisted conversion). A sketch of what a single, defensible interaction looks like follows below.
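To make the geo-test idea concrete, here's a minimal sketch of the read-out step: a simple difference-in-differences on test vs. matched control geos, using endpoint-level sales (DTC + Amazon + retail) rather than in-platform conversions. The file and column names are hypothetical, and a real geo test also needs careful market matching and significance testing, which this skips.

```python
import pandas as pd

# Hypothetical input: one row per (geo, week) with total sales across
# endpoints (DTC + Amazon + retail). is_test / is_post are 0/1 flags.
df = pd.read_csv("geo_test_sales.csv")  # columns: geo, week, sales, is_test, is_post

# Mean weekly sales in each cell of the 2x2 (test/control x pre/post)
cell = df.groupby(["is_test", "is_post"])["sales"].mean()

# Difference-in-differences: change in test geos minus change in control geos.
# This nets out seasonality and shocks that hit both groups equally.
test_delta = cell.loc[(1, 1)] - cell.loc[(1, 0)]
control_delta = cell.loc[(0, 1)] - cell.loc[(0, 0)]
lift_per_geo_week = test_delta - control_delta

print(f"Estimated incremental sales per geo-week: {lift_per_geo_week:,.0f}")
```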
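And here's a deliberately small, MMM-style sketch of what "one well-studied interaction" means in practice: a regression on weekly data with an adstocked TV term, a brand-search term, and a single TV × brand-search interaction, rather than every pairwise combination. All variable names are hypothetical, and a production MMM would add seasonality, promotions, saturation curves, and priors; the point is only that the interaction enters as one extra, defensible term.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical weekly dataset: total sales across endpoints plus channel spend.
df = pd.read_csv("weekly_marketing.csv")  # columns: sales, tv_spend, brand_search_spend, ...

def adstock(x, decay=0.5):
    """Simple geometric adstock: this week's spend carries over into future weeks."""
    out = np.zeros(len(x))
    carry = 0.0
    for i, v in enumerate(x):
        carry = v + decay * carry
        out[i] = carry
    return out

df["tv_adstock"] = adstock(df["tv_spend"].to_numpy())

# One deliberate interaction: TV making brand search work harder,
# instead of modeling every pairwise channel interaction.
df["tv_x_brand_search"] = df["tv_adstock"] * df["brand_search_spend"]

X = sm.add_constant(df[["tv_adstock", "brand_search_spend", "tv_x_brand_search"]])
model = sm.OLS(df["sales"], X).fit()
print(model.summary())
```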
A couple of final nuances worth mentioning:
As we said, halo effects are real. But they shouldn't be an excuse for bad measurement. If the only argument for keeping spend is "we believe it has a halo effect," that's not enough.
Also, MMM is top-down. Be skeptical of vendors claiming creative-level lift; often it's arithmetic rather than modeling, or their own assumptions dressed up as results.
TLDR:
- Siloed measurement gets your demand-creation channels cut, and you don’t notice until sales soften somewhere else (Amazon, retail, branded search).
- Last-touch and in-platform ROAS systematically over-credit what’s closest to conversion and under-credit what creates demand upstream.
- The fix is an incrementality-first portfolio approach: use lift tests + MMM to measure cross-endpoint impact and a small set of defensible interactions.



