What to Do When Your Marketing Experiment Isn’t Statistically Significant

If you’re treating p < 0.05 as the litmus test for your marketing experiments, it’s going to be really hard to build a solid measurement program.

Many marketers still blindly follow this rule:

  • p < 0.05 = success
  • p ≥ 0.05 = failure

But a p-value doesn’t tell you whether your campaign worked, how big the effect was, or whether it was worth the investment. It only tells you how surprising the result would be if there were no effect at all — that is, if the “null hypothesis” were true.

That is not the same as understanding business impact.
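
To make this concrete, here's a minimal sketch (with made-up numbers) of the kind of test that produces a p-value in the first place. Note that the number it returns measures surprise under "no effect" – not effect size, and not payback:

```python
# Two-proportion z-test for a conversion experiment (made-up numbers).
import math
from scipy.stats import norm

n_control, conv_control = 10_000, 500   # 5.0% conversion rate
n_variant, conv_variant = 10_000, 540   # 5.4% conversion rate

p_c = conv_control / n_control
p_v = conv_variant / n_variant
p_pool = (conv_control + conv_variant) / (n_control + n_variant)

# Standard error of the difference, assuming the null (no effect) is true
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_variant))
z = (p_v - p_c) / se
p_value = 2 * norm.sf(abs(z))  # two-sided

# The p-value only says how surprising this gap is if nothing is going on.
# It says nothing about how big the lift is or whether it pays for itself.
print(f"observed lift: {p_v - p_c:.2%}, p-value: {p_value:.3f}")
```

Run it and you get a 0.4-point lift with p ≈ 0.2 – "not significant," yet a lift that size might still be worth real money at scale.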

And yes, we understand that this mindset comes from classic A/B testing in product analytics: does variant A beat variant B? Yes or no?

But when you’re evaluating media performance, that… just doesn’t work. Almost all marketing has some impact. The real question isn’t “did it work?” – it’s “is this the best place to put our next dollar?”

Using p-values as a go/no-go gate creates real problems: teams run a test, don’t hit statistical significance, and just throw that away without learning anything. “No lift. No further action.” 

That’s such a huge missed opportunity! 

Even when a test isn’t conclusive, it can still tell you a lot about what’s likely, what’s ruled out, and what might be worth testing next.

In this article, we’ll unpack how to interpret inconclusive or non-significant tests, how to use directional ranges and ROI intervals to inform decisions, and how to reframe what a “good test” looks like when you care about budget efficiency.

How to interpret noisy or inconclusive test results

Instead of asking, “Was the result statistically significant?”, the better question is: “What range of outcomes is still compatible with the data, and what does that tell us?”

Let’s go through an example:

Say you pause a branded search campaign in 10% of markets and run a holdout test. At the end of the period, your results show:

  • Estimated incremental CPA: $400
  • Confidence interval: $250 to infinity

You didn’t hit statistical significance. But you did learn something useful. The data is not compatible with any CPA lower than $250, so you know branded search isn’t a low-cost channel for you (right now). 

No, you didn’t get a precise estimate, but the test rules out the most optimistic assumptions and helps you recalibrate your model.
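
If the “to infinity” bound looks strange, here's roughly where it comes from. Incremental CPA is spend divided by incremental conversions, and when the interval around incremental conversions includes zero, the upper bound on CPA is unbounded. A minimal sketch with hypothetical numbers (not the exact figures above):

```python
# Why a CPA interval can run to infinity (hypothetical numbers).
spend = 100_000      # spend at stake in the held-out markets, in dollars
incr_conv = 250      # estimated incremental conversions from the holdout
se_conv = 130        # standard error of that estimate (illustrative)

z = 1.96             # ~95% normal interval
conv_low = incr_conv - z * se_conv   # here: below zero
conv_high = incr_conv + z * se_conv

cpa_point = spend / incr_conv        # $400 per incremental conversion
cpa_best = spend / conv_high         # most optimistic CPA still compatible
# If zero incremental conversions can't be ruled out, worst-case CPA is infinite
cpa_worst = spend / conv_low if conv_low > 0 else float("inf")

upper = "infinity" if conv_low <= 0 else f"${cpa_worst:.0f}"
print(f"CPA estimate ${cpa_point:.0f}, ~95% interval ${cpa_best:.0f} to {upper}")
```

The lower bound is the real finding: no matter how the noise shakes out, cheap incremental conversions are not on the table.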

Here’s another one:

  • Estimated incremental ROI: 1.7x
  • Confidence interval: 0x to 2.5x

Here, you’ve bounded the upside. If your CFO needs 3x ROI for a channel to be viable, you can’t justify more budget here (again, right now). The result isn’t statistically significant, but it’s still actionable: deprioritize the channel, or rerun the test at a higher scale to tighten the interval.

These kinds of directional insights are especially powerful in ambiguous or politically sensitive decisions where you don’t really need a “yes or no” answer. You just need a bit more clarity.

Inconclusive ≠ inconsequential.

The Problem with Midpoint ROI Estimates

We also get that midpoint estimates are seductive. A 4.5x ROI sounds great on paper – but it’s just one possible outcome in a range of possibilities. So why do marketers over-anchor on lift or point estimates? 

Because they’ve been trained on dashboards that spit out clean, precise numbers: “ROI = 4.5x.” It feels definitive. But it’s not. Without knowing the spread around that number, you’re flying (confidently) blind.

Take a test that estimates 4.5x ROI with a confidence interval from 1x to 7x.

That might sound good – but if your minimum threshold for profitability is 3x, a meaningful share of the plausible outcomes falls below your bar.
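
How much is at risk? Here's a back-of-the-envelope check, assuming (purely as an approximation) a normal distribution matched to the width of the reported interval:

```python
# Back-of-the-envelope: chance the true ROI misses the bar, assuming a
# normal distribution fitted to the reported 95% interval. (Crude: the
# interval isn't symmetric around 4.5, and real ROI tails are often fatter.)
from scipy.stats import norm

roi_estimate = 4.5
ci_low, ci_high = 1.0, 7.0
threshold = 3.0                        # minimum viable ROI

se = (ci_high - ci_low) / (2 * 1.96)   # rough SE from the interval width
p_miss = norm.cdf(threshold, loc=roi_estimate, scale=se)
print(f"P(true ROI < {threshold}x) ≈ {p_miss:.0%}")   # about 1 in 6 here
```

Even under this friendly approximation, roughly one outcome in six lands below the bar.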

Wide intervals, especially fat-tailed ones, are a red flag. They are not telling you “go scale this now.” They are saying “go run a better test.”

The sharpest marketers we work with don’t chase that midpoint. They make decisions based on the full distribution – the upside, the downside, and everything in between.

How to make smart decisions under uncertainty

In marketing, you will never get perfect certainty. And your job isn’t to eliminate uncertainty either – it’s to manage it.

Even individual-level randomized controlled trials – the gold standard in clinical research – come with sampling error and confidence intervals. Geographic holdout tests come with noise from market variation and spillover effects. Marketing mix modeling (MMM) and synthetic controls rest on modeling assumptions.

And still, you can make high-quality decisions under all of these conditions – if you use the right framework. Some of the things we believe can help marketers manage uncertainty better:

  • Quantify the full ROI distribution. Don’t just look at the midpoint. Ask: “what’s the worst-case ROI we can’t rule out?” If the lower bound is below your cost of capital, at least proceed with caution.
  • Frame decisions as bets. Instead of “Did it work?”, ask “Given this ROI range, would I invest $X here?”
  • Compare across options. Run ROI intervals for multiple channels. If one has a range of 2x–5x and another has 0.5x–3x, the choice is obvious – even if neither is statistically significant (see the sketch after this list).
  • Stack evidence across time. Combine test results with model output (MMM), platform metrics, and prior tests. Even imperfect evidence can give you directional signals.
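
Here's that cross-channel comparison as a sketch (hypothetical channels and hurdle rate, same rough normal approximation as above):

```python
# Comparing channels on their full ROI ranges (hypothetical numbers).
from scipy.stats import norm

hurdle = 1.5  # minimum acceptable ROI for this budget line (illustrative)

channels = {
    # name: (point estimate, ~95% interval low, ~95% interval high)
    "channel_a": (3.5, 2.0, 5.0),
    "channel_b": (1.75, 0.5, 3.0),
}

for name, (est, low, high) in channels.items():
    se = (high - low) / (2 * 1.96)                 # rough SE from the width
    p_clear = norm.sf(hurdle, loc=est, scale=se)   # P(true ROI > hurdle)
    print(f"{name}: {low}x-{high}x, P(ROI > {hurdle}x) ≈ {p_clear:.0%}")
```

Neither interval needs to be “significant” for the ranking to be clear: channel_a clears the hurdle in almost every plausible scenario, channel_b in only about two-thirds of them.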

TLDR:

  • Statistical significance isn’t a final verdict. A p-value below 0.05 doesn’t tell you if a campaign worked or if the ROI is worthwhile – it just means the result is unlikely under the assumption of no effect.
  • Inconclusive results still carry direction. Even when a test isn’t statistically significant, you can use confidence intervals to rule out implausible outcomes and deprioritize channels that are below your ROI threshold.
  • Anchoring on a single lift or ROI number hides the true risk; work from the full range of outcomes to see if you should scale (or stop) with confidence.
  • Uncertainty is part of marketing and we can’t avoid it. We can reduce it and manage it, but the goal isn’t perfection – it’s to be less wrong.
