In marketing, every forecast comes with some level of uncertainty – whether you’re predicting sales growth, estimating the impact of a new campaign, or adjusting your media mix.
There’s no crystal ball, and too often marketers fall into the trap of treating forecasts as absolutes when, in fact, they represent a range of possible outcomes.
Human nature, unfortunately, doesn’t handle uncertainty well. We tend to fixate on single-point estimates and struggle to incorporate probability distributions into our decision-making. Behavioral research, most famously Kahneman and Tversky’s work on prospect theory, shows that people systematically misweigh probabilities: we overweight small chances and underweight large ones.
Carried over into the marketing world, this bias can seriously undermine your ability to make informed decisions.
This article covers how to understand and navigate uncertainty better, especially around media mix modeling (MMM).
1. The Importance of Uncertainty in Forecasts
Good forecasts should come with uncertainty estimates. This might seem counterintuitive to those who want definitive answers, but a prediction that comes without an uncertainty estimate is inherently unreliable.
Consider weather forecasts. It’s more useful to know there’s a 40% chance of rain tomorrow than a definitive yes or no. The forecast may frustrate you if you’re planning a beach trip, but over time, you can evaluate whether it rains 40% of the time when such a forecast is given.
This idea of calibration—ensuring the forecast reflects the true uncertainty—helps decision-makers set realistic expectations and evaluate long-term accuracy.
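To make the calibration check concrete, here is a minimal sketch in Python, using made-up forecast data rather than any real weather record: group past forecasts by their stated probability and compare that probability with how often the event actually occurred.

```python
import numpy as np

# Hypothetical history of daily rain forecasts (stated probabilities)
# and what actually happened (1 = it rained, 0 = it didn't).
forecasts = np.array([0.4, 0.4, 0.4, 0.7, 0.7, 0.1, 0.4, 0.7, 0.1, 0.4])
outcomes = np.array([0, 1, 0, 1, 1, 0, 1, 0, 0, 0])

# A forecaster is well calibrated if, among all the days it said "40%",
# it actually rained about 40% of the time.
for p in np.unique(forecasts):
    days = forecasts == p
    print(f"forecast {p:.0%}: rained {outcomes[days].mean():.0%} "
          f"of the time ({days.sum()} days)")
```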
Similarly, in marketing, it’s important to communicate uncertainty in forecasts clearly.
At Recast, forecasts include confidence intervals or ranges that account for possible outcomes.
If the model predicts a 10% uplift in sales, it might also estimate that the actual uplift could be anywhere between 5% and 15%. This range is critical to understanding how certain—or uncertain—the forecast is.
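As a rough illustration of where such a range comes from, here is a sketch using simulated posterior draws; the numbers are invented for the example and are not output from Recast’s actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for posterior draws of the uplift from a Bayesian model:
# 10,000 simulated samples centred on a 10% uplift.
uplift_samples = rng.normal(loc=0.10, scale=0.025, size=10_000)

point_estimate = uplift_samples.mean()
lower, upper = np.percentile(uplift_samples, [5, 95])  # 90% credible interval

print(f"Predicted uplift: {point_estimate:.1%} "
      f"(90% credible interval: {lower:.1%} to {upper:.1%})")
# Prints roughly: Predicted uplift: 10.0% (90% credible interval: 5.9% to 14.1%)
```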
2. Ignore Uncertainty At Your Own Risk
The tendency to ignore uncertainty is especially common in A/B testing, where companies focus on the mean result without considering the range of possible outcomes.
For example, imagine running an experiment that shows a 5% lift in sales with statistical significance. Excited by the result, you bake that 5% lift into your next-quarter forecast, expecting a noticeable bump in performance.
But what happens if that lift doesn’t materialize?
In reality, the 5% lift could have been part of a much wider range, such as 0.5% to 9%.
Always look at the full uncertainty interval first and check the mean last. This pushes you toward more conservative planning: if the estimated uplift from a campaign ranges from 1% to 10%, the lower end is a genuine possibility, and you should have a plan for it.
Don’t let a headline point estimate make you overconfident or set unrealistic expectations.
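To see how wide the interval around a “significant” lift can be, here is a sketch with hypothetical A/B test counts, using a standard delta-method interval for the relative lift (not any particular testing tool’s method):

```python
import numpy as np
from scipy import stats

# Hypothetical A/B test: conversions out of visitors in each arm.
control_conv, control_n = 8_000, 400_000   # 2.00% conversion rate
treat_conv, treat_n = 8_400, 400_000       # 2.10% -> ~5% relative lift

p_c, p_t = control_conv / control_n, treat_conv / treat_n
lift = p_t / p_c - 1  # relative lift point estimate

# Delta-method standard error for the log of the rate ratio,
# then a 95% confidence interval on the relative lift.
se_log_ratio = np.sqrt((1 - p_t) / treat_conv + (1 - p_c) / control_conv)
z = stats.norm.ppf(0.975)
lower = np.exp(np.log(p_t / p_c) - z * se_log_ratio) - 1
upper = np.exp(np.log(p_t / p_c) + z * se_log_ratio) - 1

print(f"lift: {lift:.1%}, 95% CI: {lower:.1%} to {upper:.1%}")
# Prints roughly: lift: 5.0%, 95% CI: 1.9% to 8.2%
```

Even with 400,000 visitors per arm, the “significant” 5% lift in this toy example is consistent with anything from about 2% to about 8%.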
3. Handling Incrementality Testing and ROI Calculations
Uncertainty also plays a crucial role in incrementality testing. In these tests, businesses are often thrilled to report results like a 3% lift in performance, but they frequently overlook a critical component: how much was spent to achieve that lift.
Incrementality studies need to go beyond lift percentages and include return on investment (ROI) or cost per acquisition (CPA) numbers, along with uncertainty intervals.
For example, an incrementality test might show an incremental CPA of $75, but the uncertainty intervals might range from $25 to $750. This range is crucial because it tells you how reliable the estimate is.
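A quick simulated sketch shows why these CPA intervals get so wide and lopsided: dividing a fixed spend by an uncertain number of incremental conversions stretches the high end of the interval. The figures below are invented for illustration, not taken from a real test.

```python
import numpy as np

rng = np.random.default_rng(7)

spend = 75_000  # dollars spent during the test (hypothetical)

# Stand-in for the test's uncertainty about incremental conversions:
# a lognormal centred on 1,000 conversions keeps every draw positive.
incremental = rng.lognormal(mean=np.log(1_000), sigma=0.6, size=10_000)

# Uncertainty propagates through the division into the CPA estimate.
cpa_samples = spend / incremental

lower, upper = np.percentile(cpa_samples, [5, 95])
print(f"point-estimate CPA: ${spend / 1_000:.0f}, "
      f"90% interval: ${lower:.0f} to ${upper:.0f}")
# Prints roughly: point-estimate CPA: $75, 90% interval: $28 to $201
```

Note how the interval is asymmetric around the $75 point estimate: small incremental-conversion draws blow up the implied CPA.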
4. Why Marketers Should Demand Uncertainty Estimates
Every marketer should ask their MMM vendor if they can easily get uncertainty estimates in the form of confidence or credible intervals for the parameters estimated in the model. If the answer is no, that’s a red flag. Good models should communicate uncertainty, and any vendor that avoids providing uncertainty intervals is likely oversimplifying their model’s results.
Recast addresses this challenge by always presenting uncertainty estimates and credible intervals to help marketers understand the full range of possible outcomes. Decision-makers don’t always find it intuitive to account for uncertainty, so we encourage it by design: for example, we always present intervals alongside point estimates.
We also let marketers choose whether to plan aggressively or conservatively, a framing that reflects the underlying uncertainty without overwhelming the user.
Media mix modeling, when done correctly, provides not only answers but also questions. If the model reveals a high degree of uncertainty about the impact of a particular channel, that’s a signal to investigate further, run experiments, or gather more data.
Uncertainty should be a tool, not a limitation.