Why Marketing Forecasts Fail When Budgets Change

A brand we were working with missed its forecast by $10M. The immediate assumption was that Recast had made a bad forecast. But when we looked into what happened, the planned mix was $15M TV / $25M Facebook, while the executed mix was $5M TV / $50M Facebook.

That’s a fundamentally different marketing program, and it’s no surprise the landing point changed.

A forecast is a conditional statement. If you spend X in channel Y over a given period, you should land around Z. That “if” matters. The moment actual spend diverges, the forecast you’re working with is no longer the forecast you’re actually running.
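To make the "conditional statement" point concrete, here is a minimal sketch. The response curves and dollar figures below are illustrative placeholders (not output from any real MMM): each channel has diminishing returns, so moving dollars between channels changes the expected landing point even before performance enters the picture.

```python
import math

# Hypothetical diminishing-returns curves, one per channel (all numbers
# illustrative): contribution = ceiling * (1 - exp(-spend / scale)), spend in $M.
CURVES = {
    "tv":       {"ceiling": 60.0, "scale": 20.0},
    "facebook": {"ceiling": 40.0, "scale": 15.0},
}

def forecast(mix):
    """Conditional forecast: expected revenue ($M) *given* this spend mix."""
    return sum(
        c["ceiling"] * (1 - math.exp(-mix[ch] / c["scale"]))
        for ch, c in CURVES.items()
    )

planned  = {"tv": 15.0, "facebook": 25.0}   # the mix the forecast assumed
executed = {"tv": 5.0,  "facebook": 50.0}   # the mix actually run

# Same model, different inputs -> different landing point.
print(f"planned mix forecast:  ${forecast(planned):.1f}M")
print(f"executed mix forecast: ${forecast(executed):.1f}M")
```

Under these made-up curves the executed mix lands roughly $12M below the planned one, despite spending *more* in total: the extra Facebook dollars hit saturation while the TV cut gives up near-linear returns.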

In this article, we'll discuss forecast adherence and re-forecasting from both an operational and a cultural perspective, and share a tactical playbook for protecting the integrity of your forecasts.

Why forecasts miss: spend deviation vs. performance deviation

When forecasts miss, it tends to be one of two problems:

  1. Spend deviation (input change): Did we spend what we said we would spend, where we said we would spend it? If not, the first-order explanation is simply that the inputs changed.
  2. Response deviation (performance change): Holding spend constant, did the business respond differently than expected (pricing changes, promo intensity, competitive shocks, creative fatigue, etc.)?

We’ve found that leaders tend to treat adherence as something the team will “figure out” once the quarter starts. In reality, “can you follow the plan?” is the gating factor. The forecast is only as realistic as your ability to deploy dollars the way you said you would.

It’s very important that you ask the team: “Realistically, can you actually deploy these dollars? Do we have the operational capacity to launch into new channels? Will we have enough creative assets? And do we have the internal willingness to actually follow the budget?”

Let’s say a team commits to ramping upper-funnel (TV, radio, OOH) to diversify away from paid social. Week 2 arrives, and creative isn’t ready, legal is still reviewing claims, and the media buy didn’t lock. Meanwhile, this week’s demand targets still need to be hit. So spend moves to the channels that are easiest to deploy into – paid social and paid search.

Meta and Google end up getting the lion’s share of the budget just because… they’re easy. Even if they’re not the most incremental channels or what the MMM recommended, budget moves to them, and forecast adherence is lost.

To prevent this, senior teams should demand specifics before they sign off on a forecast:

  • Channel ramp plan: what must be true operationally (lead times, approvals, inventory) for each line item to happen on schedule.
  • Creative capacity plan tied to the spend curve: asset counts, formats, refresh cadence, and who is responsible for delivering them.
  • Adherence scorecard: planned vs. actual by week, with a written reason for every variance.
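The adherence scorecard is the easiest of the three to automate. A minimal sketch, where the field names, 10% tolerance, and dollar figures are all assumptions for illustration:

```python
# Minimal adherence scorecard: planned vs. actual spend by week and channel,
# flagging every variance beyond tolerance as needing a written reason.
# Field names, tolerance, and figures are illustrative, not a standard.
def adherence_scorecard(planned, actual, tolerance=0.10):
    """planned/actual: {(week, channel): spend in $M}. Flags |variance| > tolerance."""
    rows = []
    for key in sorted(planned):
        p = planned[key]
        a = actual.get(key, 0.0)
        variance = (a - p) / p if p else float("inf")
        rows.append({
            "week": key[0], "channel": key[1],
            "planned": p, "actual": a,
            "variance_pct": round(variance * 100, 1),
            "needs_reason": abs(variance) > tolerance,
        })
    return rows

planned = {(1, "tv"): 1.2, (1, "meta"): 2.0, (2, "tv"): 1.2, (2, "meta"): 2.0}
actual  = {(1, "tv"): 1.1, (1, "meta"): 2.1, (2, "tv"): 0.3, (2, "meta"): 3.4}
for row in adherence_scorecard(planned, actual):
    print(row)
```

In this toy example, week 1 tracks the plan within tolerance, while week 2 (TV down 75%, Meta up 70%) gets flagged – exactly the slippage pattern described above.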

If you can’t operationalize the plan, you just don’t have a forecast.

One-and-done forecasting is the second failure mode: re-forecasting is the job

Even teams that build a realistic plan still break forecasting because they treat it like a one-and-done exercise.

Three weeks into the quarter, the team has already deviated from the plan – TV didn’t ramp up, Meta got all the budget – but they keep using the original forecast anyway. The org ends up arguing about why the forecast missed, when the real issue is that the forecast wasn’t updated to reflect the program that’s actually running.

The fix here is operational – continuous re-forecasting. Every week, you update the landing estimate using (1) actual spend-to-date, (2) the remaining quarter budget, and (3) the current mix reality. If you’re changing the inputs, your expected outcome should change too.
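The weekly update can be sketched in a few lines. The return-per-dollar figures and mix shares below are illustrative assumptions (a real re-forecast would use your model's response curves), but the structure is the one described above: results booked to date, plus the expected yield of the remaining budget deployed in the *current* mix.

```python
# Weekly re-forecast sketch: landing estimate = revenue booked to date
# + expected yield on the remaining budget under the current mix.
# Return-per-dollar values are illustrative placeholders, not model output.
RETURN_PER_DOLLAR = {"tv": 1.8, "meta": 1.2, "search": 1.5}

def reforecast(revenue_to_date, remaining_budget, current_mix):
    """current_mix: {channel: share of remaining budget}; shares sum to 1. $M throughout."""
    assert abs(sum(current_mix.values()) - 1.0) < 1e-9
    expected_remaining = sum(
        remaining_budget * share * RETURN_PER_DOLLAR[ch]
        for ch, share in current_mix.items()
    )
    return revenue_to_date + expected_remaining

# Plan assumed 40% TV; three weeks in, spend has drifted to mostly Meta.
plan_landing    = reforecast(12.0, 30.0, {"tv": 0.4, "meta": 0.4, "search": 0.2})
current_landing = reforecast(12.0, 30.0, {"tv": 0.1, "meta": 0.7, "search": 0.2})
print(f"original plan landing: ${plan_landing:.1f}M")
print(f"re-forecast landing:   ${current_landing:.1f}M")
```

Running this each week surfaces the gap between the original landing point and the one implied by the mix you're actually running, while there's still time to course-correct.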

You can’t lock a plan on January 1st, shift the spend mix by February, skip weekly re-forecasting, and still expect that July 1st forecast to land. A weekly re-forecast would have surfaced the new landing point immediately and given you time to course-correct, reset expectations, or re-plan the remaining spend.

Now, let’s be clear: we’re not saying you have to keep the budget allocation static and never change it. That would be ideal, but it’s not realistic. You’re going to reallocate. A launch will slip. Creative will arrive late. The problem isn’t the change. The problem is not reacting to that change.

TLDR:

  • Most “bad forecasts” are really plan-vs-actual problems. If you didn’t spend where you said you would, you’re judging a forecast you didn’t run.
  • Forecast adherence is an operational constraint: creative throughput, channel lead times, and internal willingness determine whether the plan is even executable.
  • Re-forecasting is crucial – update weekly on actual spend-to-date plus the remaining quarter.
  • To make budget changes forecastable, enforce change-control rules (budget update = forecast update), and build a culture that can talk risk/expected value with finance.