One of the most critical factors in building an accurate media mix model is choosing the right historical window for it to digest. Too little data, and you risk missing important historical trends. Too much, and you introduce outdated information that no longer reflects current marketing performance.
Recast uses 27 months of historical data to strike the right balance between capturing seasonality, accounting for marketing carry-over effects, and making sure we add a proper burn-in period so that the model isn’t biased by missing pre-period data.
In this article, we’ll break down exactly why 27 months is the sweet spot and why going further back can lead to diminishing returns.
The 27-Month Requirement: Why Recast Needs This Specific Window
Two Years (24 Months) to Capture Seasonality
Seasonality is one of the biggest drivers of marketing performance. To accurately measure seasonality, a model must see at least two full cycles of data. If sales spike every July, the model needs to see it happen twice to confidently attribute that pattern to seasonality rather than an anomaly.
Without two years of data, the model has too much uncertainty about whether fluctuations are driven by actual trends or random variation. With two full years, it can recognize recurring seasonal effects and adjust accordingly, so that marketing impact isn’t over- or under-credited.
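To make the intuition concrete, here’s a minimal sketch (simulated data, not a real model) of why a single year isn’t enough: with twelve months you only ever see one July, so a spike there is indistinguishable from a one-off anomaly, while twenty-four months gives the model a repeated observation to anchor the seasonal effect on.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical monthly sales: a baseline of 100, a +40 lift every July, plus noise.
def simulate(n_months):
    months = np.arange(n_months) % 12          # 0 = Jan, 6 = Jul
    sales = 100 + 40 * (months == 6) + rng.normal(0, 10, n_months)
    return months, sales

# With 12 months, July is observed once: the spike could be seasonality or a
# one-off anomaly. With 24 months, July is observed twice, so a recurring
# pattern can be separated from random variation.
for n_months in (12, 24):
    months, sales = simulate(n_months)
    july, others = sales[months == 6], sales[months != 6]
    print(f"{n_months} months: {len(july)} July observation(s), "
          f"estimated July lift = {july.mean() - others.mean():.1f}")
```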
Three Additional Months to Account for Burn-In Effects
Marketing doesn’t operate in a vacuum—its effects carry over across time. A TV ad today might still be influencing sales 30, 60, or even 90 days from now. This creates a problem at the very beginning of a dataset, where the full impact of past marketing efforts is missing because there’s no earlier data available.
To handle this, Recast applies a 90-day burn-in period. During this time, the model uses spend data but does not estimate marketing effectiveness from it. This lets the model establish a foundation before it starts attributing sales to marketing spend.
This is actually a very common problem in MMM – the model incorrectly assumes that early sales are entirely organic because it lacks visibility into prior marketing activity. By including a burn-in period, we make sure that we measure incrementality accurately from the start of the usable dataset.
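Here’s a minimal sketch of how a burn-in period works in practice, using a simple geometric carry-over (adstock) as the illustrative transformation; the decay rate and series length below are made-up numbers, not Recast’s actual model. The first 90 days warm up the carry-over state but are excluded from estimation, because their adstocked spend is understated by whatever happened before the dataset begins.

```python
import numpy as np

def geometric_adstock(spend, decay=0.9):
    """Carry-over: today's effective spend is today's spend plus a decayed
    tail of all past spend."""
    adstocked = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        adstocked[t] = carry
    return adstocked

# Hypothetical daily spend series covering ~27 months (~820 days).
rng = np.random.default_rng(0)
spend = rng.gamma(shape=2.0, scale=500.0, size=820)
adstocked = geometric_adstock(spend, decay=0.9)

# Burn-in: the first 90 days of adstocked spend are understated because
# pre-period spend is unobservable. Use those rows to warm up the carry-over
# state, but exclude them from the effectiveness estimation.
BURN_IN_DAYS = 90
usable = np.arange(len(spend)) >= BURN_IN_DAYS
print(f"Days used for estimation: {usable.sum()} of {len(spend)}")
```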
Why Not Go Further Back? The Limits of Historical Data
It might seem logical to provide more than two years of historical data—why not five? But going further back often introduces more problems than it solves.
1. Marketing Performance Changes Over Time
Marketing effectiveness isn’t static. A Facebook ad that worked well three years ago may be much less effective today due to changes in audience behavior, ad fatigue, platform algorithms, and competition. If we use very old data, we risk introducing outdated assumptions into the model.
2. Older Data is Less Reliable
The further back you go, the harder it is to ensure clean, accurate data. Marketing tracking methodologies change, reporting standards evolve, and agencies may not keep consistent records. We’ve seen this too often: the inconsistencies just add noise to the model.
3. More Data Increases Computation Time
Recast updates models weekly so you can make decisions in flight. Adding years of additional data slows down computation without adding significant value.
After years of testing and iteration, we’ve found that two years of accurate data, plus the burn-in period, is all you really need.
The Myth of “Last Date” and Update Frequency
As a final note, we’ve seen “hot takes” that MMMs shouldn’t be updated frequently because marketing effects can take months to materialize. The reasoning goes that if campaigns have 60-90 day carry-over effects, then updating the model weekly would cause it to miss their full impact.
This logic is flawed. No matter how frequently an MMM is updated, it always has a last date—whether that’s last week, last quarter, or last year.
If a model is incorrectly attributing recent marketing spend due to carry-over effects, the problem exists regardless of update frequency – a poorly calibrated model will misattribute marketing impact, whether it’s refreshed weekly or annually.
Recast solves this by always estimating the future impact of current marketing activity. Recent campaigns aren’t unfairly penalized just because they haven’t fully materialized in the sales data yet. This ensures that MMM results are stable, reliable, and actionable, no matter how frequently the model is updated.
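As an illustration of the idea (a sketch under simplifying assumptions, not Recast’s actual methodology): if carry-over decays geometrically at some daily rate, then the share of a campaign’s total impact already visible k days after the spend is 1 − decay^k, and a model can explicitly credit the remaining decay^k share as impact still to come rather than concluding the spend underperformed.

```python
# Minimal sketch with a hypothetical daily carry-over rate; with geometric
# decay, the share of a dollar's total effect realized k days after spend
# is 1 - decay**k, so the remainder can be credited as future impact.
decay = 0.95

for days_since_spend in (7, 30, 60, 90):
    realized = 1 - decay ** days_since_spend
    print(f"{days_since_spend:3d} days after spend: "
          f"{realized:.0%} of total impact realized, "
          f"{1 - realized:.0%} still to come")
```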
TL;DR
Recast’s 27-month approach includes:
✅ Two full years of data to capture recurring seasonal trends.
✅ A 90-day burn-in period so that early sales data isn’t misinterpreted.
If you’re considering an MMM, here are three questions to ask your vendor:
- How do you handle seasonality?
- Do you apply a burn-in period?
- How do you make sure recent marketing activity isn’t under-credited?
The right answers to these questions will help you get an MMM that delivers accurate, actionable insights you can actually use to drive better marketing decisions.