Updated: August 2024
Many brands want to know what the long-term effects of their marketing efforts are. The theory is that advertising today may boost brand awareness, which, in turn, might have long-term positive impacts for the business. Wanting to understand these long-term effects is good and important. However, measuring these effects correctly is very difficult, and marketing analysts need to be wary of methods or vendors that over-promise what is possible.
It is of course very easy to include brand metrics as inputs in a media mix model (MMM). Trivially easy, in fact. You will definitely get results out the other side of the model, and it’s pretty easy to structure a model to make those brand metrics look like they have very positive impacts on the business. It’s also easy to structure a model to make those brand metrics look like they have very small or no impacts on the business.
What’s difficult, and what’s actually important in a media mix model, is to get results that are correct, and to have confidence that those results are correct.
Let’s break down the problem a bit. In general, when companies are interested in the value of “brand” or the long-term effects of marketing, they are using a mental model like this:
- Marketing activity today has long-term impacts on “brand awareness”
- That brand awareness impacts the likelihood to purchase many months or years into the future
So there are two impacts that the company would like to measure. The impact of marketing today on brand awareness, and the impact of changes in brand awareness on purchase decisions. (Note that here I’m using “brand awareness” in the very broadest sense, not necessarily an aided / unaided awareness survey question.)
Unfortunately, there is no scientifically validated way of consistently measuring the first part of the impact. We have seen many vendors make claims that they are able to measure how marketing efforts impact brand awareness, but based on a careful review of their methods and a simulation approach to validating their methods, we do not believe they are able to accurately measure these impacts.
The second of the impacts – the impact of brand awareness on business performance – is a bit more straightforward and Recast has an approach that has been demonstrated to consistently work when actual long-term effects are present. (This is done via “contextual variables”, which you can take a closer look at in our model docs).
So, why is the first of these impacts so challenging to measure in practice?
- Brand awareness is not easy to measure. It's a vague concept, generally assessed via surveys. Those surveys tend to be both expensive (leading to small sample sizes) and slow (leading to infrequent reads).
- Brand awareness doesn’t have much variation. With only a few exceptions, most brands’ awareness will vary only slightly over the course of years, and much of that variation falls within the “margin of error” of the survey results.
These two factors combined mean that there is very little underlying signal in the data and an econometrician might say that the model is “unidentifiable”. Unfortunately, this doesn’t actually prevent an analyst from including a term in the model for “long term brand effects”, but what happens is that the analyst can effectively make that term say anything at all. Since there’s no underlying signal in the data, it’s trivially easy to specify a model that shows a high or low brand effect depending on which you prefer.
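The unidentifiability problem can be seen in a small simulation. Here is a toy sketch (all numbers are made up for illustration): awareness is essentially flat apart from survey noise, and by construction has zero true effect on sales. An ordinary least-squares fit still returns a coefficient for it, but with a confidence interval so wide that almost any story is consistent with the data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 104  # two years of weekly observations (illustrative)

# Awareness barely moves: flat around 36% plus noise on the order of
# the survey's margin of error. No true effect on sales is simulated.
awareness = 0.36 + rng.normal(0, 0.025, n)
media = rng.gamma(2.0, 50.0, n)                      # hypothetical weekly spend
sales = 1000 + 2.0 * media + rng.normal(0, 100, n)   # awareness plays no role

# Fit sales ~ intercept + media + awareness by OLS
X = np.column_stack([np.ones(n), media, awareness])
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)

# Classical OLS standard errors
resid = sales - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

print(f"awareness coefficient: {beta[2]:.0f} +/- {1.96 * se[2]:.0f}")
```

Because the regressor has almost no variation, the standard error on the awareness coefficient is enormous, so small modeling choices (priors, transformations, which covariates to include) can push the estimated "brand effect" wherever the analyst likes.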
I want to double-click on the uncertainty since this is critically important to understanding the appropriate way to analyze brand awareness metrics. Every good brand awareness survey vendor should communicate to you both the average metrics as well as the margin of error. The margin of error tells us the uncertainty in the results. So we can imagine receiving data like this from our survey vendor:
- October Unaided Awareness: 35% +/- 5%
- November Unaided Awareness: 37% +/- 5%
- December Unaided Awareness: 39% +/- 5%
Is the unaided awareness trending up? No. Or, it might be, but these survey results don’t tell us that since all of these results are within the margin of error of each other. It could very well be that brand awareness is flat or has even declined over this period, and the data are perfectly consistent with that story.
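A quick way to check this formally is to back out the standard error from the reported margin of error (assuming, as is standard, that the margin of error is a 95% half-width) and test whether the change between two reads exceeds what noise alone could produce. This is a simplified sketch that treats the two reads as independent with equal margins of error:

```python
import math

def trend_is_significant(p1, p2, moe, z_crit=1.96):
    """Return True if the change between two survey reads is larger
    than measurement noise alone could plausibly produce.

    p1, p2: point estimates as proportions
    moe:    reported 95% margin of error (half-width), assumed equal
            for both reads and independent between reads
    """
    se = moe / z_crit               # back out the standard error of one read
    se_diff = math.sqrt(2) * se     # standard error of the difference
    z = abs(p2 - p1) / se_diff
    return z > z_crit

# October (35% +/- 5%) vs December (39% +/- 5%)
print(trend_is_significant(0.35, 0.39, 0.05))  # False: the 4pt "trend" is within noise
```

With a 5-point margin of error, even a 4-point swing is comfortably inside the noise band, which is exactly why the October-to-December "trend" above tells us very little.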
Simply including the mean responses in a regression for a media mix model without incorporating the margin of error is a huge modeling mistake, which is unfortunately common with many econometrics vendors. You can read more about how this can go wrong in this blog post.
Because brand awareness metrics generally have very little movement (once accounting for measurement error), it’s very difficult for any statistical model to link marketing activity directly to those movements except in very specific circumstances (like when running a large discrete TV campaign). However, in our practical experience even those types of campaigns often fail to move the needle on brand awareness, and so there’s no pattern for a statistical model to find.
What we generally recommend brands do is the following:
- Include brand awareness metrics as a “contextual variable” in the MMM. Note that if you’re not using Recast you need to make sure that you can tweak your model structure so that this has a multiplicative effect on base sales and marketing efficiency.
- Do ad-hoc analysis for specific marketing campaigns to see if they move the needle on awareness metrics. But make sure to appropriately include the margin of error from the poll in your analysis!
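To make the first recommendation concrete, here is a toy sketch of what a multiplicative contextual-variable structure might look like: awareness scales the baseline and marketing-driven sales rather than adding its own separate term. All names, values, and the parameter `gamma` are illustrative assumptions, not Recast's actual parameterization (see their model docs for that):

```python
import numpy as np

def predict_sales(base, media_effect, awareness, gamma=0.8):
    """Toy multiplicative structure: awareness above its long-run
    average lifts total sales proportionally; awareness below it
    drags them down. gamma (illustrative) controls the strength.
    """
    awareness_lift = 1 + gamma * (awareness - awareness.mean())
    return (base + media_effect) * awareness_lift

base = 1000.0
media_effect = np.array([200.0, 250.0, 300.0])   # hypothetical weekly contributions
awareness = np.array([0.35, 0.37, 0.39])         # the survey reads from above

print(predict_sales(base, media_effect, awareness))
```

The design choice worth noting is the multiplicative form: an additive awareness term would claim credit for sales independently of everything else, whereas a multiplicative term says awareness changes how productive the baseline and media spend are, which matches the mental model described earlier in this post.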
With these two pieces of data you can at least start to stitch together the story on long-term brand effects.
Conclusion
Long-term effects of marketing are very tough to measure. While it’s pretty straightforward to estimate the impacts of brand awareness changes on business performance (Recast can help you do this!), for most brands it’s not realistic to reliably estimate how marketing activity impacts brand awareness.