Most small e-commerce brands probably don’t need anything more complicated than digital tracking tools for measuring marketing performance.
But when the channel mix gets more complex and your marketing grows, digital tracking is simply not enough. Past $100k+ / month in paid media budget, it’s a good time to start running some experiments and building your testing muscle.
Around the $5-10M mark in media spend, MMM might start to make sense for your brand. But even if you do the modeling correctly, it won’t work if your organization isn’t prepared to understand the results and act on them.
As budgets get larger, the most important thing for an organization is to start talking seriously about incrementality and how to leverage experiments. You need to make sure your business is thinking about incrementality and understands the drawbacks of digital tracking so that, once it’s time to do sophisticated MMM, everyone understands why you’re embarking on that journey and what to do with the information.
So, as you scale, start running experiments and building a testing practice. The more of that foundation you build now, the better you’re setting yourself up for the success of your future MMM projects.
How to build an experimentation mindset for your marketing team:
Changing the culture of an organization doesn’t happen overnight. Start small – you don’t need a flawless testing program from day one. But laying the foundation is critical. Start running incrementality tests, perhaps annually, to measure the true effect of your marketing spend in any channel.
There are three common types of tests to get you started:
- The gold standard of testing remains the individual-level randomized controlled trial (RCT). These are expensive and take a lot of work, but they give the most reliable information.
- Geo-testing, where you define a subset of regions, turn off advertising in some of them, and measure the difference in sales between the control and test regions. It’s easier to run in some channels, but less precise than an individual-level RCT.
- Before-and-after tests, where you turn off spend in a channel (or dramatically increase it) and compare results before and after the change. These are the most straightforward to run, but they only give a rough estimate of incrementality, since other factors can shift over the same period.
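To make the geo-testing idea above concrete, here is a minimal sketch of how a basic readout might be computed. The region names and sales figures are entirely hypothetical, and a real geo-test would use carefully matched regions (or a synthetic-control method) rather than a raw average comparison:

```python
# Minimal geo-test readout sketch (all numbers hypothetical).
# Treatment regions kept advertising on; control regions turned it off.
treatment_sales = {"north": 120_000, "east": 95_000, "west": 110_000}
control_sales = {"south": 100_000, "central": 90_000}

avg_treatment = sum(treatment_sales.values()) / len(treatment_sales)
avg_control = sum(control_sales.values()) / len(control_sales)

# Estimated incremental effect of the channel during the test window.
lift_per_region = avg_treatment - avg_control
lift_pct = lift_per_region / avg_control

print(f"Avg sales per treatment region: ${avg_treatment:,.0f}")
print(f"Avg sales per control region:   ${avg_control:,.0f}")
print(f"Estimated incremental lift:     {lift_pct:.1%}")
```

In practice you would also want a confidence interval around that lift number, which is where matched-market selection and enough test duration matter.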
As your testing capabilities increase, remember that every marketing experiment is just a snapshot in time. A test run in February gives you key insights into how a channel performed… in February. As you move further into the year, that test’s relevance goes down.
Everything evolves – your business, the creatives you’re using, the marketing platforms, and even your competitors. You need to test consistently.
How to measure the value of an experiment:
Testing can be scary for marketers, who often operate with the mindset that if they make a bet and it doesn’t pay off, their job could be on the line. I empathize, but if you’re at an organization where that’s a real concern, then maybe you’re at the wrong organization.
Let’s say you double the budget on a channel as an experiment and you see little to no return. You’re not “wasting” your money; you’re paying to reduce uncertainty. That has a real dollar value, because you’re now going to make smarter decisions that will make or save you millions of dollars.
Or let’s say you run a holdout experiment and pull back investment in a channel. You might “lose” some sales during the holdout, but the information you gain about that channel’s incrementality will be far more valuable.
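The “paying to reduce uncertainty” argument can be put in rough numbers. The figures below are purely illustrative assumptions, not benchmarks, but they show how a holdout test that “costs” revenue can still pay for itself many times over by redirecting future spend:

```python
# Hypothetical value-of-information arithmetic for a holdout test.
# All figures are illustrative assumptions.
monthly_channel_spend = 200_000   # current spend in the channel being tested
holdout_revenue_loss = 50_000     # sales "lost" while the channel was paused

# Suppose the test reveals the channel's true incremental ROAS is only 0.5,
# far below what platform-reported numbers suggested.
true_incremental_roas = 0.5

# Without the test, you'd keep spending at a real loss for another year.
months = 12
waste_avoided = monthly_channel_spend * (1 - true_incremental_roas) * months

net_value_of_test = waste_avoided - holdout_revenue_loss
print(f"Cost of the test:      ${holdout_revenue_loss:,.0f}")
print(f"Future waste avoided:  ${waste_avoided:,.0f}")
print(f"Net value of the test: ${net_value_of_test:,.0f}")
```

The exact numbers will differ for every brand; the point is that the cost of the test should be weighed against the spend decisions it informs, not against the revenue in the test window.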
The value of the test is not in the number of additional customers that you got or the number of customers that you potentially lost. The value of the test is in making more informed decisions in the future.
We really want marketers to think about tests not in terms of their cost in the moment, but in terms of the long-term value of the learnings.
Brands that refuse to place bets, and get too worked up about making sure no dollar is wasted, often find that most of their dollars get wasted anyway, because they never get a good read on what’s truly working and what’s not.
Learning how to get good at marketing requires taking risks and making bets knowing that, by definition, some won’t pay off – but the information you gather will save you more money on the backend.
What’s great about the Recast platform is that we can help marketers be structured in their thinking: where we can and can’t get a read, and how to phase in a test so that the business is comfortable with it while still delivering the learnings we need about how truly incremental investment in that channel is.
How modern marketing teams think about testing:
When we think about the modern marketing teams that get the most out of Recast, we think of forward-thinking marketers who don’t fall into the inertia trap: just doing what they’ve always done because it worked in the past, with no one willing to stick their neck out and make a change – even when it’s no longer working so well.
A lot of companies where Facebook really crushed it seven years ago still put a ton of money into that channel even though performance keeps getting worse and worse.
Everyone sort of knows that this is happening, but no one wants to go make the pitch to go try a new channel or try a new tactic. Those sorts of marketing teams really struggle with a tool like Recast because we are designed to help marketers place bets and try new things.
No one has a crystal ball, but we can help you place high-probability bets. 70% of the time it might work out, but that still means that 30% of the time it might not – and we have to all be comfortable with that.
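The “70% of the time it works out” framing is really an expected-value argument: a portfolio of positive-expected-value bets wins over time even though individual bets fail. A sketch with hypothetical payoffs:

```python
# Expected value of a portfolio of marketing bets (hypothetical payoffs).
p_win = 0.70              # hit rate for well-researched bets
gain_if_win = 300_000     # upside when a new channel or tactic works
loss_if_lose = 100_000    # write-off when it doesn't

ev_per_bet = p_win * gain_if_win - (1 - p_win) * loss_if_lose
print(f"Expected value per bet: ${ev_per_bet:,.0f}")

# Over many bets, roughly 30% will lose money, and the portfolio
# still comes out well ahead.
n_bets = 10
print(f"Expected value over {n_bets} bets: ${ev_per_bet * n_bets:,.0f}")
```

The specific payoffs are assumptions; the takeaway is that judging each bet in isolation (“did this one pay off?”) is the wrong lens when the portfolio has positive expected value.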
We want to take calculated risks, go out, make changes, and be bold. Those are the types of customers that we really like to work with, because if you’re not ready to make a change to your marketing budget based on the new insight that’s coming from Recast, then we’re not going to create any value for you.