You may have heard the term incrementality in the context of marketing attribution, and wondered what exactly it meant.
Incrementality is about measuring which actions would not have happened without a specific intervention. For example, if I spend money advertising my product, I need to know how many people bought it because of the ad.
When someone visits a website and buys something, advertising platforms identify that person and look in their databases to check if they previously saw an ad. If so, they report a conversion event, crediting the ad the user saw or clicked on. That’s marketing attribution.
Let’s say I spend $10,000 on Facebook ads, and they report that people who saw or clicked on those ads went on to buy $30,000 worth of products. If my profit margin is 40%, I just made a $2,000 profit on a $10,000 investment: ROI = ((30000 * 0.4) – 10000) / 10000 = 0.2.
However, counting all the people who bought after being advertised to is not the same thing as counting only the people who bought because of the ad. Correlation is not causation. Some of the people who saw my ad would have bought anyway, had I not run my campaign. That gap is what incrementality measures.
It’s an extremely important distinction. If half of the people who purchased would have done so without seeing an ad campaign, suddenly the numbers no longer work out. The campaign goes from a money earner to a loss maker.
If $15,000 worth of sales would have happened anyway, the campaign only drove an incremental $15,000. At a 40% profit margin that nets out to $6,000 in profit, less than the initial $10,000 investment: ROI = ((15000 * 0.4) – 10000) / 10000 = -0.4. Instead of making $2,000, the ad campaign lost $4,000!
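The arithmetic above can be captured in a small helper. This is a minimal sketch using the figures from the example: the same $10,000 spend and 40% margin, first crediting all $30,000 of reported sales, then only the $15,000 that was actually incremental.

```python
def roi(attributed_sales, margin, spend):
    """Return on investment: profit on attributed sales, net of spend, over spend."""
    return (attributed_sales * margin - spend) / spend

# Attributed view: credit all $30,000 of reported sales to the ads.
print(roi(30_000, 0.4, 10_000))  # ≈ 0.2, a 20% return

# Incremental view: only $15,000 of those sales were caused by the ads.
print(roi(15_000, 0.4, 10_000))  # ≈ -0.4, a 40% loss
```

The same spend and the same reported sales flip from profit to loss purely based on how much of the revenue you count as incremental.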
In the beginning of a company’s life, attribution and incrementality are effectively the same thing. If nobody knows about your product and you run an advertising campaign, you can confidently credit any spike in sales to that advertising. At this stage, everything that works makes an obvious incremental difference.
However, as a company matures, its marketing mix gets more complex. People hear about the product from friends, not just via advertising. The company gets press coverage, establishes a greater physical presence, and expands into less measurable advertising channels like influencers or TV.
Now it’s no longer a simple exercise to tease out the impact of a specific ad campaign separate from everything else. Too many confounding variables drive behavior to account for them all. If you have a well-known brand, do a lot of retargeting ads, or get a lot of sales through affiliates, it’s likely many of your reported sales aren’t incremental.
Crucially there’s no analytics software or tracking pixel that can measure every interaction and touchpoint that led to a purchase. We can’t track word of mouth, identify who saw a billboard, or measure which influencer convinced them to buy.
Thankfully, we have an elegant solution. Randomized controlled experiments. We split our target audience into two groups, and only show the ad to one of the groups. That way, we know any difference in purchases between the two groups was down to the ad.
Because both groups were exposed to the same conditions, and randomly assigned to see the ad (or not), we can control for all of the other factors affecting sales, without even having to know what they are. Both the control and test will have conversions, but (hopefully) the test group will have more: the incremental conversions. Randomized controlled trials (RCTs) like this are the gold standard of proving one thing causes another.
In advertising, most platforms offer incrementality testing functionality, where they handle the randomization and reporting of the experiment. If you want to run a study independently, you can test any platform by showing an unrelated ad (e.g. for a charity) to a randomly assigned control group. If that ad still reports conversions, you know that proportion wasn’t incremental.
The structure of some channels makes it impossible to switch your campaign on and off for randomly assigned groups of people. Linear TV campaigns can’t track who saw the ad, so the random assignment to groups is done by geographic region instead. Assign a share of well-matched U.S. states to a control group that doesn’t get advertising, and compare results.
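A geo experiment like this reduces to comparing totals across matched regions. The state pairings and weekly sales figures below are invented purely for illustration:

```python
# Hypothetical weekly sales by state: ads running in "test" states,
# switched off in "control" states matched on size and buying behavior.
test_states = {"TX": 1200, "FL": 950, "OH": 610}
control_states = {"CA": 1100, "NY": 870, "PA": 560}

# The gap between the groups estimates the campaign's incremental sales.
lift = sum(test_states.values()) - sum(control_states.values())
print(lift)
```

In practice the matching step is the hard part: the control regions have to track the test regions closely enough that any gap can be credited to the advertising rather than to regional differences.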
In some channels there’s no real way to run a true incrementality experiment. For example, Google gives you no control over which version of a page ranks on its search engine results page, so experiments have to be grouped at the page template level. You could make a change to 50% of the category pages on your website and see if those pages rank higher in Google than the others.
Even when running an experiment is possible, it’s not always feasible. Experiments come at a real cost to set up and maintain: not just your team’s time and attention, but also the opportunity cost of turning off marketing for a proportion of your audience. Whatever share of the audience doesn’t get to see your marketing will buy less than the people who do.
When you can’t run experiments for practical, political, or economic reasons, modeling is the preferred solution. Causal inference models are growing in popularity, spurred in part by the 2021 Nobel Prize in economics for work on natural experiments. In some scenarios it’s possible to simulate the conditions of an experiment using a regression discontinuity, an instrumental variable, or a natural experiment.
However, the most popular and practical method, in regular use since the 1960s, is Marketing Mix Modeling (MMM). By matching spikes and dips in sales to events and actions in marketing, MMM can provide an estimate of the incrementality of each channel. Using modern Bayesian MMM, it’s possible to calibrate a model against the results of incrementality tests.
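At its core, a mix model is a regression of sales on marketing activity. The sketch below fits a deliberately bare-bones version to synthetic data with NumPy least squares; the spend levels and true ROIs are invented, and a production MMM would add adstock, saturation curves, seasonality, and Bayesian priors calibrated with test results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two years of weekly spend on two hypothetical channels.
weeks = 104
tv_spend = rng.uniform(0, 10_000, weeks)
search_spend = rng.uniform(0, 5_000, weeks)

# Invented ground truth the model should recover.
true_baseline, true_tv_roi, true_search_roi = 50_000, 1.5, 3.0
sales = (true_baseline
         + true_tv_roi * tv_spend
         + true_search_roi * search_spend
         + rng.normal(0, 2_000, weeks))  # unexplained noise

# Bare-bones mix model: regress sales on spend to estimate each
# channel's incremental contribution per dollar.
X = np.column_stack([np.ones(weeks), tv_spend, search_spend])
baseline, tv_roi, search_roi = np.linalg.lstsq(X, sales, rcond=None)[0]
print(round(tv_roi, 2), round(search_roi, 2))  # estimates near 1.5 and 3.0
```

The intercept captures baseline sales that would happen with zero spend, which is exactly the non-incremental portion that attribution reports tend to over-credit to ads.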
Marketing attribution is a complex topic, and establishing incrementality isn’t going to be a solved problem in our lifetime. No experiment or model is perfect, but these methods can significantly reduce our uncertainty when making budget decisions. The goal isn’t a perfect distribution of credit across each campaign; you just have to be less wrong than the competition about what’s working.