The Hidden Truth Behind Incrementality Testing Results

Is your incrementality testing partner hiding something from you?

We’ve seen it too many times: a brand comes to us, proud of having run an incrementality study, shares the results, and says something like: “Look, we ran an incrementality study and there was a 3% lift, and the result was statistically significant!”

After congratulating them for actually running an incrementality study, we unfortunately have to tell them that, without more information, the results they shared are meaningless.

So why are the results meaningless? As a business, you don’t care whether a marketing channel has “any positive impact”; you care whether it has a profitable impact. The problem with a number like “lift %” is that it doesn’t take into account how much money you spent to get that lift.

To actually interpret the results of the test, you need to convert the lift into an incremental ROI or incremental CPA number. That 3% lift from your original result could be highly profitable or highly unprofitable depending on how much you had to spend to get it.
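As a sketch of that conversion, here's what it looks like with made-up numbers (the baseline, lift, and spend below are illustrative assumptions, not figures from any real study):

```python
# Hypothetical numbers: convert a raw lift percentage into an
# incremental CPA so it can be judged against unit economics.
baseline_conversions = 10_000  # conversions expected without the channel
lift = 0.03                    # the measured 3% lift
spend = 60_000.0               # dollars spent on the channel during the test

# Incremental conversions attributable to the channel:
incremental_conversions = baseline_conversions * lift  # 300

# Cost per incremental conversion:
incremental_cpa = spend / incremental_conversions      # $200

print(f"Incremental CPA: ${incremental_cpa:.2f}")
```

Whether that $200 incremental CPA is good or terrible depends entirely on your margins, which is exactly why the lift percentage alone tells you nothing.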

But you’re still not done! Next, you need to know the uncertainty intervals associated with the results of the experiment: they tell you the range of values that are consistent with the experimental results.

We’ve had customers share results with us saying “Our incrementality experiment indicates that the channel has an incremental CPA of $75,” but when we look under the hood at the details of the experiment, it turns out that the uncertainty interval ranges from a CPA of $25 to $750.

What this means is that the results of the experiment are consistent with an incremental CPA covering that full range, from $25 to $750. The $75 just happens to be the midpoint of the range before converting from ROI to CPA, but isn’t in any sense the “most likely” true value. 
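This happens because CPA is the reciprocal of conversions-per-dollar, and reciprocals are nonlinear. A small sketch with hypothetical numbers chosen to mirror the $25–$750 story (the spend and conversion interval are assumptions for illustration):

```python
# Hypothetical: the same experiment summarized on the
# "incremental conversions" scale vs. the CPA scale.
spend = 7_500.0                                       # dollars spent in the test
conv_low, conv_point, conv_high = 10.0, 100.0, 300.0  # assumed interval + point estimate

# CPA = spend / incremental conversions. Because 1/x is nonlinear and
# decreasing, the HIGH end of conversions maps to the LOW end of CPA:
cpa_low = spend / conv_high    # $25  (best case)
cpa_point = spend / conv_point # $75  (point estimate)
cpa_high = spend / conv_low    # $750 (worst case)

# The point estimate is nowhere near the midpoint of the CPA interval:
cpa_midpoint = (cpa_low + cpa_high) / 2  # $387.50
```

The takeaway: a point estimate that sits in the middle on one scale can sit far from the middle after the conversion, so you have to look at the interval itself.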

Once we explained this to them, they realized that the test they had run actually wasn’t very informative because it didn’t have enough statistical power to meaningfully differentiate between a profitable CPA (under $100) and an unprofitable CPA. 
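The implied decision rule can be sketched in a few lines: a test is only informative for the business if the whole CPA uncertainty interval lands on one side of your breakeven CPA. The function name and the numbers below are illustrative, not from any Recast API:

```python
def is_decisive(cpa_low: float, cpa_high: float, breakeven_cpa: float) -> bool:
    """True if the interval cleanly separates profitable from unprofitable."""
    return cpa_high < breakeven_cpa or cpa_low > breakeven_cpa

# An interval that straddles the $100 breakeven can't inform the decision:
print(is_decisive(25.0, 750.0, 100.0))  # False
# An interval entirely below breakeven can:
print(is_decisive(40.0, 80.0, 100.0))   # True
```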

This was unfortunate for them to learn, but without Recast there to help them interpret those test results, they would have been totally led astray.

So, if you’re running incrementality lift tests make sure that you’re looking at the full uncertainty intervals for incremental CPA or ROI estimates. Anything else is likely to lead you astray. 

And if you’re working with an external vendor, definitely try to work with a vendor that makes these estimates transparent so that you can make good decisions!

About The Author