How to Measure Affiliate Marketing

Affiliates are really hard to measure. Brands that do affiliate marketing often run into three main challenges:

1 – The realm of affiliate marketing is vast. It ranges from coupon sites to more intricate partnerships, like a strategic collaboration with BuzzFeed. Each offers its own value proposition, and each often needs to be measured differently.

2 – Another dilemma is understanding the true driver behind a purchase. Let’s say your affiliate offers a discount and customers buy through them. Would they have made the purchase without that discount? Did you lose margin on a sale that you would’ve gotten anyway? How incremental was it?

3 – Brands overcredit the channel because they model it wrong. You can’t model branded search and affiliates by treating them just like any other channel. We’ve seen it – modelers plug their data directly into the MMM, and what comes back is that those channels are incredibly effective.

Looks great, but it’s not based on the ground truth. Here’s why:

Why MMM Overcredits Affiliates:

Let’s start with this – there are two types of affiliates, and they are measured completely differently:

Upfront Payment Affiliates: 

Some partners are remunerated based on impressions or engagements rather than conversions – for example, a brand paying an affiliate a set fee to promote it to the affiliate’s email list.

Upfront affiliates are the easier of the two types, and their impact should be analyzed similarly to traditional marketing channels. Not too problematic.

Performance-based Affiliates: 

Here’s where things get complicated. These affiliates are only paid upon successful conversions. Their performance can be evaluated based on conversion rates and the quality of the leads they generate.

In standard marketing channels, there’s a clear flow: marketing spend leads to impressions and then to eventual conversions. Affiliate marketing, however, often turns this on its head. Any affiliate spend going out the door is very tightly related to your revenue, since there is only spend when there is a conversion.

If you just include those channels in your model, they’re going to soak up way too much credit because they’re not truly driving incremental conversions – or, at least, they might not be.
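
To make that concrete, here’s a minimal simulation – hypothetical numbers, not client data – of what happens when performance-based affiliate spend gets dropped into an ordinary regression. The affiliate drives zero incremental conversions by construction, but because its payouts are just a commission on conversions, the naive fit hands it most of the credit:

```python
# Minimal sketch: performance-affiliate spend is a function of conversions,
# so a naive regression credits it heavily even when it drives nothing.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_weeks = 104

# One genuinely causal paid channel plus baseline demand (all numbers made up).
paid_spend = rng.uniform(5_000, 20_000, n_weeks)
conversions = 500 + 0.08 * paid_spend + rng.normal(0, 50, n_weeks)

# Performance affiliate: a $15 commission on the ~30% of conversions it gets
# credited for. By construction it drives ZERO incremental conversions.
credited_conversions = 0.30 * conversions + rng.normal(0, 5, n_weeks)
affiliate_spend = 15 * credited_conversions

# Naive MMM-style regression treating affiliate spend as just another channel.
X = sm.add_constant(np.column_stack([paid_spend, affiliate_spend]))
fit = sm.OLS(conversions, X).fit()
print(fit.params)
# Most of the credit shifts to the affiliate (coefficient near 1 / (15 * 0.30)),
# while the truly causal paid channel's coefficient collapses toward zero.
```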

Recast’s own early attempts to integrate affiliate marketing into the model ran into exactly this conundrum. The model became too literal, often capturing only the direct payment for affiliate conversions without offering deeper insight into incrementality. It highlights a structural challenge in applying traditional MMM to an affiliate’s closed-loop system.

We had to change this because that’s just not how the real world works.

MMM and Experiments for Affiliates

The problem is that those channels are very tightly related to the amount of revenue or the number of conversions that are being driven.

Instead, you want a model that understands the way those channels actually work and how they differ from other types of paid media channels. It’s very important to take the true causal structure into account, as opposed to just dropping all of those columns into a linear regression and getting back very biased results.
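
What that looks like in practice will vary, but here’s a hedged sketch (an illustration, not Recast’s actual model): first check whether the payout per credited conversion is essentially constant – a sign that spend is an outcome of conversions rather than a cause – and, if so, keep the channel out of the regression and carry its incrementality as an explicit, uncertain assumption, ideally anchored by an experiment.

```python
# Hedged sketch (an illustration, not Recast's actual model): treat
# performance-affiliate payouts as downstream of conversions, not a regressor.
import numpy as np

# Hypothetical weekly totals: conversions and affiliate payouts at a
# $15 commission on roughly 30% of conversions (made-up numbers).
conversions     = np.array([1200, 1350, 1100, 1500, 1420, 1280])
affiliate_spend = np.array([5400, 6075, 4950, 6750, 6390, 5760])

# 1) Diagnostic: if the payout per conversion is (nearly) constant, spend is
#    an outcome of conversions -- the causal arrow points the wrong way.
implied_rate = affiliate_spend / conversions
print(f"payout per conversion: mean {implied_rate.mean():.2f}, "
      f"cv {implied_rate.std() / implied_rate.mean():.3f}")

# 2) Keep the channel out of the regression and carry its incrementality as an
#    explicit, uncertain assumption (ideally anchored by an experiment).
assumed_incrementality = 0.20  # hypothetical: 20% of credited sales are incremental
credited_share = 0.30          # hypothetical: affiliate is credited with 30% of conversions
incremental_conversions = assumed_incrementality * credited_share * conversions
print(f"implied incremental conversions per week: {incremental_conversions.mean():.0f}")
```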

Experiments: Geo-Testing and Hold-Out Testing

It’s very challenging to verify results on affiliate channels because they’re very difficult to experiment with. 

Geo Testing in Affiliate Campaigns

Brands have tried geo-testing when dealing with affiliates – especially those that have strong regional influence or promote products with regional appeal. By segmenting campaigns geographically, marketers can compare the performance of affiliates in specific regions against similar regions with no spend.

For instance, if a particular affiliate is believed to have a strong following in the Midwest, a campaign could be launched solely in that region and its performance compared to a similar region where the campaign wasn’t executed. This regional analysis can provide granular insights into an affiliate’s influence and effectiveness.
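
As a rough sketch of the readout – with hypothetical numbers, and a simple difference-in-differences is only one way to analyze a geo test:

```python
# Rough geo-test readout (hypothetical numbers): difference-in-differences on
# weekly conversions between test regions (campaign live) and matched controls.
import numpy as np

rng = np.random.default_rng(7)
weeks = 8

# Weekly conversions before and during the campaign, per region group.
test_pre, test_during = rng.poisson(1000, weeks), rng.poisson(1080, weeks)  # e.g. Midwest
ctrl_pre, ctrl_during = rng.poisson(990, weeks), rng.poisson(1000, weeks)   # matched regions, no spend

lift = (test_during.mean() - test_pre.mean()) - (ctrl_during.mean() - ctrl_pre.mean())
print(f"estimated incremental conversions per week: {lift:.0f}")
```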

But that’s not a perfect experiment. It’s hard to tell your affiliate partner to show a link in their blog post only to people coming from one half of the United States and not the other half. It just doesn’t work that way.

Holdout Testing in Affiliate Marketing

The other option is a holdout test. The crudest version is to turn off your affiliate program for some period of time and compare before and after.

A cleaner version splits an audience: you take a group of customers you would generally target, split it in two, and show the affiliate placement to one group and not the other to estimate incrementality.

Post-campaign, the behavior of the unexposed group is compared with that of the group that saw the affiliate’s content. If there’s no significant difference in conversions or sales between the two groups, it suggests the affiliate campaign didn’t provide substantial added value.
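
A minimal sketch of that readout – hypothetical numbers, using a simple two-proportion z-test, which assumes clean random assignment that affiliates rarely allow:

```python
# Minimal holdout readout (hypothetical numbers): compare conversion rates of
# the exposed group and the held-out group with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

conversions = [620, 585]       # exposed group, holdout group
group_sizes = [50_000, 50_000]

z_stat, p_value = proportions_ztest(conversions, group_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# A large p-value here would suggest the affiliate placement added little
# beyond what the holdout group converted at anyway.
```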

However, that’s not really possible with a lot of affiliate sites. You can’t tell your affiliate partners to show placements to these user IDs and not to those; most partners aren’t set up to do that, and even when they are, the results are very noisy.

Does Recast MMM measure affiliates?

I believe Recast now has a smart structure around how we model the impact of affiliates but, in the interest of full transparency, it’s not easy to verify whether we’re really getting to the ground truth or not.

We’d rather be upfront with our clients and help them deal with the uncertainty affiliates create than model the channel erroneously, hide that uncertainty, and hand them wrong data that leads to badly biased budget-allocation choices.
