Back in 2020, I was consulting for a B2B client with an enormous number of products. Their buyers buy in bulk; they don’t buy one of an item, they buy 100 or 1,000 of an item. We did a series of A/B split tests, with different email templates, to find the ‘sweet spot’ – the perfect number of products to include in each email to optimize revenue.
I shared the methodology and results of the first A/B split test last month in ‘Case Study: 61% decrease in revenue-per-email (RPE), but we still learned a few things.’ Here I’m discussing the second test in the series. When I’ve given each test its own post I’ll circle back, connect the dots, and let you know where we ended up.
Remember – your mileage may vary. Case studies like these are great for getting ideas of what to A/B split test with your own email list and creative. They aren’t a one-size-fits-all solution to optimize revenue.
On to the test!
The control email is one that had performed well in the past, so we put it on the send schedule.
The product blocks appear in green in the image to the left. In this case, each block featured a product category (example: ‘Top Gourmet Gifts’), instead of just a single product; here’s what each product block included:
- An image of two to four items from the product category
- The name of the product category
- A text link to “view online”
As you can see, the control had four product blocks; we decided that we would test including six. For the test, we started with the control creative and added another row of product category blocks. The only difference between the two versions was the extra row of product category blocks (the first four product blocks used the same creative, focusing on the same four categories).
The hypothesis here was that showing more products in an email increases the likelihood that the recipient will see an item that is of interest to them. But as we were doing these tests, we kept choice paralysis in mind; that’s when too many options cause people to leave without choosing any of them.
So, if you read the headline, you already know the outcome – but what do you think the metrics were? What level of variance did we see in revenue-per-thousand-emails-sent (RPME)? 10%? 25%? 50%? More?
The six-product-block test brought in 15% less revenue than the four-product-block control.
RPME was our key performance indicator here; both cells (control and test) had equal send quantities.
You might wonder why we used RPME instead of revenue-per-email-sent (RPE). It’s because, in this case, the RPE would be under $1.00; when this is the case, I shift to RPME because it’s easier to see the variance when you’re looking at larger numbers.
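To make that concrete, here’s a quick sketch of the two metrics. The revenue and send numbers below are made up for illustration; only the relationship between RPE and RPME is the point:

```python
# Illustrative figures only -- NOT the client's actual numbers.
revenue = 8_400.00    # total revenue attributed to this send
emails_sent = 50_000  # emails sent for this cell

rpe = revenue / emails_sent            # revenue per email sent
rpme = revenue / (emails_sent / 1000)  # revenue per thousand emails sent

print(f"RPE:  ${rpe:.4f}")   # a fraction of a dollar -- hard to eyeball
print(f"RPME: ${rpme:.2f}")  # same data, scaled to an easier-to-read number
```

Same underlying data, just scaled by 1,000 so small differences between cells are easier to see at a glance.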
The metrics can also tell us what caused the variance in RPME.
The largest contributor to the variance was the conversion rate from emails sent (CR). The test version brought in 21% fewer conversions (in this case, individual sales) than the control. A smaller percentage of people who received the test email went on to purchase.
There’s also a story to be told here about the average order value (AOV). In this case, the test creative bested the control by 8%. So even though fewer people bought from the test, those who did buy spent more money: 8% more, on average.
One more interesting note about this A/B split test: the diagnostic metrics, the open rates and click-through rates (CTR), were just about the same. Open rates were identical; the test version lagged the control in CTR, but only by 2%, which is within the margin of error.
The final word? It does appear that adding a third row of two additional product category blocks created choice paralysis; for this A/B split test, the control with four product blocks was clearly the winner.
Do your own ‘number of product blocks’ A/B split test and let me know how it goes! And watch this blog for the rest of the product block A/B split test series, as well as the wrap-up at the end.
Be safe, stay well, peace,