Email split test results identical? Do this…

Oh, the disappointment. You’ve done everything right, followed email A/B testing best practices, eagerly awaited the test outcome, plugged the results into a split test calculator and boom, there it is: the calculator tells you there’s no difference in the results.

All is not lost; this post is about what you should do next.

Experience and skill play an important part in picking what to test. However, everyone gets tests that seem to fall flat, and the truth is that testing is also an ‘at bats’ game. The more tests the better; just running an occasional test is not enough to make a significant improvement to ongoing results.

As a guide, it’s normal to see a third of test variants produce an uplift, a third no difference and a third a performance decrease. Optimization is about repeated testing and re-application of what’s learnt each time.

So what can you do if the test results are identical?

  1. Repeat the test with a larger sample size
  2. Review differences in other metrics
  3. Create a new hypothesis

Repeat with larger sample size

If there was a difference but the statistical significance was inconclusive, say 70% or 80%, then run the test again with a larger sample size. The larger sample will allow you to establish whether the difference was random noise or real.
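To put numbers on this, here’s a minimal Python sketch using a standard two-proportion z-test: one function reports how significant the observed difference was, the other estimates roughly how many recipients per variant a re-run would need. The conversion counts and target uplift below are illustrative assumptions, not real campaign data.

```python
# Minimal sketch: significance of a split test result and a rough sample
# size for re-running it. Figures are illustrative, not real campaign data.
from scipy.stats import norm

def significance(conv_a, n_a, conv_b, n_b):
    """Two-sided confidence that variants A and B really differ."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    return 1 - 2 * norm.sf(abs(z))                    # e.g. 0.95 = 95%

def sample_size_per_cell(p_base, rel_uplift, alpha=0.05, power=0.8):
    """Rough recipients needed per variant to detect a relative uplift."""
    p_var = p_base * (1 + rel_uplift)
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return int((z_a + z_b) ** 2 * variance / (p_base - p_var) ** 2) + 1

# 120/5000 vs 138/5000 conversions: only ~74% significant, so inconclusive.
print(f"{significance(120, 5000, 138, 5000):.0%}")
# Recipients per cell to reliably detect a 15% relative uplift on a 2.4% base:
print(sample_size_per_cell(0.024, 0.15))
```

Notice how quickly the required sample grows as the uplift you want to detect shrinks; that’s why inconclusive 70–80% results are so common on small sends.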

Can’t re-run the same test? Often you won’t be able to repeat the identical test with a larger sample size because the campaign has already gone out. However, you can test the same hypothesis again.

For example, if the test hypothesis was that promoting multiple products in the subject line is better than a single product, then whilst you might not be able to test again with the same products, you can still test the multiple-product hypothesis again.

Though remember, tests aren’t free. Every test takes data, time and resource. So rather than getting hung up on proving this test, consider moving on to something else that could be more valuable.

Review differences in other metrics

If the conversions show no difference, work back up the funnel to see whether there were any differences earlier in the sequence. What about clicks or opens? Or how about unsubscribe and complaint rates?

Whilst these metrics don’t represent the primary objective, there may be differences that can give insight and allow you to learn from the test.

Check clicks for individual links or groups of related links. I often get much more insight from split tests by reviewing how many clicks each call to action received and the differences at individual link level between test cells.

For example, in this email format test there were identical unique click results, but I found there was still something to be learnt by drilling into exactly which links were clicked.
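If your email platform lets you export per-link click counts, a short script makes that drill-down easy. Here’s a hypothetical pandas sketch; the file name, cell labels and column names are assumptions, not a real export format:

```python
# Hypothetical sketch: compare where clicks went in each test cell.
# Assumes an export with one row per (cell, link) pair and a clicks count.
import pandas as pd

clicks = pd.read_csv("link_clicks.csv")    # assumed columns: cell, link, clicks
pivot = clicks.pivot_table(index="link", columns="cell",
                           values="clicks", aggfunc="sum").fillna(0)

# Each link's share of its cell's total clicks, then the shift between cells.
pivot["share_A"] = pivot["A"] / pivot["A"].sum()
pivot["share_B"] = pivot["B"] / pivot["B"].sum()
pivot["shift"] = pivot["share_B"] - pivot["share_A"]

print(pivot.sort_values("shift"))          # which links gained or lost attention
```

Even when total unique clicks match, a breakdown like this can show one variant pulling attention towards a different call to action.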

Create a new hypothesis

The test result shows your original hypothesis was wrong. Now consider why. Something you thought was important simply wasn’t. But why? Form an alternative hypothesis for your next test based on the new information you have.

Ultimately, the change you tested didn’t impact your customer’s decision process. A small modification to your previous test is unlikely to be more successful; challenge yourself to think very differently and consider more significant changes to the psychology of the email.

Finally, if you’re still struggling to work out what your customers are thinking, run a feedback survey. With one client we’re currently running a post-purchase survey that goes out automatically to every new customer to understand why they purchased. The results are proving valuable and we plan to use the insight to guide future testing.