Why do tests fail?
‘We tested that, and it failed.’ This excuse is rampant in the world of A/B testing, but it overlooks the fact that a concept in itself is fundamentally different from the execution of that concept. Ideas often resurface over time, but ones that failed before tend to be labelled as failures, and they never make it out of the gate again.
As Booking.com has more than 10 years of A/B testing experience, much has been tried. We’ve failed many times and won occasionally too, but there’s always room to improve. That’s why my reaction to a dismissive statement about a solid concept is, ‘OK … What exactly did you try, and when?’
This curiosity stems from my experience that there are far more ways to fail than there are to succeed. This sentiment is rather pessimistic – but with good reason. I’ve done enough tests, from concept generation through technical implementation, to grasp the complexity that could lead to a good idea’s demise.
Here are a few things I’ve seen make a good idea fail (you’ll find the full list at netm.ag/fail-286):

- A slight increase in page load time
- Edge-case bugs
- A noisy or low sample size
- Poor translation or copy choices

A seemingly insignificant flaw in your implementation could have an impact just negative enough to counteract any positive effect. So, remember, a negative or insignificant result doesn’t always mean ‘no’, it sometimes means ‘not quite right’.
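To get a feel for why a noisy or low sample size so often sinks a good idea, here is a minimal sketch (not Booking.com’s actual tooling) of the standard two-proportion sample-size calculation. The function and its parameter names are my own illustration; the formula is the textbook approximation for a two-sided z-test.

```python
import math
from statistics import NormalDist

def required_sample_size(p_base, p_variant, alpha=0.05, power=0.8):
    """Approximate visitors needed per arm to detect a conversion
    change from p_base to p_variant with a two-proportion z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_power = z.inv_cdf(power)          # critical value for desired power
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    effect = p_variant - p_base
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)

# A 2% relative lift on a 5% conversion rate (0.050 -> 0.051)
# needs roughly three-quarters of a million visitors per arm:
print(required_sample_size(0.050, 0.051))
```

Run the test too early, and a genuinely good idea will come back ‘insignificant’ simply because the experiment never had the power to see it.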