Why do tests fail?

net magazine – FEATURES – by Erin Weigel, senior designer at Booking.com

‘We tested that, and it failed.’ This excuse is rampant in the world of A/B testing, but it overlooks the fact that a concept is fundamentally different from the execution of that concept. Ideas often resurface over time, yet those that failed before tend to be labelled as failures and never make it out of the gate again.

As Booking.com has more than 10 years of A/B testing experience, much has been tried. We’ve failed many times and won occasionally too, but there’s always room to improve. That’s why my reaction to dismissive statements about a solid concept is, ‘OK … what exactly did you try, and when?’

This curiosity stems from my experience that there are far more ways to fail than there are to succeed. That sentiment is rather pessimistic – but with good reason. I’ve done enough tests, from concept generation through technical implementation, to grasp the complexity that can lead to a good idea’s demise.

Here are a few things I’ve seen make a good idea fail (you’ll find the full list at netm.ag/fail-286):

- Slight increase in page load time
- Edge-case bugs
- Noisy or low sample size
- Poor translation or copy choice

A seemingly insignificant flaw in your implementation could have an impact just negative enough to counteract any positive effect. So remember: a negative or insignificant result doesn’t always mean ‘no’; it sometimes means ‘not quite right’.
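The ‘noisy or low sample size’ failure mode is easy to see with numbers. As a minimal sketch (the conversion figures below are hypothetical, not Booking.com data, and this is a standard pooled two-proportion z-test rather than any particular tool’s method), the very same underlying lift can look like a failure at a small sample size and a clear win at a large one:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-statistic comparing variant B against control A.

    conv_* are conversion counts, n_* are visitor counts.
    |z| > 1.96 is roughly 'significant at the 95% level'.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Identical underlying lift: 5.0% -> 5.5% conversion.
small = two_proportion_z(50, 1_000, 55, 1_000)        # ~0.5: looks like noise
large = two_proportion_z(5_000, 100_000, 5_500, 100_000)  # ~5.0: clearly significant
```

With 1,000 visitors per side, the test reads as ‘failed’; with 100,000 per side, the same effect is unmistakable – which is exactly why an insignificant result sometimes means ‘not enough data’ rather than ‘no’.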
