User experience is more than data collection. It's about understanding the motivation behind user needs and striking a strategic balance between user expectations and business needs.


Joshua Garity ex­plains the easy way to un­der­stand your cus­tomers

User experience is not magic. You don't run a simple test that Becky the marketing intern once read a blog post about and uncover quick-fix solutions that generate huge growth. UX strategy is a science: a science that has been around since long before the first computer blipped into existence and long before UX became a buzzword in the Nineties.

All scientific theories begin as a hypothesis: an assumption of purpose. Why are these events happening? Then you test the hypothesis by collecting data to validate, or invalidate, it. Only once validated does it become a theory.

A theory is a validated explanation of why something is happening. A theory is not based on bias, nor is it based on what the loudest person in the room is saying; it's based on factual data collected through a replicable method. You know. Because science. Without that structure, it's easy to run a test and fall back on confirmation bias, or data manipulation, to get the feedback you want. That's not how this works. We don't control the outcome. We find a means to communicate the complex nuance of user behaviour in a simple way. Sometimes the data proves us wrong and that's okay. The goal isn't to always be right; it's to uncover the facts.

User data so­lu­tions like Google An­a­lyt­ics rely heav­ily on as­sump­tion. You can ex­port records and use a ser­vice like IBM Wat­son to find cor­re­lat­ing trends. How­ever, don’t con­fuse data with fact. Pre­dic­tive mod­el­ling or as­sump­tions are the first step, but they don’t an­swer the golden ques­tion of why. Why a user is mo­ti­vated to take an ac­tion is the cen­tral fo­cus of UX.

This is the inherent problem with user experience. Everyone thinks they have all the answers. UX then becomes guided by perception bias.

Think of it this way. The sales team thinks they know what cus­tomers want to buy and the mar­ket­ing team thinks they know how to con­vince cus­tomers they want it. Man­age­ment has an ap­proved bud­get based on what they as­sumed the teams would need a year ago and it likely didn’t in­clude bud­get for UX re­search. Sound fa­mil­iar?

Each or­gan­i­sa­tion, de­part­ment or em­ployee has their own per­spec­tive on what should be done based on their own ex­pe­ri­ence with cus­tomers. The prob­lem is they’re all right. The big­ger prob­lem is that they’re all wrong too.

Organisations that fall into this perception trap often find themselves avoiding the conflict of a heated debate and trying to serve everyone. The problem with trying to serve everyone is that you're not serving anyone.

The job of user ex­pe­ri­ence is to re­move that bias and help the group to un­der­stand a big­ger pic­ture: the needs and ex­pec­ta­tions of the cus­tomer. So how can we re­frame the con­ver­sa­tion and make it less about opin­ion?

Let data do the talking. The process of validating different data can provide perspectives that the vast majority of people misunderstand. It does not need to be devoid of emotion, nor does it need to focus strictly on usability. What it needs to have is a purpose. What kinds of data are you collecting and why? There are two core types of data to collect:

1. Qualitative: emotional feedback. Qualitative research gathers non-numerical feedback from participants. Think first reactions or personal, opinion-based feedback: what you liked and why, and descriptions instead of numbers. Qualitative = quality.

2. Quantitative: scientific data. Quantitative research gathers numerical feedback. Perform this action and rate the ease of completing it on a scale of one to ten. This is the basis for systems like Net Promoter Score (NPS); a minimal calculation is sketched below. Quantitative = quantity.

What you need to analyse should determine what data you need to collect. For example, if you're tasked with creating a baseline for customer satisfaction on member sign-up or checkout in a shopping cart, you're going to need quantitative data. This lets you collect unbiased numbers that show a clear progression from where you began to where you ended months or years later. This is crucial in showing the importance of investing in UX within an organisation.
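
To make the quantitative side concrete, here is a minimal sketch in Python of how an NPS could be computed from raw survey responses. NPS conventionally uses a 0-10 scale (promoters score 9-10, detractors 0-6); the sample responses below are hypothetical.

```python
# Minimal NPS sketch: percentage of promoters (9-10) minus
# percentage of detractors (0-6). Survey data is hypothetical.

def net_promoter_score(scores):
    """Return the NPS for a list of 0-10 survey scores."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical responses to "How likely are you to recommend us?"
responses = [10, 9, 8, 7, 10, 6, 9, 3, 8, 10]
print(f"NPS: {net_promoter_score(responses):+.0f}")  # NPS: +30
```

Tracked at regular intervals, a number like this gives you exactly the kind of unbiased baseline described above.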

Many or­gan­i­sa­tions will see the ini­tial im­prove­ment and not un­der­stand the value in retest­ing.

Seeing an increase in sign-ups, revenue or a drop in support requests is fantastic, but there are many variables that could influence results. Attribution is your friend. It's also the friend of the departments you'll be working with, because it shows explicitly that the testing performed and the subsequent changes were validated.

This goes back to the scientific validation we discussed earlier. Collect the data, make the change and validate that the change had the effect you predicted. If it didn't, create a hypothesis as to why and begin again. The trick is to always try to prove something wrong.
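
As one concrete way to perform that validation step, here is a minimal sketch in Python of a two-proportion z-test: did a redesign actually move the sign-up rate, or is the difference within the range of chance? The test choice and all the counts are hypothetical illustrations, not a method prescribed in the article.

```python
# Minimal sketch: two-proportion z-test comparing conversion
# rates before and after a change. All numbers are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for two conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 480/12,000 sign-ups before, 552/12,000 after
z, p = two_proportion_z_test(480, 12_000, 552, 12_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests the change,
                                    # not chance, moved the number
```

A low p-value doesn't attribute the improvement by itself, but combined with attribution data it strengthens the case that the tested change, not an outside variable, moved the number.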

If you're redesigning a consumer-facing website without a long-term UX plan, it may be okay to focus on qualitative feedback: descriptions and emotions. This works well for design-centric UX like landing pages for marketing or blogs. It does not work well for long-term strategy, because trends are fluid. What works today for a tested demographic may not work well next year, so be careful.

Qualitative feedback is harder to distil into strategy because, in most cases, what users say they want and what they actually want are two completely different things. It requires a lot of foresight to know when to peel back the layers of feedback and dig deeper with follow-up questions or facilitation.

Without the context of motivation, you become trapped in a feedback loop. This tends to lead down the perception trap again. If you're stuck without direction you will try to find meaning in the data by applying bias. Once that happens, you focus on the wrong meaning and the data becomes useless.

How focusing on the wrong meaning can derail a project

Let's take a look at another example: tenants in a New York office building complained because, in their opinion, there was too much time between pressing the button and the moment the elevator would arrive, ding and open. Several tenants threatened to move out. They wanted a faster elevator to solve the problem. This is qualitative feedback: emotional responses. Management requested a feasibility study to determine cost and effectiveness, which means hard numbers and quantitative data.

A different perspective came from someone in the psychology field, who focused on the tenants' core needs by digging deeper than their initial feedback. They looked past the numeric feedback of the financial study, because it was not cost-effective to replace the elevator and rebuild the structure to accommodate the tenants' suggestions.

The psychologist determined that finding a way to occupy the tenants' attention while they waited would solve the real problem.
