Hilary Stephenson discusses how digital design may not be as ethical as we think

net magazine. Stephenson is managing director at Sigma, which specialises in user experience consultancy, digital design and development.


Design for digital platforms has the potential to influence the lives of billions of people around the globe. It is, therefore, our ethical imperative to ensure that digital products and services are designed to be as inclusive as possible, so that they can be used easily by everyone. However, despite being well-intentioned, organisations of all shapes, sizes and sectors currently fall short in this regard.

Given our growing reliance on tech and digital services, a world in which we do not consider the ethical consequences of our design decisions is a world in which many people may be discriminated against, excluded and potentially even harmed by the technology they depend on.

As the saying goes: “the road to hell is paved with good intentions”. Unfortunately, many well-intentioned designers and businesses are trying to do the right thing but letting bad design and development approaches get in the way. Take mindfulness apps as an everyday example. Designed to promote calm and emotional wellbeing, these apps send the user periodic notifications to encourage them to meditate. But, rather than being a helpful reminder, these notifications can cause feelings of stress, anxiety and shame in the user – the exact opposite of their intended effect.

A more disturbing example of unintentional unethical design is machine bias. This occurs when machine learning processes produce errors that bear worrying similarities to human cognitive biases. While the study of machine bias is still in its infancy, there are already several prominent examples. Try entering the term ‘professional hair’ into Google Images and you’ll see the results show predominantly Caucasian women. Now try entering the term ‘unprofessional hair’ and see what comes up. Worrying, isn’t it? Perhaps even more concerning is that this bias also extends to the software used to profile and predict future criminals. This is technology that is supposed to make us safer but in fact furthers negative racial stereotypes. Investigative journalism newsroom ProPublica ran a Pulitzer Prize-nominated study on this phenomenon, which found that the algorithms were not only spectacularly unreliable but also biased in favour of white defendants, falsely flagging black defendants as future criminals at almost twice the rate of white defendants.
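To make that kind of disparity concrete, here is a minimal Python sketch of how a false positive rate can be measured per group – the metric at the heart of ProPublica’s analysis. The records below are invented for illustration only; they are not ProPublica’s data, and real risk-scoring systems are far more complex.

```python
# Illustrative sketch: measuring a false-positive-rate disparity between
# two groups of defendants. All numbers here are hypothetical.

def false_positive_rate(records):
    """FPR = defendants flagged high-risk who did NOT reoffend,
    divided by all defendants who did not reoffend."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    falsely_flagged = [r for r in non_reoffenders if r["flagged_high_risk"]]
    return len(falsely_flagged) / len(non_reoffenders)

# Hypothetical records: 100 non-reoffending defendants per group,
# with group A flagged high-risk far more often than group B.
group_a = [{"flagged_high_risk": True,  "reoffended": False}] * 45 + \
          [{"flagged_high_risk": False, "reoffended": False}] * 55
group_b = [{"flagged_high_risk": True,  "reoffended": False}] * 23 + \
          [{"flagged_high_risk": False, "reoffended": False}] * 77

print(false_positive_rate(group_a))  # 0.45
print(false_positive_rate(group_b))  # 0.23
```

With these invented figures, group A’s false positive rate is almost twice group B’s – the shape of disparity ProPublica reported – even though neither group of defendants actually reoffended more.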

Unfortunately, not all bad design decisions occur by accident. Design can be an incredibly powerful means to influence user behaviour and many brands have woken up to this fact.

A worrying trend we’re seeing from some of the bigger brands is dark UX patterns. These are (for lack of a better term) ‘psychological tricks’ that brands deploy to encourage users to give up their money or data, or simply to stay on a site longer than they otherwise would.

This issue is prevalent across all sectors, but particularly in retail, leisure and travel, where there is a clear financial incentive to keep users engaged and steer them towards purchasing certain products. To use a common example, Amazon tempts customers into signing up for a free Prime trial by using a bright yellow ‘FREE one-day delivery’ button, while greying out the simple ‘Proceed to checkout’ button. Customers’ eyes are drawn to the colourful option, which doesn’t clearly stipulate that a subscription charge will be taken monthly as soon as the 30-day trial period is over.

To change the pattern of unethical design, we must look beyond simple inputs, processes and outputs. It’s time for us to start thinking more long-term.

Every output has an outcome, and that outcome will have a longer-term impact we need to start taking into consideration if we are to become ethical, human-centric designers and ensure that nobody is left behind or harmed by the technology we design and build.

Remember, harm does not have to be something you do intentionally – sometimes it’s just the absence of good. If we have the opportunity to design more ethically and opt not to take it, we are contributing to harm. It’s time for this to change.
