Friend that uses you


frowns, recent artificial intelligence developments are analysing subtle, nearly impossible-to-suppress micro-expressions that last only a fraction of a second. Because micro-expressions can reveal emotions that people may be trying to hide, recognising them can be advantageous for intelligence agencies, providing clues that help predict dangerous situations. Or the same capability could be used by Facebook and nefarious governments to manipulate millions of people.

Regardless of the application, the result would be a total loss of autonomy. People's philosophical views on the degree of individual freedom or agency might not align, but surely none of us wishes to be manipulated.

Facebook's rise to the top was by no means accidental. Its unprecedented size, demography and the ease with which we can transmit information allow the platform to run social experiments on its users. Over the years, the social network has refined its choice architecture to hijack users' psychological vulnerabilities with deceptive design and psychological nudges.

"Dark" design patterns are crafted to steer users away from data protection and towards spending more time online. The colour, size and wording of interfaces all contribute to giving users the illusion of control. Accepting data collection is a single click, facilitated by bright colours, big buttons and simple text. Managing your data, in contrast, is a multi-step process designed to overwhelm users with granular choices and wording that suggests a loss of account functionality, or even deletion, if they tamper with the default settings. The phenomenon even has a name: the "control paradox".
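To make that asymmetry concrete, here is a minimal, hypothetical sketch of the two consent flows described above. The screens, step counts and wording are invented for illustration only; they are not Facebook's actual interface, just a toy model of "dark" choice architecture.

```python
# A minimal, hypothetical model of the asymmetry between consenting and
# reclaiming control. Every label and step below is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ConsentFlow:
    label: str
    steps: list = field(default_factory=list)  # each step is a screen the user must pass

accept_all = ConsentFlow(
    label="Accept and continue",          # bright colour, big button, simple text
    steps=["Tap 'Accept and continue'"],  # one click and data collection is on
)

manage_data = ConsentFlow(
    label="Manage data settings",         # low-contrast link, smaller text
    steps=[
        "Open 'Settings'",
        "Open 'Your information'",
        "Review each data category separately",
        "Read the warning: 'Some features may stop working'",  # loss-framed wording
        "Confirm every change individually",
    ],
)

# The "dark" asymmetry: consenting is one step, reclaiming control is many.
for flow in (accept_all, manage_data):
    print(f"{flow.label}: {len(flow.steps)} step(s)")
```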

Here's Facebook's pitch for its intrusive face-recognition feature: it "lets us know when you're in other photos or videos so that we can create a better experience". By framing the use of face recognition in a solely positive manner, deliberately leaving out any possible negative consequences, Facebook nudged users towards enabling the option without fully informing them.

Dark patterns are described in the Deceived by Design report as "ethically problematic, because they mislead users into making choices that are not in their interest and deprive them of their agency".

The control paradox is by no means the only psychological quirk for the social network to exploit. The fields of behavioural economics and psychology describe how users' decision-making and behaviour can be influenced by appealing to their psychological biases. Studies have found that individuals overestimate their ability to make unadulterated decisions.

In reality, individuals are in constant flux between states of rationality and cognitive fallibility. Yet most of us believe we are more rational than the average person, fittingly illustrating the Dunning-Kruger effect: most people overestimate their abilities.

For example, individuals form temporary preferences for small rewards that arrive sooner over more substantial long-term gains, and prefer choices and information that confirm their pre-existing beliefs. Facebook exploits these and other human tendencies and triggers, such as social approval, the need to belong, the fear of missing out, intermittent variable rewards, reciprocal expectations and other biological vulnerabilities, to keep users hooked on the platform.

Sandy Parakilas, a former Facebook operations manager, says the company is generating economic value by using data about you "to predict how you're going to act and manipulate you".

Jennifer King, the director of consumer privacy at the Center for Internet and Society at Stanford Law School, expressed a similar view. "As long as Facebook keeps collecting personal information, we should be wary that it could be used for purposes more insidious than targeted advertising, including swaying elections or manipulating users' emotions," she told The New York Times.

If the neo-Luddite tone of this article appears simplistic, you're right.

Facebook's algorithms are optimised to exploit what traditional media has done for centuries. Its ad-supported business model competes for our finite attention by optimising for negative emotions such as outrage and hate in a zero-sum race to the bottom. As the saying goes: "If it bleeds, it leads."

Even if users are interested in a broad range of news from different political perspectives, Facebook's algorithms will favour articles that confirm political prejudices. The fact of the matter is that negative emotions are more accessible and therefore more cost-effective.

How, then, is Facebook any different from traditional media or other technology companies?

Besides the business model that underpins the company's every decision, it is also the most powerful communications and media company in the world by every available measure. Robyn Caplan, a research analyst at Data & Society, points out that Facebook has no rival in size, popularity and functionality. When Facebook introduced its News Feed in 2006, it blindsided its users by shifting from connecting friends to controlling what friends see.

Fast-forward 10 years and it is more evident than ever: News Feed's filtered stream of social content has captured the market and emerged as the most significant distributor of news in the world.

Arguably, none of this is unusual. Traditional mainstream media also has considerable influence; however, the differences are significant. In addition to being constrained by rigorous industry rules and norms, competition in the mainstream press allows content to be comparatively assessed across different news outlets for possible bias. Potential prejudice is kept in check by regulations that limit the power, reach and ownership of any single outlet, thus safeguarding the diversity of content.

The personalisation of Facebook's News Feed makes these comparative studies nearly impossible. Even if you could establish an information pattern for Facebook's users, what would you compare it to?

Zuckerberg's puzzling testimony before the United States House of Representatives and Senate attempted to dismiss the monopoly label. He said: "Consumers have lots of choices over how they spend their time."

By this logic, reporter Paul Blumenthal notes that "Facebook can never be a monopoly in Zuckerberg's eyes because its competition is every other form of human activity" and, by this measure, its "biggest competitors are work and sleep".

Technology companies have managed to convince people that algorithms produce some kind of data-mined objective truth unadulterated by human fallibility. This is not the case; humans are involved in every step of the process. From providing the initial training data to designing the models and analysing and tweaking the results, all of these boundary conditions are set by humans, whose conscious and unconscious biases may be expressed in the results.

Possible malfeasance or corruption aside, it is difficult for the "many good people working there" at Facebook to create a platform that resembles neutrality. Algorithms are trained on our past behaviour to predict our future. By definition, the past is not the future. Societies and individuals are constantly changing, so training data needs to represent the current population and account for gradual social drift. We have to acknowledge that even well-designed algorithms can have profound implications for society. An algorithm designed to connect like-minded people will at the same time isolate them.
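As a rough illustration of that feedback loop, here is a minimal, hypothetical sketch of a feed that ranks posts purely on engagement predicted from a user's past clicks. The topics, post titles and numbers are invented; this is a toy model of the dynamic described above, not Facebook's actual ranking system.

```python
# A toy, hypothetical feed ranker: score each candidate post by how often the
# user already engaged with that topic. Past behaviour predicts (and narrows)
# the future feed.
from collections import Counter

def rank_feed(candidate_posts, click_history):
    """Order posts by how often the user has already engaged with each topic."""
    past_engagement = Counter(post["topic"] for post in click_history)
    return sorted(
        candidate_posts,
        key=lambda post: past_engagement[post["topic"]],
        reverse=True,
    )

# Invented history: a user who mostly clicked on outrage about the other party.
history = [{"topic": "party_A_outrage"}] * 8 + [{"topic": "local_news"}] * 2
candidates = [
    {"topic": "party_A_outrage", "title": "Shocking claim about Party B"},
    {"topic": "party_B_view", "title": "Party B explains its policy"},
    {"topic": "local_news", "title": "Water supply update"},
]

for post in rank_feed(candidates, history):
    print(post["title"])
# The confirming, outrage-driven post ranks first; the challenging view ranks last.
# Each new click on the top result feeds back into the history, tightening the loop.
```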

The reality is that the majority of people are far less likely to engage with viewpoints that challenge their preconceived views, even in the absence of social media. If the polarisation of communities is a product of our biology, then perhaps social media companies' neutral-platform defence, that they merely track user preferences and connect like-minded people, is credible.

But algorithms are not passively monitoring users' preferences; they actively steer behaviour and thoughts. Measured online conversations are pushed to the fringes and drowned out by radicalised views fomented by unsubstantiated rumours, mistrust and paranoia. Echo chambers are not the result of free association based on the false premise of platform neutrality; they are the result of optimising outrage for profit. Facebook knows that outraged users are engaged users. Digital misinformation has become so pervasive online that the World Economic Forum has classified it as one of the biggest threats to our society.

Unfortunately, because of the mismatch between the speed of technological development and the gradual grind of accountability, whether in morals, ethics or laws, technology companies can exploit the technology landscape unchecked. Regulation cannot keep up with the speed of invention and, when it does catch up, companies find ways to circumvent new laws.

A case in point: it took years for governments and the public to begin to understand that Facebook was mining vast datasets of user information. Facebook's motto, "move fast and break things", attests to an attitude of arrogant carelessness. It is quite happy to ask for forgiveness rather than permission.

Facebook has indicated some willingness to change by adjusting the News Feed algorithm to address these issues and prioritise posts from friends and family over viral videos, news and other content. Zuckerberg announced a significant overhaul of the algorithm that would prioritise "meaningful social interactions" over "relevant content", after pledging to spend 2018 "making sure that time spent on Facebook is time well spent".

Perhaps these changes should carry more weight, or perhaps they deserve only the equivalent blip of attention that Zuckerberg has given them. I'm not buying what he is selling. Sam Lester, a consumer privacy fellow at the Electronic Privacy Information Center, points out that we are "looking to the company that caused these problems to fix them".

Facebook cannot close the Pandora's box it opened a decade ago when it allowed external apps to collect user data indiscriminately. The public may never know the extent to which these companies have copied and shared their personal information with potentially nefarious and destructive forces. We should not conflate our understanding of the natural world with its digital counterpart. Deleting information online is not equivalent to burning a note.

Our public understanding of humanity's potential for change has produced laws that honour redemption in the real world by clearing a person's record after a fixed period of time. The right to be forgotten, however, does not extend to decentralised digital information that can easily be shared and stored on millions of devices. Our choices today will affect the rest of our lives and those of the next generation.

We are like naive children hiding behind our hands, and the grown-ups are perfectly content to play along.

Social media itself isn't going away. It has become an integral part of our lives, satisfying a basic human need to connect and share information. Yes, we have come to depend on social networks, but should we accept our virtual makeshift community? Are we destined to wander among the virtual identities of friends and pseudo-friends, each projecting an idealised version of themselves, making us feel inadequate and mediocre? The already muddied water between fiction and reality will become even murkier when artificial intelligence-enhanced video and audio forgeries become commonplace.

The human mind is incredibly susceptible to forming false memories. This tendency will only be exacerbated by artificial intelligence-enhanced forgeries on the internet, where false ideas spread like viruses among like-minded people.

A big part of the danger of this technology is that, unlike older photo and video editing techniques, it will be more widely accessible to people without great technical skill. "I'm more worried about what this does to authentic content," said Hany Farid, a professor of computer science at Dartmouth College. "Think about Donald Trump. If that audio recording of him saying he grabbed a woman was released today, he would have plausible deniability."

You should delete Facebook for the reasons already mentioned, but you probably won't because of them. We have already run this experiment. No sooner had #DeleteFacebook gone viral last year than droves of users signed back up. Users have come to rely on the platform to socialise, organise, procrastinate and hide behind virtual identities.

Conceivably, the most common reason individuals joined Facebook in the first place is to connect with friends. Without any social engineering on its part, initially at least, Facebook was able to convince users to share personal information by connecting friends who trust each other.

Sharing information online is not novel, but creating an environment in which to share personally identifiable information is. Perhaps users do not inherently trust Facebook, but they inherently trust their friends, and by proximity conflate the two. If the word "trust" raises more questions than answers, substitute it with familiarity. You joined Facebook because it feels familiar, and you stay, or come back, for the same reason.

Humans are intrinsically social animals and seek out intimacy, whether we want to or not. "A considerable part of Facebook's appeal stems from its miraculous fusion of distance with intimacy, or the illusion of distance with the illusion of intimacy," Stephen Marche wrote in The Atlantic magazine.

Antonio García Martínez, an author and tech engineer who formerly worked at Facebook, elaborates on the illusion, describing Facebook as a cheap digital knock-off: Facebook is to real community what porn is to real sex. "Unfortunately, in both instances use of the simulacrum fries your brain in ways that prevent you from ever experiencing the real version again."

Individuals are always going to be at a disadvantage, given the information asymmetry that exists between Facebook and its users. Tim Wu, the author of The Attention Merchants, outlines a potential path forward: "What we most need now is a new generation of social media platforms that are fundamentally different in their incentives and dedication to protecting user data."

The French cultural theorist Paul Virilio, best known for his writings about technology, appropriately stated that "the invention of the ship was also the invention of the shipwreck", describing the inevitable cost associated with progress.

His eloquent account of the casualties of progress permeates almost every aspect of Facebook. Facebook, and social networks like it, will indeed provide a makeshift community for those whose worlds are being destroyed around them, and at the same time provide a megaphone for the destroyers.

The suggestion is that we are dealing with an immovable force, and that seems true considering Facebook's social media monopoly and its power to influence billions of people daily. Believing that we are somehow immune to the platform's psychological nudges is naive, and the sooner we accept its absolute power, the sooner we can choose to move on.

Facebook may be one of the first social media companies to have emerged alongside the internet; it need not be the last. Facebook is the sum of its users; you are Facebook, and you can also choose not to be. The critical mass of users that ensured the rapid network effect can be an equally powerful driving force in the opposite direction.

Baratunde Thurston, an adviser at Data & Society, says: "Since companies value us collectively, we must restore balance with a collective response that is based on the view that we're in this together; that our rights and responsibilities are shared."

Let's move fast and break Facebook.
Pieter Henning is an artist and designer who lives and works in Cape Town. Follow him on Twitter @P_d_henning

Huge concern: Facebook, not surprisingly, but also not exclusively, already works with third-party data brokers to merge users' online activity and profiles with offline behaviour. Photo: Frank Hoermann/AFP

No likes: The group Raging Grannies called for better consumer protection and online privacy in the wake of Cambridge Analytica's access to users' data. Photo: Justin Sullivan/Getty Images/AFP
