Something’s quite wrong on the Internet

The Timaru Herald - TECHNOLOGY & SCIENCE - CHRISTINE EMBA

"Something is wrong on the internet," declares an essay trending in tech circles. But the issue isn’t Russian ads or Twitter harassers. It’s children’s videos.

The piece, by tech writer James Bridle, was published on the heels of a report from the New York Times that described disquieting problems with the popular YouTube Kids app. Parents have been handing their children an iPad to watch videos of Peppa Pig or Elsa from "Frozen," only for the supposedly family-friendly platform to offer up some disturbing versions of the same. In clips camouflaged among more benign videos, Peppa drinks bleach instead of naming vegetables. Elsa might appear as a gore-covered zombie or even in a sexually compromising position with Spider-Man.

The phenomenon is alarming, to say the least, and YouTube has said that it’s in the process of implementing new filtering methods. But the source of the problem will remain. In fact, it’s the site’s most important tool - and increasingly, ours.

YouTube suggests search results and "up next" videos using proprietary algorithms: computer programs that, based on a particular set of guidelines and trained on vast sets of user data, determine what content to recommend or to hide from a particular user. They work well enough - the company claims that in the past 30 days, only 0.005 percent of YouTube Kids videos have been flagged as inappropriate. But as these latest reports show, no piece of code is perfect.
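To make the idea concrete - and only as a hypothetical sketch, since YouTube’s actual system is proprietary and unpublished - a recommender of this kind can be imagined as two steps: screen out titles that trip a blocklist, then rank whatever survives by popularity. The toy Python below uses made-up video data and an assumed blocklist; it is not anyone’s real code, but it shows how a clip dressed up with an innocuous title can sail past exactly this sort of filter.

```python
# Hypothetical, illustrative sketch only - not YouTube's algorithm.
# A naive keyword blocklist plus a popularity ranking can be evaded by
# a video whose title mimics legitimate children's content.

BLOCKLIST = {"bleach", "zombie", "gore"}  # assumed example terms


def is_family_friendly(title: str) -> bool:
    """Pass a video unless its title contains a blocked word."""
    words = set(title.lower().split())
    return words.isdisjoint(BLOCKLIST)


def rank(videos: list[dict]) -> list[dict]:
    """Recommend 'family friendly' videos, most-watched first."""
    safe = [v for v in videos if is_family_friendly(v["title"])]
    return sorted(safe, key=lambda v: v["views"], reverse=True)


videos = [
    {"title": "Peppa Pig learns vegetables", "views": 900_000},
    {"title": "Peppa Pig funny kitchen episode", "views": 1_200_000},  # disguised clone
    {"title": "Elsa zombie makeover", "views": 800_000},
]

for v in rank(videos):
    print(v["title"])
# The disguised clone ranks first: nothing in its title trips the filter.
```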

Similar algorithms serve as the engine behind almost all of the most successful tech companies, powering everything from Facebook’s news feed to Google’s search results (Google, incidentally, is the parent company of YouTube). Naturally, these mysterious tools have become convenient scapegoats for many of the content problems we face today, from bizarre videos aimed at vulnerable children to misinformation in news feeds during the 2016 election.

Clearly, Silicon Valley has some work to do. But in addition to demanding more accountability from companies after their tools go awry, we should demand more responsibility from ourselves. We need to think about whether we want to reduce our own reliance on corporate algorithms, and if so, how.

As the Internet has become an ever-larger part of our lives, we’ve come to rely on these proprietary bits of code as shortcuts for organizing the world. Algorithms sort through information and make decisions for us when we don’t have the capability (or perhaps just the energy) to do it ourselves. Need to distract the kids? Send ’em to the wildly educational world of YouTube. The app will pick out the safe videos - probably. The mechanism may be skewed by profit motives, biased by its data sets or just generally inscrutable, but is that any reason to give it up?

Why aren’t we more alarmed by this? Maybe because we’ve always used decision-making shortcuts, and they’ve always had flaws. How would we have chosen a children’s video before YouTube? Perhaps we’d act on a recommendation from a librarian, or a peer group, or even a National Legion of Decency list. These sources, too, were insular, subject to personal biases and limited in scope.

Still, there were meaningful differences between those old-school shortcuts and today’s machine-learning algorithms. The former had at least some oversight and regulation; it’s unlikely that a public library would lend out nursery rhyme snuff films. Shared community values made it clear which choices were being favored, and why. And human judgment - today almost quaint - occasionally allowed for serendipity in a positive direction. One might come across a resource not carefully calibrated to agree only with one’s stated preferences, and be the better for it.

Is there any way to steer our current algorithmic regime in that more human direction? It’s not clear how. Some lawmakers have suggested that companies release their algorithms for public review; others propose regulating corporate algorithms. For now, the lesson for everyday users may just be an urgent need for increased awareness, a reminder that maybe we shouldn’t place all of our trust in a decision-making function that we don’t fully understand. Frightening children’s videos are, among other things, a wake-up call. If there’s something wrong on the Internet, we should do more than just watch.
