‘Deepfakes’ are reasons to despair about digital future

The Progress-Index - OPINION

A despairing prediction for the digital future came from an unlikely source recently. Speaking of “deepfakes,” or media manipulated through artificial intelligence, the actress Scarlett Johansson told The Washington Post that “the internet is a vast wormhole of darkness that eats itself.”

A stark view, no doubt, but when it comes to deepfakes, it may not be entirely unmerited. The ability to use machine learning to simulate an individual saying or doing almost anything poses personal and political risks that societies around the world are ill-equipped to guard against.

Johansson’s comments appeared in a report in The Post about how individuals’ faces, and celebrities’ faces in particular, are grafted onto pornographic videos and passed around the web - sometimes to blackmail, sometimes just to humiliate. But deepfakes could also have applications in information warfare. A foreign adversary hoping to influence an election could plant a doctored clip of a politician committing a gaffe. Convincingly edited video could confuse military officers in the field. The ensuing uncertainty could also be exploited to undermine journalistic credibility; tomorrow’s deepfake may be today’s “fake news.”

Perhaps the scariest part of these Frankenstein-ish creations is how easy they are to make, especially when the software for a specific application - such as pornography - is publicly available. A layman can simply plug sufficient photos or footage into prewritten code and produce a lifelike lie about his or her subject. Deepfakery is democratizing, and malicious actors, however unsophisticated, are increasingly able to harness it.

Deepfakes are also inherently hard to detect. The technology used to create them is trained in part against the same algorithms that distinguish fake content from real - so any strides in ferreting out false content will soon be weaponized to make that content more convincing. Online platforms thus have their police work cut out for them: investment in staying one step ahead, along with algorithmic tweaks to demote untrustworthy sources and de-emphasize virality, will always be needed. Some suggest holding sites liable for the damages caused by deepfakes if companies do too little to remove dangerous content.

Like technical solutions, policy answers to the deepfake problem are elusive, but steps can be taken. Many harmful deepfakes are already illegal under copyright, defamation and other laws, but Congress should tweak existing fraud-related regulations to cover the technology explicitly - amping up penalties and bringing federal resources, as well as public attention, to bear on a devilish problem. Humans have so far hardly had to think about what happens when someone else uses our faces. To avoid that wormhole of darkness, we will have to start thinking hard.
