HOW DANGEROUS ARE DEEPFAKES?

Manipulated videos hold the potential to start wars and swing elections, write Ellie Zolfagharifard and Laurence Dodds in San Francisco

President Donald Trump straightens his tie, glares into the camera and takes a deep breath. “We will strike back against Russia with our full military force,” he says slowly, puffing out his chest. “As of today, we are at war.” Almost instantly, the video is shared on thousands of Twitter feeds, WhatsApp groups and Facebook pages, causing mass panic and confusion.

Within minutes, it is outed as a deepfake: an AI-generated clip created by a group of hackers who have also infiltrated America’s power networks to cause chaos in schools, hospitals and on roads. But it’s too late. By now, millions have heard the news that Trump is waging war following attacks on US critical infrastructure.

This may seem like an outlandish scenario, but it’s what experts fear could happen if the technology behind deepfakes is used for nefarious purposes.

“A video like that, even if it was fake, could go viral within seconds,” says Nina Schick, author of Deep Fakes and the Infocalypse. “Such a video can do an immense amount of damage. There’s no question about it. If Russia wanted to create a convincing deepfake video of Trump saying he’s at war, they could do it right now.”

Until recently, the manipulation of digital media to show deepfakes was mostly confined to academic research labs and to the ever-innovative world of online pornography. There were also eye-catching stunts designed to demonstrate the potential for harm, such as Get Out director Jordan Peele’s memorable 2018 imitation of Barack Obama. Back then, the risk was only theoretical. Now, however, deepfakes are loose – and already creating chaos.

While they have yet to start a global conflict, AI-generated videos, faces and voices have caused political scandal in Malaysia, swindled large sums of money from corporate executives and helped trigger an attempted coup in Gabon. “Technology has allowed for information operations to become far more potent,” says Schick. “Until now, the barrier to entry when it came to manipulation in film has been relatively high. AI has changed that.”

In the first six months of this year, deepfake detection firm Deeptrace Labs said the number of manipulated videos it was spotting in the wild had doubled. Only last month, Facebook announced that it had shut down a new attempt by Russia’s infamous Internet Research Agency to meddle in US and UK politics via a radical news website called PeaceData. Its “editors” appeared to be static deepfakes that used AI-generated photos.

“AI-generated faces are getting more common in disinformation operations, and I suspect they’ll keep on coming,” says Ben Nimmo, head of investigations at Graphika, who helped uncover the Russian network.

Deepfake pictures are even easier to create than videos; Daily Telegraph readers can make their own at ThisPersonDoesNotExist.com. Yet they are still effective (and creepy) because, unlike stock photos, they have no prior existence, making them just as unique as any human face.
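For the technically curious, the site has historically served a freshly generated face at its root URL on each request. Below is a minimal Python sketch of fetching one, assuming that behaviour still holds; the filename and User-Agent string are arbitrary choices, not anything the site requires.

```python
# Sketch only: thispersondoesnotexist.com has historically returned a new
# AI-generated face on every request to its root URL. Assumes that still holds.
import requests

resp = requests.get(
    "https://thispersondoesnotexist.com",
    headers={"User-Agent": "curious-reader/1.0"},  # arbitrary; some servers reject empty agents
    timeout=10,
)
resp.raise_for_status()

with open("not_a_real_person.jpg", "wb") as f:  # arbitrary filename
    f.write(resp.content)

print(f"saved {len(resp.content)} bytes of a face that belongs to nobody")
```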

Similar photos have been used by a fake LinkedIn profile that befriended Washington DC insiders, potentially as part of a foreign spying campaign, and by a network of fake Facebook accounts allegedly run by the Epoch Times, an online news company with links to the Chinese Falun Gong sect. Meanwhile, deepfakes are prospering as commercial tools, with several firms hawking binders full of AI-generated faces that can add instant racial or gender diversity to corporate brochures and adverts.

Strangest of all, they have become a common joke format for Generation Z. Frivolous deepfakes have exploded on TikTok, letting video creators augment their impressions of Jim Carrey or Al Pacino’s performance in Scarface. “For as little as $20 [£15], you can use an online marketplace to get somebody to make any deepfake video for you, and we’re starting to see more YouTubers who are using software that’s freely available and open source to make their own manipulated videos,” says Schick. Last month, Philip Tully, a data scientist at security company FireEye, generated a hoax Tom Hanks image that looked almost exactly like the real thing. All it took was a few hundred images of Hanks and less than £75 spent on online face-generation software.

Experts describe such efforts as “cheap fakes”: media that has been altered without advanced AI. “They can still be harmful,” says Victor Riparbelli, chief executive of London-based Synthesia, one of the world’s most advanced deepfake companies. His team is working with businesses such as WPP to create corporate training videos for their global branches. The videos use deepfake technology to allow the presenter to speak in any language and address the viewer by name.

Anyone can try the technology for themselves by typing a script for a virtual presenter to read. The results can be unnerving. Riparbelli says his main competitors are major tech companies. TikTok’s parent company, ByteDance, for instance, has developed its own unreleased deepfake generator called Face Swap, traces of which remained in TikTok’s code at the start of 2020. The likes of Snapchat have created similar features, albeit more limited. Start-ups, such as Ukraine’s RefaceAI, are catching up. Its Reface app uses generative adversarial networks, which pit two neural networks against each other in a process that endlessly corrects and refines itself.
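For readers wondering what pitting two networks against each other actually involves, the sketch below shows the adversarial loop at toy scale in PyTorch: a generator learns to mimic a simple 2-D point cloud while a discriminator learns to catch its fakes. It illustrates the general technique only; it is not RefaceAI’s system, and real face-swapping models are vastly larger.

```python
# Toy-scale sketch of a generative adversarial network (GAN): the generator
# tries to fool the discriminator; the discriminator tries to spot fakes.
# Trains on 2-D Gaussian points, not faces, so it runs in seconds on a CPU.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])  # the "real" data
    fake = generator(torch.randn(64, 8))  # generator maps noise to samples

    # Discriminator step: push real samples towards label 1, fakes towards 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: adjust weights so the discriminator calls fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    # The fake mean should drift towards the real mean of roughly (2.0, -1.0).
    print("fake sample mean:", generator(torch.randn(1000, 8)).mean(dim=0))
```

The endless self-correction Schick describes is exactly this loop: each side’s improvement becomes the other side’s training signal.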

“It’s naive to think that such technologies by private companies won’t be used for malign purposes,” says Schick. “It can be used for good, such as in commercial applications, but it absolutely will be weaponised.”

Riparbelli says deepfakes will inevitably fall into the hands of criminals, but fully realistic ones are still a long way off – and that lag may be one way to fight their rise.

“There’s quite a lot of technical barriers to change what someone says in the video. One is the voice; cloning it is still really, really difficult to do. If I change the speech in a video that’s already been recorded, the body language is going to be out of touch, the head movements are going to be out of touch.”

Several tools have been developed to pick up these quirks ahead of the 2020 presidential election. Microsoft, for instance, recently announced a system that analyses videos and photos and provides a score indicating the chance that they have been manipulated. Adobe has also developed a tool that allows creators to attach attribution data to content to prove it isn’t fake. It may not be the realism of deepfakes, however, but people’s propensity to believe what they want to believe that poses the greater problem.
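Adobe’s tool is not public code, but the stdlib-only Python sketch below illustrates the underlying attribution idea: bind a signature to a file’s hash at creation time, so any later alteration breaks verification. Real provenance systems use certificate-backed asymmetric signatures; the HMAC key and helper names here are hypothetical stand-ins chosen to keep the sketch self-contained.

```python
# Illustrative sketch of content attribution, not Adobe's actual system.
# A creator signs the SHA-256 hash of the content; anyone holding the tag
# can later check whether the bytes have been tampered with.
import hashlib
import hmac

CREATOR_KEY = b"demo-key-held-only-by-the-creator"  # hypothetical; real systems use certificates

def attach_attribution(content: bytes) -> str:
    digest = hashlib.sha256(content).digest()
    return hmac.new(CREATOR_KEY, digest, hashlib.sha256).hexdigest()

def verify_attribution(content: bytes, tag: str) -> bool:
    return hmac.compare_digest(attach_attribution(content), tag)

original = b"raw bytes of the original video"
tag = attach_attribution(original)

print(verify_attribution(original, tag))                # True: untouched
print(verify_attribution(original + b"doctored", tag))  # False: content changed
```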

“Ultimately, this isn’t actually a problem about technology … We know that misinformation has been around since time immemorial,” says Schick. “It’s really a human problem.”

The technology may be flawed, but the age of deepfakes has well and truly arrived. It already has the potential to swing elections, trigger wars and aid criminals, and it is producing an overload of disinformation that sows chaos both online and offline.

As Schick puts it: “We are facing a danger of world-changing proportions… and we’re not ready.”
