Should social media be accountable for ‘deepfake’ content?

Lodi News-Sentinel - BUSINESS - By Gopal Ratnam

WASHINGTON — Congress should amend portions of U.S. law that allow social media companies to enjoy immunity for content posted on their platforms in light of the significant dangers posed by artificial intelligence-enabled fake videos, a panel of experts told the House Intelligence Committee at a hearing Thursday.

Social media companies should be asked to exercise reasonable moderation of content, and U.S. government agencies should educate citizens on how to tell if a video is fake and invest in technologies that will aid in such determinations, the experts said.

The hearing, led by House Intelligence Committee Chairman Adam B. Schiff, comes as lawmakers and technologists fear that Russia, China and other foreign powers are likely to scale up their attacks on U.S. elections in 2020 with “deepfake” videos that will leave American voters unable to distinguish between real videos and those that have been intentionally manipulated.

In 2016, a Kremlin-backed troll farm created fake social media accounts to mislead American voters, but “three years later, we are on the cusp of a technological revolution that could enable even more sinister forms of deception and disinformation by malign actors, foreign or domestic,” Schiff said in his opening remarks at the hearing.

Artificial intelligence technologies now allow video and audio of a person to be manipulated to make the person appear to say or do things he or she never said or did. Such videos “enable malicious actors to foment chaos, division or crisis and they have the capacity to disrupt entire campaigns, including that for the presidency,” Schiff said.

Having unwittingly enabled fake accounts on their platforms in 2016, social media companies once again face scrutiny over how they handle misleading videos. Last month, Facebook faced intense criticism for a doctored video — altered using old-fashioned editing means — of Speaker Nancy Pelosi that shows her appearing to slur her words, as if she’s intoxicated. Facebook refused to take down the video and has said it would tweak its algorithm to reduce exposure for the video.

Section 230 of the Communications Decency Act, which exempts social media companies from being treated as publishers of material that appears on their platforms, may be allowing the companies too much leeway, Schiff said. “Should we do away with that immunity?”

Congress should amend the law “to condition the immunity on reasonable moderation practices rather than the free pass that exists today,” Danielle Citron, a law professor at the University of Maryland, told the committee. The current exemption in law gives the social media companies no incentive to take down “destructive, deepfake content,” she said.

Citron said deepfake videos not only can be used by foreign and domestic perpetrators against political opponents but could also be used to hurt companies — for example, by showing a CEO saying something derogatory just hours before a public offering, which could lead to a collapse in the company’s stock price.

Facebook co-founder and CEO Mark Zuckerberg also has called for regulating online platforms, and in an op-ed in The Washington Post in March, he wrote that such regulations should address harmful content, election integrity, privacy and data portability for users.

Tweaking the law too broadly to suggest that all forms of manipulated videos should be taken down could, however, hurt satirical takes on politicians and those in power, said Clint Watts, a senior fellow at the German Marshall Fund.

Federal agencies should quickly refute fake videos with factual content, politicians of both parties and their campaign staff should work with social media companies to respond quickly to smears, and the administration should develop aggressive measures, including sanctions, to go after foreign troll farms that promote fake videos, Watts said.

Technology is advancing so fast that it is now possible for perpetrators to cover up evidence that they have manipulated a video, said David Doermann, a professor of computer science at the University at Buffalo. “A lot of trace evidence can be destroyed with simple manipulation on top of deepfake.”

Still, the Pentagon’s Defense Advanced Research Projects Agency, or DARPA, where Doermann previously worked, has been developing technologies to tell if an image or a video has been altered, he said.

At the moment, such techniques to detect fakes can be applied only to one case at a time, but the “problem is doing it at scale,” Doermann said.

If the techniques for spotting video manipulation can be automated, then platforms such as Facebook and Twitter could detect and stop deepfake videos before they’re published online, instead of trying to spot and stop them after the fact, Doermann said.

Once fake videos and rumors are published, it is hard to dislodge the bad information from people’s minds, said Jack Clark, the policy director for OpenAI, a research organization that advocates for safe artificial intelligence technologies.

“Fact checks tend not to travel as much as the initial message,” Clark said.


Photo: Facebook CEO Mark Zuckerberg leaves The Merrion Hotel in Dublin after a meeting with politicians to discuss regulation of social media and harmful content on April 2. A panel of intelligence experts said social media companies should be accountable for “deepfake” videos.
