Imperial Valley Press

State lawmaker wants to take on ‘deepfake’ videos

- BY BEN CHRISTOPHER, CALMATTERS

CALmatters.org is a nonprofit, nonpartisan media venture explaining California policies and politics.

Photo: Filmmaker Jordan Peele created a “deepfake” video of former President Barack Obama.

A California lawmaker says he knew something had to be done after watching a video of Barack Obama calling President Trump “a total and complete dips***.”

Set in what appears to be the Oval Office, the video also depicts the former president speaking fondly of the militant anti-colonial villain of the “Black Panther” comic franchise and claiming that Housing Secretary Ben Carson is brainwashed.

The video was a fake, of course — a collaboration between the website BuzzFeed and filmmaker Jordan Peele. It’s Peele who speaks through Obama’s digitally re-rendered mouth to illustrate the dangers of A.I.-constructed “deepfake” videos.

With the exception of some discoloration around the jaw and a not-entirely-convincing voice, it’s a solid forgery. And the technology used to make it is only getting better.

“I immediately realized, ‘Wow, this is a technology that plays right into the hands of people who are trying to influence our elections like we saw in 2016,’” said Assemblyman Marc Berman, a Democrat whose district includes Silicon Valley.

So Berman, chair of the Assembly’s election committee, has introduced a bill that would make it illegal to “knowingly or recklessly” share “deceptive audio or visual media” of a political candidate within 60 days of an election “with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.” The bill would apply to state-of-the-art deepfakes, as well as to lower-tech fabrications. It also makes an exception if the video or audio has a clear disclaimer that digital monkey business has been performed.

Libel and forgeries are hardly new phenomena in politics. But as technological developments make it increasingly difficult to sort fake from real news, and to crack down on the dissemination of false information once it finds its way online, lawmakers like Berman are struggling to find some way to fight back.

“I don’t want to wake up after the 2020 election, like we did in 2016, and say, ‘Dang, we should have done more,’” said Berman.

But there is at least one limit on what can be done. The First Amendment of the U.S. Constitution guarantees the right to free speech — making it unclear whether a ban on convincing video forgeries would pass constitutional muster.

The American Civil Liberties Union of California, the California News Publishers Association and the California Broadcasters Association all oppose the bill on First Amendment grounds.

The bill cleared a hurdle Tuesday by winning approval from a Senate committee. But at the hearing, Whitney Prout, staff attorney with the publishers’ association, called the bill “an ineffective and frankly unconstitutional solution that causes more problems than it solves.”

She warned that, if enacted into law, it could discourage social media users from sharing any political content online, lest it be a fake and they be held legally liable. Another possible consequence, she said, is that campaigns would plaster every attack ad with a deepfake disclosure to shield themselves from lawsuits, leaving the voting public even more confused.

“The law surrounding the First Amendment really has evolved in a pre-Internet world,” said Louis Tompros, a partner at the law firm WilmerHale in Boston and a lecturer at Harvard. The enactment of laws such as the one Berman proposes would “force the courts to really reconcile the whole body of First Amendment law with these new phenomena.”

The method behind “deepfakery” is technically sophisticated, but its producers don’t need to be. These days, anyone with access to a YouTube tutorial and enough computing power can produce their own videographic forgery.

Hence the proliferation of so many comedic or satirical deepfakes. Some strive to make a point, like the one created by Peele or a more recent depiction of Facebook founder Mark Zuckerberg bragging about stealing your data.

Others are just Internet-grade goofy. Consider the Q&A in which actress Jennifer Lawrence speaks to reporters with the face of Steve Buscemi. (When shown the fake on The Late Show with Stephen Colbert, Buscemi seemed remarkably unfazed: “I’ve never looked better,” he said.)

But the technology has, of course, been used for seedier purposes. The most popular application seems to be pornographic, with online forgers digitally grafting the faces of Hollywood celebrities onto the bodies of adult film actresses — without the knowledge or consent of either party.

Earlier this year, Berman introduced another bill that would give anyone involuntarily depicted in a sexually explicit video — including a digital fake — the right to sue.

But it seems only a matter of time before someone attempts to use the method for political purposes, he said.

Tompros said it would be very difficult to craft a law banning socially harmful deepfakes without sweeping up more traditional forms of political speech.

“Is it a ‘deceptive audio or visual media’ if, for example, I take a 10-minute, very nuanced policy speech and I clip out five seconds in the middle where it sounds like the person is taking an extreme position?” he said.

Under that standard, a significant share of attack ads produced over the last half-century would be illegal.

Still, Berman’s proposal is much narrower than past legislative attempts.

In 2017, Assemblyman Ed Chau, a Democrat from Monterey Park, introduced a bill that would have banned the online dissemination of any false information about a political candidate. Chau pulled the bill in the face of fierce pushback from civil liberties groups.

The focus on video and audio specifically could put this year’s proposal on firmer legal ground, said Eugene Volokh, a law professor at UCLA and the founder of the Volokh Conspiracy, a law blog hosted by the libertarian magazine Reason.

Unlike a comment on climate change or the fiscal impact of tax legislation, where there is plenty of “dispute about what the actual truth is … with altered video or altered images at least the person who is originating it will tend to know what’s true and what’s false,” he said.

He points to the 24 states that have criminal defamation laws that make it a punishable offense to knowingly or recklessly spread false information about a person. The U.S. Supreme Court has generally allowed these laws to remain on the books, although civil liberties organizations are fighting to change that.

Berman said he thinks his bill falls into that same category.

“There are restrictions around the First Amendment, including around the issue of fraud,” said Berman. “I don’t think the First Amendment applies to somebody’s ability to put fake words in my mouth.”

That might once have been a figure of speech, but no more. In the latest iteration of the technology, a handful of researchers at Adobe and at American and German universities produced a new editing method that allows anyone to insert new words into a video transcript and have the person in the video speak them.

The effect: using technology to literally put words into someone else’s mouth.

When the researchers showed their creations to a small survey of viewers, more than half mistook the fakes for the real thing.

Photo: CALmatters
