Los Angeles Times (Sunday)

An FAQ from the future — how we finally defeated deepfakes

We can’t agree on guardrails against them. But this fictional discussion, set five years from now, shows how the events of 2024 may force the issue

- By Michael Rogers

Cast your mind forward. It’s Nov. 8, 2028, the day after another presidential election. This one went smoothly — no claims of rampant rigging, no significant taint of skulduggery — due in large part to the defeat of deepfakes, democracy’s newest enemy.

Is such a future possible? So far, neither government nor the tech industry has agreed on effective guardrails against deepfakes. But this FAQ (from five years in the future) shows that the events of 2024 may well force the issue — and that a solution is possible.

Why did it take so long to find an effective way to fight deepfakes?

Late in 2022, sophisticated low-cost AI software appeared that made it easy to create realistic audio, video and photographs — so-called deepfakes. As these generative AI programs rapidly improved, it grew clear that deepfake content would be a danger to democracy.

Political deepfakes — both audio and video — soon emerged: President Biden announcing that Americans would be drafted to fight in Ukraine. A photo of Donald Trump hugging and kissing Dr. Anthony Fauci. Eric Adams, the monolingual mayor of New York, speaking Spanish, Yiddish and Mandarin in AI-produced robocalls.

Very quickly, the White House, the European Union and major technology companies all launched wide-ranging AI regulation proposals that included “watermarking” AI content — inserting ID labels, a permanent bit of computer code, into the digital file of any AI-generated content to identify its artificial origin.

But AI rule-setting proved complex, and labeling exemplified the quandaries: Would AI watermarking be legally required? How would it be enforced? As early as 2023, some cellphone cameras used AI in their image processing. What amount of AI input into content would require an identifier? Would an Instagram beauty influencer need to watermark her face-tuned selfies? The complications were such that no system was widely adopted.

What changed?

The largest coordinated deepfake attack in history took place the day after the November 2024 election. Every U.S. social media channel was flooded with phony audio, video and still images depicting election fraud in a dozen battleground states, highly realistic content that within hours was viewed by millions. Debunking efforts by media and government were hindered by a steady flow of new deepfakes, mostly manufactured in Russia, North Korea, China and Iran. The attack generated legal and civil chaos that lasted well into the spring of 2025.

Yet none of the early authentication efforts was adopted?

Correct. The breakthrough actually came in early 2026 from a working group of digital journalists from U.S. and international news organizations. Their assignment was to find a way to keep deepfakes out of news reports, so they could protect what credibility the mainstream media still retained. It was a logical assignment: Journalists are historically ruthless about punishing their peers for misbehavior, breaking out the tar and feathers for even minor departures from factual rigor.

Journalism organizations formed the FAC Alliance — “Fact Authenticated Content” — based on a simple insight: There was already far too much AI fakery loose in the world to try to enforce a watermarking system for dis- and misinformation. And even the strictest labeling rules would simply be ignored by bad actors. But it would be possible to watermark pieces of content that weren’t deepfakes.

And so was born the voluntary FACStamp on May 1, 2026.

What does a FACStamp look like?

The stamp’s visible signal can be turned off by the user, or it can be set to appear for only five or 10 seconds at the start of a media stream.

FACStamps are entirely voluntary. But every member of the FAC Alliance pledged that its internet, broadcast and print products would carry only FACStamped media in their news sections.

How does content qualify for a FACStamp?

The newest phones, tablets, cameras, recorders and desktop computers all include software that automatically inserts the FACStamp code into every piece of visual or audio content as it’s captured, before any AI modification can be applied. This proves that the image, sound or video was not generated by AI. You can also download the FAC app, which does the same for older equipment. The FACStamp is what technologists call “fragile”: The first time an image, video or audio file is falsified by AI, the stamp disappears.
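In programming terms, a “fragile” stamp can be as simple as a keyed checksum computed over the raw bytes at the moment of capture. This is only an illustrative sketch, not the actual FAC design: the names DEVICE_KEY, stamp and verify are invented here, and a real system would use hardware-protected keys and public-key signatures rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical sketch of a fragile content stamp: an HMAC over the raw
# captured bytes. Change a single byte -- one AI-altered pixel included --
# and verification fails, i.e., the stamp "disappears."
DEVICE_KEY = b"per-device-secret"  # assumption: provisioned at manufacture


def stamp(content: bytes) -> bytes:
    """Compute the fragile stamp at capture time, before any editing."""
    return hmac.new(DEVICE_KEY, content, hashlib.sha256).digest()


def verify(content: bytes, tag: bytes) -> bool:
    """True only if the content is byte-for-byte what was captured."""
    expected = hmac.new(DEVICE_KEY, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


photo = b"raw sensor bytes"
tag = stamp(photo)
assert verify(photo, tag)             # untouched capture: stamp holds
assert not verify(photo + b"!", tag)  # any modification: stamp vanishes
```

The fragility is the point: unlike a label that bad actors can strip or forge, the stamp here is worthless to a forger because it only ever attests to unmodified captures.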

But AI is often used appropriately for tasks like reducing background noise in an audio file. FACStamped content can’t be edited at all?

It certainly can. But to retain the FACStamp, your computer must be connected to the nonprofit FAC Verification Center. The center’s computers detect if the editing is minor — such as cropping or even cosmetic face-tuning — and the stamp remains. Any larger manipulation, from swapping faces to faking backgrounds, and the FACStamp vanishes.
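The re-stamping decision described above amounts to checking an edit against an allowlist of minor operations. A minimal sketch, with an invented operation list (the article does not specify how the fictional Verification Center classifies edits):

```python
# Hypothetical allowlist of "minor" edits that keep the stamp alive.
MINOR_EDITS = {"crop", "rotate", "color-balance", "noise-reduction", "face-tune"}


def restamp_allowed(edit_log: list) -> bool:
    """The stamp survives only if every logged edit is on the minor list."""
    return all(edit in MINOR_EDITS for edit in edit_log)


assert restamp_allowed(["crop", "color-balance"])      # minor edits: stamp kept
assert not restamp_allowed(["crop", "face-swap"])      # face swap: stamp lost
```

In practice such a center would need to verify the edits itself (for instance, by comparing the submitted file against the original capture) rather than trust a self-reported log; the allowlist above only illustrates the minor-versus-major distinction.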

How did FACStamps spread beyond journalism?

It turned out that plenty of people could use the FACStamp. Internet retailers embraced FACStamps for videos and images of their products. Individuals soon followed, using FACStamps to sell goods online — when potential buyers are judging a used pickup truck or secondhand sofa, it’s reassuring to know that the image wasn’t spun out or scrubbed up by AI.

In 2027 the stamp began to appear in social media. Any parent can artificially generate a perfectly realistic image of their happy family standing in front of the Eiffel Tower and post it or email it to envious friends. A FACStamp proves the family has actually been there.

Dating app profiles without FACStamps are finally growing rare. Videoconference apps have FAC options to ensure that everyone on the call is real. And for influencers, it’s increasingly difficult to claim “authenticity” without at least the occasional FACStamp.

What’s next?

A bipartisan group of senators and House members plans to introduce the Right to Reality Act when the next Congress opens in January 2029. It will mandate the use of FACStamps in multiple sectors, including local government, shopping sites, and investment and real estate offerings. Counterfeiting a FACStamp would become a criminal offense. Polling indicates widespread public support for the act, and the FAC Alliance has already begun a branding campaign.

The tagline: “Is that a FAC?”


Illustration by Jim Cooke, Los Angeles Times; photo by Associated Press
