Political deepfakes will hijack your brain — if you let them

- By F.D. Flam

REALISTIC artificial intelligence (AI)-generated images and voice recordings may be the newest threat to democracy, but they’re part of a longstanding family of deceptions. The way to fight so-called deepfakes isn’t to develop some rumor-busting form of AI or to train the public to spot fake images. A better tactic would be to encourage a few well-known critical thinking methods — refocusing our attention, reconsidering our sources, and questioning ourselves.

Some of those critical thinking tools fall under the category of “system 2” or slow thinking, as described in Daniel Kahneman’s book Thinking, Fast and Slow. AI is good at fooling the fast-thinking “system 1” — the mode that often jumps to conclusions.

We can start by refocusing attention on policies and performance rather than gossip and rumors. So what if former President Donald Trump stumbled over a word and then blamed AI manipulation? So what if President Joe Biden forgot a date? Neither incident tells you anything about either man’s policy record or priorities.

Obsessing over which images are real or fake may be a waste of time and energy. Research suggests that we’re terrible at spotting fakes.

“We are very good at picking up on the wrong things,” said computational neuroscientist Tijl Grootswagers of the University of Western Sydney. People tend to look for flaws when trying to spot fakes, but it’s the real images that are most likely to have flaws.

People may unconsciously be more trusting of deepfake images because they’re more perfect than real ones, he said. Humans tend to like and trust faces that are less quirky and more symmetrical, so AI-generated images can often look more attractive and trustworthy than the real thing.

Asking voters to simply do more research when confronted with social media images or claims isn’t enough. Social scientists recently made the alarming finding that people were more likely to believe made-up news stories after doing some “research” using Google.

That wasn’t evidence that research is bad for people, or for democracy for that matter. The problem was that many people do a mindless form of research. They look for confirmatory evidence, which, like everything else on the internet, is abundant — however crazy the claim.

Real research involves questioning whether there’s any reason to believe a particular source. Is it a reputable news site? An expert who has earned public trust? Real research also means examining the possibility that what you want to believe might be wrong. One of the most common reasons that rumors get repeated on X, but not in the mainstream media, is lack of credible evidence.

AI has made it cheaper and easier than ever to use social media to promote a fake news site by manufacturing realistic fake people to comment on articles, said Filippo Menczer, a computer scientist and director of the Observatory on Social Media at Indiana University.

For years, he’s been studying the proliferation of fake accounts known as bots, which can have influence through the psychological principle of social proof — making it appear that many people like or agree with a person or idea. Early bots were crude, but now, he told me, they can be created to look like they’re having long, detailed, and very realistic discussions.

But this is still just a new tactic in a very old battle. “You don’t really need advanced tools to create misinformation,” said psychologist Gordon Pennycook of Cornell University. People have pulled off deceptions by using Photoshop or repurposing real images — like passing off photos of Syria as Gaza.

Pennycook and I talked about the tension between too much and too little trust. While there’s a danger that too little trust might cause people to doubt things that are real, we agreed there’s more danger from people being too trusting.

What we should really aim for is discernment — so people ask the right kinds of questions. “When people are sharing things on social media, they don’t even think about whether it’s true,” he said. They’re thinking more about how sharing it would make them look.

Considering this tendency might have spared some embarrassment for actor Mark Ruffalo, who recently apologized for sharing what is reportedly a deepfake image used to imply that Donald Trump participated in Jeffrey Epstein’s sexual assaults on underage girls.

If AI makes it impossible to trust what we see on television or on social media, that’s not altogether a bad thing, since much of it was untrustworthy and manipulative long before recent leaps in AI. Decades ago, the advent of TV notoriously made physical attractiveness a much more important factor for all candidates. There are more important criteria on which to base a vote.

Contemplating policies, questioning sources, and second-guessing ourselves require a slower, more effortful form of human intelligence. But considering what’s at stake, it’s worth it.
