Chattanooga Times Free Press

Artificial intelligence can detect – and create – fake news


When Mark Zuckerberg told Congress Facebook would use artificial intelligence to detect fake news posted on the social media site, he wasn’t particularly specific about what that meant. Given my own work using image and video analytics, I suggest the company should be careful. Despite some basic potential flaws, AI can be a useful tool for spotting online propaganda – but it can also be startlingly good at creating misleading material.

Researchers already know that online fake news spreads much more quickly and more widely than real news. My research has similarly found that online posts with fake medical information get more views, comments and likes than those with accurate medical content. In an online world where viewers have limited attention and are saturated with content choices, it often appears as though fake information is more appealing or engaging to viewers.

The problem is getting worse: By 2022, people in developed economies could be encountering more fake news than real information. This could bring about a phenomenon researchers have dubbed “reality vertigo” – in which computers can generate such convincing content that regular people may have a hard time figuring out what’s true anymore.

DETECTING FALSEHOOD

Machine learning algorithms, one type of AI, have been successful for decades at fighting spam email, analyzing messages’ text to determine how likely it is that a particular message is a real communication from an actual person or a mass-distributed solicitation for pharmaceuticals or a claim of a long-lost fortune.
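To make that concrete, here is a minimal sketch, in Python with the scikit-learn library, of the kind of text classification involved; the example messages, labels and probabilities are made up for illustration, not drawn from any real spam filter.

# Minimal sketch of text-based spam classification (hypothetical toy data,
# not the classifier any particular email provider actually uses).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = spam, 0 = legitimate message.
messages = [
    "Claim your long-lost fortune now, just send your bank details",
    "Cheap pharmaceuticals, no prescription needed",
    "Are we still meeting for lunch on Friday?",
    "Here are the notes from yesterday's project call",
]
labels = [1, 1, 0, 0]

# Convert each message into word counts, then fit a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Estimate how likely a new message is to be spam.
new_message = ["You have won a fortune, click here to claim it"]
print(model.predict_proba(new_message))  # [P(legitimate), P(spam)]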

Building on this type of text analysis in spam-fighting, AI systems can evaluate how well a post’s text or a headline compares with the actual content of an article someone is sharing online. Another method could examine similar articles to see whether other news media have differing facts.
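A bare-bones version of that headline-versus-article comparison might look like the sketch below; the texts and the similarity threshold are hypothetical, and real systems use far richer models.

# Rough sketch: flag posts whose headline barely overlaps with the article body.
# The texts and the 0.1 threshold here are made-up illustrations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

headline = "Miracle cure eliminates disease overnight, doctors stunned"
article_body = ("The study followed 40 patients over six months and found a "
                "modest improvement in symptoms for some participants.")

# Represent both texts as TF-IDF vectors over a shared vocabulary.
vectorizer = TfidfVectorizer().fit([headline, article_body])
vectors = vectorizer.transform([headline, article_body])

# Cosine similarity near 0 means the headline shares little wording with the article.
score = cosine_similarity(vectors[0], vectors[1])[0, 0]
if score < 0.1:
    print(f"Possible mismatch between headline and article (similarity={score:.2f})")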

However, those methods assume the people who spread fake news don’t change their approaches. They often shift tactics, manipulating the content of fake posts in efforts to make them look more authentic.

Using AI to evaluate information can also expose – and amplify – certain biases in society. This can relate to gender, racial background or neighborhood stereotypes. It can even have political consequences, potentially restricting expression of particular viewpoints. For example, YouTube has cut off advertising from certain types of video channels, costing their creators money.

USING AI TO MAKE FAKE NEWS

The biggest challenge of using AI to detect fake news, however, is that it puts technology in an arms race with itself. Machine learning systems are already proving spookily capable at creating what are being called “deepfakes”: photos and videos that realistically replace one person’s face with another, to make it appear that, for example, a celebrity was photographed in a revealing pose or a public figure is saying things he’d never actually say. Even smartphone apps are capable of this sort of substitution, which makes the technology available to just about anyone, even without Hollywood-level video editing skills.

Researchers are already preparing to use AI to identify these AI-created fakes. For example, techniques for video magnification can detect changes in human pulse that would establish whether a person in a video is real or computer-generated. But both fakers and fake-detectors will get better. Some fakes could become so sophisticated that they become very hard to rebut or dismiss, unlike earlier generations of fakes, which used simple language and made easily refuted claims.
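A much-simplified version of the pulse idea is sketched below: it bandpass-filters the average brightness of a face region over time and measures how much of the signal falls at plausible heart rates. The frame rate, frequency band and toy data are assumptions for illustration, not the method any particular detector uses.

# Simplified sketch of pulse detection from a face-region brightness signal.
# Assumes we already have, per video frame, the mean green-channel value of a
# face region; the 30 fps rate and the 0.7-4 Hz band (42-240 bpm) are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_strength(mean_brightness, fps=30.0):
    """Return the fraction of signal power in the heart-rate band."""
    signal = np.asarray(mean_brightness, dtype=float)
    signal -= signal.mean()

    # Bandpass filter around plausible human heart rates (0.7-4 Hz).
    low, high = 0.7 / (fps / 2), 4.0 / (fps / 2)
    b, a = butter(3, [low, high], btype="band")
    filtered = filtfilt(b, a, signal)

    # A real face tends to show a clear periodic component in this band;
    # a synthetic face often does not.
    return np.sum(filtered ** 2) / np.sum(signal ** 2)

# Toy usage: a faint 72-bpm (1.2 Hz) oscillation buried in noise,
# standing in for brightness values measured from real video frames.
t = np.arange(0, 10, 1 / 30.0)
toy_brightness = 0.05 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.2, t.size)
print(pulse_strength(toy_brightness))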

HUMAN INTELLIGENCE IS THE REAL KEY

The best way to combat the spread of fake news may be to depend on people. The societal consequences of fake news – greater political polarization, increased partisanship, and eroded trust in mainstream media and government – are significant. If more people knew the stakes were that high, they might be more wary of information, particularly if it is emotionally charged, because appealing to emotions is an effective way to get people’s attention.

When someone sees an enraging post, that person would do better to investigate the information rather than sharing it immediately. The act of sharing also lends credibility to a post: When other people see it, they register that it was shared by someone they know and presumably trust at least a bit, and are less likely to notice whether the original source is questionable.

Social media sites like YouTube and Facebook could voluntarily decide to label their content, showing clearly whether an item purporting to be news is verified by a reputable source. Zuckerberg told Congress he wants to mobilize the “community” of Facebook users to direct his company’s algorithms. Facebook could crowd-source verification efforts. Wikipedia also offers a model: dedicated volunteers who track and verify information.

Facebook could use its partnerships with news organizations and volunteers to train AI, continually tweaking the system to respond to propagandists’ changes in topics and tactics. This won’t catch every piece of news posted online, but it would make it easier for large numbers of people to tell fact from fake. That could reduce the chances that fictional and misleading stories would become popular online.

Reassuringly, people who have some exposure to accurate news are better at distinguishing between real and fake information. The key is to make sure that at least some of what people see online is, in fact, true.

Anjana Susarla is associate professor of Information Systems at Michigan State University.

This article was originally published on The Conversation, an independent and nonprofit source of news, analysis and commentary from academic experts.
