The Boston Globe

Rise of AI fake news is creating flood of misinformation

Manufactured stories harming political process

- By Pranshu Verma

Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminate false information about elections, wars, and natural disasters.

Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.

Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news.

One AI-generated article recounted a made-up story about Benjamin Netanyahu’s psychiatrist, a NewsGuard investigation found, alleging that he had died and left behind a note suggesting the involvement of the Israeli prime minister. The psychiatrist appears to not exist, but the claim was featured on an Iranian TV show, and was recirculated on Arabic, English, and Indonesian media sites and spread by users on TikTok, Reddit, and Instagram.

The heightened churn of polarizing and misleading content may make it difficult to know what is true — harming political candidates, military leaders, and aid efforts. Misinformation experts said the rapid growth of these sites is particularly worrisome in the run-up to the 2024 elections.

‘‘Some of these sites are generating hundreds if not thousands of articles a day,’’ said Jack Brewster, a researcher at NewsGuard who conducted the investigat­ion.

Generative artificial intelligence has ushered in an era in which chatbots, image makers, and voice cloners can produce content that seems human-made.

Well-dressed AI-generated news anchors are spewing pro-Chinese propaganda, amplified by bot networks sympathetic to Beijing. In Slovakia, politicians up for election found their voices had been cloned to say controversial things they never uttered, days before voters went to the polls. A growing number of websites, with generic names such as iBusiness Day or Ireland Top News, are delivering fake news made to look genuine, in dozens of languages from Arabic to Thai.

Readers can easily be fooled by the websites.

Global Village Space, which published the piece on Netanyahu’s alleged psychiatrist, is flooded with articles on a variety of serious topics. There are pieces detailing US sanctions on Russian weapons suppliers; the oil behemoth Saudi Aramco’s investments in Pakistan; and the United States’ increasingly tenuous relationship with China.

The site also contains essays written by a Middle East think tank expert, a Harvard-educated lawyer, and the site’s chief executive, Moeed Pirzada, a television news anchor from Pakistan. (Pirzada did not respond to a request for comment. Two contributors confirmed they have written articles appearing on Global Village Space.)

But sandwiched in with these ordinary stories are AI-generated articles, Brewster said, such as the piece on Netanyahu’s psychiatrist, which was relabeled as ‘‘satire’’ after NewsGuard reached out to the organization during its investigation. NewsGuard says the story appears to have been based on a satirical piece published in June 2010, which made similar claims about an Israeli psychiatrist’s death.

Having real and AI-generated news side-by-side makes deceptive stories more believable. ‘‘You have people that simply are not media-literate enough to know that this is false,’’ said Jeffrey Blevins, a misinformation expert and journalism professor at the University of Cincinnati. ‘‘It’s misleading.’’

Websites similar to Global Village Space may proliferate during the 2024 election, becoming an efficient way to distribute misinformation, media and AI experts said.

The sites work in two ways, Brewster said. Some stories are created manually, with people asking chatbots for articles that amplify a certain political narrative and posting the result to a website. The process can also be automatic, with web scrapers searching for articles that contain certain keywords, and feeding those stories into a large language model that rewrites them to sound unique and evade plagiarism allegations. The result is automatically posted online.

NewsGuard locates AI-generated sites by scanning for error messages or other language that ‘‘indicates that the content was produced by AI tools without adequate editing,’’ the organization says.
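The kind of scan NewsGuard describes can be illustrated with a minimal sketch: search a page’s text for telltale chatbot refusal and error phrases that appear when generated output is published unedited. The phrase list and function below are hypothetical illustrations, not NewsGuard’s actual methodology or phrase set.

```python
# A minimal sketch of scanning article text for telltale chatbot phrases.
# The phrase list is a hypothetical example, not NewsGuard's real criteria.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my knowledge cutoff",
    "i'm sorry, but as an ai",
]

def flag_unedited_ai_text(article_text: str) -> list[str]:
    """Return any telltale phrases found in an article's text."""
    lowered = article_text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

sample = "Breaking: As an AI language model, I cannot browse the internet."
print(flag_unedited_ai_text(sample))  # ['as an ai language model']
```

A real system would pair a signal like this with human review, since such phrases can also appear legitimately, for example in articles about AI itself.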

The motivations for creating these sites vary. Some are intended to sway political beliefs or wreak havoc. Other sites churn out polarizing content to draw clicks and capture ad revenue, Brewster said. But the ability to turbocharge fake content is a significant security risk, he added.

Technology has long fueled misinformation. In the lead-up to the 2020 election, Eastern European troll farms — professional groups that promote propaganda — built large audiences on Facebook disseminating provocative content on Black and Christian group pages, reaching 140 million users per month.

Pink-slime journalism sites, named after the meat byproduct, often crop up in small towns where local news outlets have disappeared, generating articles that benefit the financiers that fund the operation, according to the media watchdog Poynter.

ASSOCIATED PRESS FILES: Experts say misinformation is worrisome in the run-up to the 2024 elections.
