Rise of AI fake news is creating flood of misinformation
Manufactured stories harming political process
Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminate false information about elections, wars, and natural disasters.
Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.
Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news.
One AI-generated article recounted a made-up story about Benjamin Netanyahu’s psychiatrist, a NewsGuard investigation found, alleging that he had died and left behind a note suggesting the involvement of the Israeli prime minister. The psychiatrist appears not to exist, but the claim was featured on an Iranian TV show, recirculated on Arabic, English, and Indonesian media sites, and spread by users on TikTok, Reddit, and Instagram.
The heightened churn of polarizing and misleading content may make it difficult to know what is true — harming political candidates, military leaders, and aid efforts. Misinformation experts said the rapid growth of these sites is particularly worrisome in the run-up to the 2024 elections.
‘‘Some of these sites are generating hundreds if not thousands of articles a day,’’ said Jack Brewster, a researcher at NewsGuard who conducted the investigation.
Generative artificial intelligence has ushered in an era in which chatbots, image makers, and voice cloners can produce content that seems human-made.
Well-dressed AI-generated news anchors are spewing pro-Chinese propaganda, amplified by bot networks sympathetic to Beijing. In Slovakia, politicians up for election found their voices had been cloned to say controversial things they never uttered, days before voters went to the polls. A growing number of websites, with generic names such as iBusiness Day or Ireland Top News, are delivering fake news made to look genuine, in dozens of languages from Arabic to Thai.
Readers can easily be fooled by the websites.
Global Village Space, which published the piece on Netanyahu’s alleged psychiatrist, is flooded with articles on a variety of serious topics. There are pieces detailing US sanctions on Russian weapons suppliers; the oil behemoth Saudi Aramco’s investments in Pakistan; and the United States’ increasingly tenuous relationship with China.
The site also contains essays written by a Middle East think tank expert, a Harvard-educated lawyer, and the site’s chief executive, Moeed Pirzada, a television news anchor from Pakistan. (Pirzada did not respond to a request for comment. Two contributors confirmed they have written articles appearing on Global Village Space.)
But sandwiched in with these ordinary stories are AI-generated articles, Brewster said, such as the piece on Netanyahu’s psychiatrist, which was relabeled as ‘‘satire’’ after NewsGuard reached out to the organization during its investigation. NewsGuard says the story appears to have been based on a satirical piece published in June 2010, which made similar claims about an Israeli psychiatrist’s death.
Having real and AI-generated news side-by-side makes deceptive stories more believable. ‘‘You have people that simply are not media-literate enough to know that this is false,’’ said Jeffrey Blevins, a misinformation expert and journalism professor at the University of Cincinnati. ‘‘It’s misleading.’’
Websites similar to Global Village Space may proliferate during the 2024 election, becoming an efficient way to distribute misinformation, media and AI experts said.
The sites work in two ways, Brewster said. Some stories are created manually, with people asking chatbots for articles that amplify a certain political narrative and posting the result to a website. The process can also be automatic, with web scrapers searching for articles that contain certain keywords, and feeding those stories into a large language model that rewrites them to sound unique and evade plagiarism allegations. The result is automatically posted online.
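In rough outline, the automated version of that process can be sketched as a three-step script. Everything below is illustrative: the keyword filter, the stub rewrite function, and the sample headlines are assumptions for the sketch, not code from any real operation, and the language-model call is deliberately left as a placeholder.

```python
# Hypothetical sketch of the automated pipeline described above:
# scrape -> filter by keywords -> rewrite with an LLM -> post.
# All names and sample data are illustrative, not from any real site.

def matches_keywords(article: str, keywords: list[str]) -> bool:
    """Step 1: a scraper keeps only articles containing target keywords."""
    text = article.lower()
    return any(kw.lower() in text for kw in keywords)

def rewrite(article: str) -> str:
    """Step 2: placeholder for the language-model rewrite that paraphrases
    scraped text so it sounds unique; a real operation would call an
    LLM API here. This stub just tags the input."""
    return "REWRITTEN: " + article

def pipeline(scraped: list[str], keywords: list[str]) -> list[str]:
    """Step 3: rewritten stories are posted automatically; here they are
    simply collected in a list standing in for the publishing step."""
    return [rewrite(a) for a in scraped if matches_keywords(a, keywords)]

posts = pipeline(
    ["Election results contested in region X",
     "Local bake sale raises funds"],
    keywords=["election"],
)
# Only the keyword-matching story survives the filter and gets "rewritten".
```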
NewsGuard locates AI-generated sites by scanning for error messages or other language that ‘‘indicates that the content was produced by AI tools without adequate editing,’’ the organization says.
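The detection idea reduces to a phrase scan. The sketch below assumes a few tell-tale strings of the kind chatbots emit when a prompt fails, such as "as an AI language model"; the phrase list and function name are illustrative, not NewsGuard's actual method or tooling.

```python
# Minimal sketch of flagging unedited AI output by scanning for leaked
# chatbot error text. The phrase list is an illustrative assumption.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my knowledge cutoff",
]

def flag_unedited_ai_text(article: str) -> bool:
    """Return True if the article contains a tell-tale chatbot phrase,
    suggesting AI-generated content published without editing."""
    text = article.lower()
    return any(phrase in text for phrase in TELLTALE_PHRASES)
```

A scan like this only catches sloppy operations; sites that edit out the error text would pass it untouched, which is one reason such counts are likely undercounts.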
The motivations for creating these sites vary. Some are intended to sway political beliefs or wreak havoc. Other sites churn out polarizing content to draw clicks and capture ad revenue, Brewster said. But the ability to turbocharge fake content is a significant security risk, he added.
Technology has long fueled misinformation. In the lead-up to the 2020 election, Eastern European troll farms — professional groups that promote propaganda — built large audiences on Facebook disseminating provocative content on Black and Christian group pages, reaching 140 million users per month.
Pink-slime journalism sites, named after the meat byproduct, often crop up in small towns where local news outlets have disappeared, generating articles that benefit the financiers that fund the operation, according to the media watchdog Poynter.