Santa Fe New Mexican

AI makes up fake news stories from a few words

By Jeremy Kahn, Bloomberg

OpenAI, an artificial intelligence research group co-founded by billionaire Elon Musk, has demonstrated a piece of software that can produce authentic-looking fake news articles after being given just a few pieces of information.

In an example published Thursday by OpenAI, the system was given some sample text: “A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.” From this, the software was able to generate a convincing seven-paragraph news story, including quotes from government officials, with the only caveat being that it was entirely untrue.

“The texts that they are able to generate from prompts are fairly stunning,” said Sam Bowman, a computer scientist at New York University who specializes in natural language processing and who was not involved in the OpenAI project, but was briefed on it. “It’s able to do things that are qualitatively much more sophisticated than anything we’ve seen before.”

OpenAI is aware of the concerns around fake news, said Jack Clark, the organization’s policy director. “One of the not-so-good purposes would be disinformation because it can produce things that sound coherent but which are not accurate,” he said.

As a precaution, OpenAI decided not to publish or release the most sophisticated versions of its software. It has, however, created a tool that lets policymakers, journalists, writers and artists experiment with the algorithm to see what kind of text it can generate and what other sorts of tasks it can perform.

The potential for software to near-instantly create fake news articles comes amid global concerns over technology’s role in the spread of disinformation. European regulators have threatened action if tech firms don’t do more to prevent their products from helping sway voters, and Facebook says it has been working since the 2016 U.S. election to try to contain disinformation on its platform.

Clark and Bowman both said that, for now, the system’s abilities are not consistent enough to pose an immediate threat. “This is not a shovel-ready technology today, and that’s a good thing,” Clark said.

Unveiled in a paper and a blog post Thursday, OpenAI’s creation is trained for a task known as language modeling, which involves predicting the next word of a piece of text based on knowledge of all previous words, similar to how auto-complete works when typing an email on a mobile phone. It can also be used for translation and open-ended question answering.
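The idea of predicting the next word can be illustrated with a toy sketch. This is not OpenAI’s system, which is vastly larger and more sophisticated; it is a minimal bigram model that counts which word tends to follow each word in a small sample text, then predicts the most frequent follower, the same basic task at a much smaller scale.

```python
from collections import Counter, defaultdict

# A tiny sample corpus; a real language model trains on billions of words.
corpus = "the train was stolen today and the train was found".split()

# Count how often each word follows each other word (bigram counts).
next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1

def predict(word):
    # Predict the next word: the most frequent follower seen in the corpus.
    return next_words[word].most_common(1)[0][0]

print(predict("train"))  # prints "was" -- it follows "train" in both occurrences
```

Auto-complete on a phone works on the same principle, though modern systems condition on far more context than the single previous word used here.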

One potential use is to help creative writers generate ideas or dialogue, said Jeff Wu, a researcher at OpenAI who worked on the project. Others include checking for grammatical errors in texts, or hunting for bugs in software code. Further in the future, he said, the system could be fine-tuned to summarize text for corporate or government decision-makers.

In the past year, researchers have made a number of sudden leaps in language processing. In November, Google unveiled a similarly multitalented algorithm called BERT, which can understand and answer questions.
