Researchers keep AI software under wraps
WASHINGTON: Researchers this week announced they had developed an automatic text generator using artificial intelligence that is so good they are keeping the details private for now.
The software, developed by OpenAI, could be used to generate news stories, product reviews and other kinds of writing more realistic than anything previously produced by a computer.
OpenAI, a research centre backed by Tesla’s Elon Musk, Amazon and Microsoft, said the new software “achieves state-of-the-art performance on many language modelling benchmarks”, including summarisation and translation.
But it will not be releasing the programme to the public.
“Due to our concerns about malicious applications of the technology, we are not releasing the trained model,” said the OpenAI researchers on Thursday.
The news suggested a potential breakthrough in efforts to produce computer-generated text that is believable, but also potentially dangerous.
The researchers said there were numerous ways the programme could be used for nefarious purposes, including generating fake news, impersonating others online, and automating fake content on social media.
In one example, the programme was fed one paragraph about “a herd of unicorns living in a remote, previously unexplored valley in the Andes Mountains”, and wrote a 300-word news story about it.
“The public at large will need to become more sceptical of text they find online, just as the ‘deepfakes’ phenomenon calls for more scepticism about images,” the researchers wrote, referring to AI-manipulated videos, which have been on the rise.
The researchers said their model, called GPT-2, “outperforms other language models” that were trained on specific domains such as Wikipedia entries, news articles or books, without needing any domain-specific training.