Musk-backed AI firm scared by its own work
Refuses to release text generator to prevent deepfakes
Research group OpenAI, which set out to train its new text-generation software to predict the next word in a sentence, has put the brakes on after the neural network blew past all expectations, according to multiple reports.
One such report mentions that the neural network was so good at mimicking human writing that the researchers decided to explore the damage it could do before moving further ahead.
The group, which is backed by Elon Musk, appears to share Musk’s concern. Out of fear that the text-generation technology could be abused by bad actors, OpenAI deviated from its standard practice of releasing the full research to the public.
Instead, it’s releasing a smaller model to experiment with.
After the neural network completed its training, the researchers found that the software could be fed a small amount of text and convincingly continue writing at length based on the prompt. It had trouble with “highly technical or esoteric types of content”, but when it came to more conversational writing it generated “reasonable samples” 50 per cent of the time.
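At its core, this kind of model predicts the next word from the words before it, then feeds its own output back in to keep writing. A toy illustration of that loop, using simple word-pair counts instead of a neural network (a hypothetical sketch, not OpenAI’s code):

```python
import random
from collections import defaultdict

def train_bigrams(text):
    # Record which words follow which in the training text.
    words = text.split()
    followers = defaultdict(list)
    for a, b in zip(words, words[1:]):
        followers[a].append(b)
    return followers

def generate(followers, prompt, length=10, seed=0):
    # Repeatedly predict a plausible next word and append it,
    # continuing the prompt just as a language model would.
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

corpus = "the model reads the prompt and the model writes the next word"
model = train_bigrams(corpus)
print(generate(model, "the model", length=5))
```

GPT-2 replaces the word-pair counts with a large neural network trained on millions of web pages, which is why its continuations stay coherent for whole paragraphs rather than a few words.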
The researchers found that GPT-2 performed very well when given tasks it wasn’t necessarily designed for, such as translation and summarisation.
These results stunned the researchers, who grew concerned that the technology could be used to turbo-charge fake news operations.
The Guardian published a fake news article written by the software along with its coverage of the research.
The article is readable and contains fake quotes that are on topic and realistic. The grammar is better than a lot of what you’d see from fake news content mills.
And according to The Guardian’s Alex Hern, it only took 15 seconds for the software to write the article.
Other potential abuses the researchers listed included automating phishing emails, impersonating others online, and automatically generating harassment.