GPT-3 is too offensive
No AI lab knows how to make GPT-3 more PC.
$NA | openai.com
There are still plenty of kinks to iron out of large language models, but one that seems particularly resistant to improvement is getting GPT-3 to have opinions less like your senile great aunt. Anyone who watched Microsoft’s AI chatbot Tay go on a racist rampage after Twitter trolls spammed it, or IBM’s Watson start cursing after being given Urban Dictionary to read, knows that AI can get politically incorrect pretty quickly when trained on the wrong material. But OpenAI trained GPT-3 on roughly 45TB of compressed plaintext drawn from published books and internet sources (filtered down to about 500 billion tokens), which means you’d pretty much have to rewrite literature to get a different outcome.