ChatGPT maker launches AI detection tool
SAN FRANCISCO — The maker of ChatGPT is trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect whether a student or artificial intelligence wrote that homework.
The new AI Text Classifier launched Tuesday by OpenAI follows a weekslong discussion about fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty.
OpenAI cautions that its new tool — like others already available — is not foolproof. The method for detecting AI-written text “is imperfect and it will be wrong sometimes,” said Jan Leike, head of the OpenAI team tasked with making its systems safer.
“Because of that, it shouldn’t be solely relied upon when making decisions,” Leike said.
Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched Nov. 30 as a free application on OpenAI’s website.
By the time schools opened for the new year, New York City, Los Angeles and other big public school districts had begun to block its use in classrooms and on school devices.
OpenAI emphasized the limitations of its detection tool in a blog post Tuesday, but said that in addition to deterring plagiarism, it could help to detect automated disinformation campaigns and other misuse of AI to mimic humans.
The longer a passage of text, the better the tool is at detecting whether an AI or a human wrote it. Type in any text and the tool will label it as either “very unlikely, unlikely, unclear if it is, possibly, or likely” AI-generated.
But much like ChatGPT itself, which was trained on a huge trove of digitized books, newspapers and online writings yet often spits out falsehoods or nonsense, the classifier offers little insight into how it arrives at a result. “We don’t fundamentally know what kind of pattern it pays attention to, or how it works internally,” Leike said. “There’s really not much we could say at this point about how the classifier actually works.”