USA TODAY International Edition
ChatGPT raises misinformation concern
Lightning-fast tool can’t tell fact from fiction
In less time than it takes me to write this sentence, ChatGPT, the free artificial intelligence computer program that writes human-sounding answers to just about anything you ask, will spit out a 500-word essay explaining quantum physics with literary flair.
“Once upon a time, there was a strange and mysterious world that existed alongside our own,” the response begins. It continues with a physics professor sitting alone in his office on a dark and stormy night (of course), “his mind consumed by the mysteries of quantum physics. ... It was a power that could bend the very fabric of space and time, and twist the rules of reality itself,” the chat window reads.
Wow, the ChatGPT answer is both eerily entertaining and oddly educational. In the end, the old professor figures it all out and shares his knowledge with the world. The essay is cool and creepy, especially these last two sentences:
“His theory changes the way we see the world and leads to new technologies, but also unlocks a door to powers beyond human comprehension, that can be used for good or evil. It forever changes the future of humanity.”
Yes, it could be talking about itself.
ChatGPT (Generative Pre-trained Transformer) is the latest viral sensation out of San Francisco-based startup OpenAI.
It’s a free online tool trained on millions of pages of writing from all corners of the internet to understand and respond to text-based queries in just about any style you want.
When I ask it to explain ChatGPT to my mom, it cranks out, “ChatGPT is a computer program that uses artificial intelligence (AI) to understand and respond to natural language text, just like a human would. It can answer questions, write sentences, and even have a conversation with you. It’s like having your own personal robot that can understand and talk to you!”
The easiest way to get a picture of its powers is to try it out for yourself. It’s free; you just need to register for an account, then ask it a question.
You can even prompt it to write something for you – anything, really, and in any style – from a poem using your child’s name to song lyrics about your dog, business taglines, essays, research papers, and even software code. It types out responses in a few seconds and follows up in the same thread if you don’t like the first answer.
ChatGPT launched as a prototype to the public Nov. 30, 2022. Within five days, more than a million people were using it.
By comparison, it took Netflix 3½ years to get that many people on board. Facebook didn’t crack its first million people for 10 months, and Spotify went five months before it reached that million-user mark.
Microsoft confirmed on Monday that it’s making a “multiyear, multibillion-dollar” investment in OpenAI. The companies didn’t disclose the specific dollar amount, but it’s reportedly a $10 billion deal.
How does ChatGPT work?
ChatGPT was trained on writing that already existed on the internet through 2021. When you type in your question or prompt, it reacts with lightning speed.
“I am a machine learning model that has been trained on a large dataset of text which allows me to understand and respond to text-based inputs,” it replies when I ask it to explain how it works.
The idea behind this new generative AI is that it could reinvent everything from search engines like Google to digital assistants like Alexa and Siri. It could also do most of the heavy lifting on informational writing, content creation, customer service chatbots, research, legal documents, and much more.
“(OpenAI) will provide vastly new potential … at a scale and speed which we’ve never seen before, reinventing pretty much everything about our lives and careers,” says Neil Voss, co-founder of augmented-reality startup Anima. Voss uses OpenAI’s system to create AR-based “creatures” that can talk to their owners.
He and many others predict OpenAI’s latest tools will become the most significant since the launch of the smartphone, with potential already being likened to the early days of the internet.
“Very quickly, AI will make not only finding information (much easier) but understanding it – reshaping it and making it useful – much faster,” Voss explains in an email.
In a follow-up question about how we’ll use ChatGPT and this kind of next-generation AI in the next year or two, the program highlighted several applications, including health care, “for things like diagnostics, drug discovery, and personalized treatment plans,” and content creation for “human-like text, audio, creative writing, news articles, video scripts, and more.”
While some worry computers will push people out of jobs, it’s the bot’s last sentence that raises the most serious red flags.
What are the dangers of ChatGPT?
ChatGPT parrots back existing content, and although it “sounds” authoritative, it can be flat-out wrong. (We all know by now that not everything you read on the internet is true, right?)
AI can’t yet tell fact from fiction, and ChatGPT was trained on data that’s already two years old. Ask it a timely question, such as which iPhone model is the most recent, and it says it’s the 13.
“In the past, AI has been used largely for predictions or categorization. ChatGPT will actually create new articles, news items or blog posts, even school essays, and it’s pretty hard to distinguish between them and real, human-created writing,” Helen Lee Bouygues tells me over email.
Bouygues is the president and founder of the Reboot Foundation, which advocates for critical thinking to combat the rise of misinformation. She’s worried new tech like ChatGPT could spread misinformation or fake news, generate bias, or get used to spread propaganda.
“My biggest concern is that it will make people dumber – particularly young people, while computers get smarter,” Bouygues explains. “Why? Because more and more people will use these tools like ChatGPT to answer questions or generally engage in the world without richer, more reflective kinds of thinking. Take social media. People click, post, and retweet articles and content that they have not read. ChatGPT will make this worse by making it easier for people not to think. Instead, it will be far too easy to have the bot conjure their thoughts and ideas.”
OpenAI’s use and content policies specifically warn against deceptive practices, including promoting dishonesty, deceiving or manipulating users, or trying to influence politics. The policies also state that, when sharing content, all users should clearly indicate that it is generated by AI “in a way no one could reasonably miss or misunderstand.”
But it’s humans we’re talking about. And honesty? Sigh.
BuzzFeed announced Thursday that it will partner with ChatGPT maker OpenAI to create content. News site CNET is under fire for using AI to create informational articles in its Money section without full disclosure and transparency.
A recent survey of 1,000 American college students by the online magazine Intelligent.com also reports nearly 1 in 3 have used ChatGPT on written assignments, even though most think it’s “cheating.”
New York City and Seattle school districts recently banned ChatGPT from their devices and networks, and many colleges are considering similar steps.
How to detect AI-written content
An OpenAI spokesperson told us via email that the company is already working on a tool to help identify text generated by ChatGPT. It’s apparently similar to “an algorithmic ‘watermark,’ or sort of invisible flag embedded into ChatGPT’s writing that can identify its source,” according to CBS.
“We’ve always called for transparency around the use of AI-generated text. Our policies require that users be upfront with their audience when using our API and creative tools like DALL-E and GPT-3,” OpenAI’s statement reiterates.
A senior at Princeton recently created an app called GPTZero to spot whether AI wrote an essay. But it’s not ready for the masses yet.
I used an AI content detector called Writer, and it spotted most of the ChatGPT-generated text I fed it. But some fear AI’s ability to mimic humans will move faster than tech’s ability to police it.
Still, the cat’s out of the bag, and there’s no wrestling it back in.
“This isn’t evil,” says Neil Voss. “On the other side of this are accomplishments we’ve only been able to dream of, but getting there is going to be difficult. It is up to us to apply that potential to things that are worthwhile, meaningful, and human.”
When I ask ChatGPT to write a sentence about the ethical implications of ChatGPT in the style of tech journalist Jennifer Jolly, it replies, “ChatGPT is a technological tour de force, but it also raises important ethical considerations, like how to ensure that this powerful tool is used responsibly and for the greater good.”
I couldn’t have said it better myself.