USA TODAY International Edition

ChatGPT raises misinformation concern

Lightning-fast tool can’t tell fact from fiction

Jennifer Jolly

What does ChatGPT stand for?

In less time than it takes me to write this sentence, ChatGPT, the free artificial intelligence computer program that writes human-sounding answers to just about anything you ask, will spit out a 500-word essay explaining quantum physics with literary flair.

“Once upon a time, there was a strange and mysterious world that existed alongside our own,” the response begins. It continues with a physics professor sitting alone in his office on a dark and stormy night (of course), “his mind consumed by the mysteries of quantum physics. ... It was a power that could bend the very fabric of space and time, and twist the rules of reality itself,” the chat window reads.

Wow, the ChatGPT answer is both eerily entertaining and oddly educational. In the end, the old professor figures it all out and shares his knowledge with the world. The essay is cool and creepy, especially these last two sentences:

“His theory changes the way we see the world and leads to new technologies, but also unlocks a door to powers beyond human comprehension, that can be used for good or evil. It forever changes the future of humanity.”

Yes, it could be talking about itself.

ChatGPT (Generative Pre-trained Transformer) is the latest viral sensation out of San Francisco-based startup OpenAI.

It’s a free online tool trained on millions of pages of writing from all corners of the internet to understand and respond to text-based queries in just about any style you want.

When I ask it to explain ChatGPT to my mom, it cranks out, “ChatGPT is a computer program that uses artificial intelligence (AI) to understand and respond to natural language text, just like a human would. It can answer questions, write sentences, and even have a conversation with you. It’s like having your own personal robot that can understand and talk to you!”

The easiest way to get a picture of its powers is to try it out for yourself. It’s free; you just need to register for an account, then ask it a question.

You can even prompt it to write something for you – anything really and in any style – from a poem using your child’s name to song lyrics about your dog, business taglines, essays, research papers, and even software code. It types out responses in a few seconds and follows up in the same thread if you don’t like the first answer.

ChatGPT launched as a prototype to the public Nov. 30, 2022. Within five days, more than a million people were using it.

By comparison, it took Netflix 31⁄

2 years to get that many people on board. Facebook didn’t crack its first million people for 10 months, and Spotify went five months before it reached that million user mark.

Microsoft confirmed on Monday that it’s making a “multiyear, multibilli­ondollar” investment in OpenAI, and while they didn’t disclose the specific dollar amount – it’s reportedly a $ 10 billion deal.

How does ChatGPT work?

ChatGPT was trained on writing that already exists on the internet, up to the year 2021. When you type in your question or prompt, it reacts with lightning speed.

“I am a machine learning model that has been trained on a large dataset of text which allows me to understand and respond to text-based inputs,” it replies when I ask it to explain how it works.
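For readers who want to tinker beyond the chat window, OpenAI also sells access to the same family of GPT models through a paid API. What follows is a minimal, illustrative sketch in Python, not OpenAI’s own example: it assumes the official `openai` package in its pre-1.0 form, an API key stored in an environment variable, and the GPT-3 model that was available through the API when ChatGPT launched. The prompt is made up.

```python
# Minimal sketch of querying an OpenAI GPT model from code, assuming the
# official `openai` Python package (pre-1.0 interface) and an API key set in
# the OPENAI_API_KEY environment variable. The prompt is illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3 family model available via the API at the time
    prompt="Explain what ChatGPT is in two sentences, as if talking to my mom.",
    max_tokens=120,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```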

The idea behind this new generative AI is that it could reinvent everything from search engines like Google to digital assistants like Alexa and Siri. It could also do most of the heavy lifting on information writing, content creation, customer service chatbots, research, legal documents, and much more.

“(OpenAI) will provide vastly new potential … at a scale and speed which we’ve never seen before, reinventing pretty much everything about our lives and careers,” says Neil Voss, co-founder of augmented-reality startup Anima. Voss uses OpenAI’s system to create AR-based “creatures” that can talk to their owners.

He and many others predict OpenAI’s latest tools will become the most significant technology since the launch of the smartphone, with their potential already being likened to the early days of the internet.

“Very quickly, AI will make not only finding information (much easier) but understanding it – reshaping it and making it useful – much faster,” Voss explains in an email.

In a follow-up question about how we’ll use ChatGPT and this kind of next-generation AI in the next year or two, the program highlighted several applications, including health care, “for things like diagnostics, drug discovery, and personalized treatment plans,” and content creation for “human-like text, audio, creative writing, news articles, video scripts, and more.”

While some worry computers will push people out of jobs, it’s the bot’s last sentence that raises the most serious red flags.

What are the dangers of ChatGPT?

ChatGPT parrots back existing content, and although it “sounds” authoritative, it can be flat-out wrong. (We all know by now that not everything you read on the internet is true, right?)

AI can’t yet tell fact from fiction, and ChatGPT was trained on data that’s already two years old. If you ask it a timely question, such as what the most recent iPhone model is, it says it’s the 13.

“In the past, AI has been used largely for predictions or categorization. ChatGPT will actually create new articles, news items or blog posts, even school essays, and it’s pretty hard to distinguish between them and real, human-created writing,” Helen Lee Bouygues tells me over email.

Bouygues is the president and founder of the Reboot Foundation, which advocates for critical thinking to combat the rise of misinformation. She’s worried new tech like ChatGPT could spread misinformation or fake news, generate bias, or get used to spread propaganda.

“My biggest concern is that it will make people dumber – particularly young people, while computers get smarter,” Bouygues explains. “Why? Because more and more people will use these tools like ChatGPT to answer questions or generally engage in the world without richer, more reflective kinds of thinking. Take social media. People click, post, and retweet articles and content that they have not read. ChatGPT will make this worse by making it easier for people not to think. Instead, it will be far too easy to have the bot conjure their thoughts and ideas.”

OpenAI’s use and content policies specifically warn against deceptive practices, including promoting dishonesty, deceiving or manipulating users, or trying to influence politics. The policies also state that when sharing content, “all users should clearly indicate that it is generated by AI ‘in a way no one could reasonably miss or misunderstand.’”

But it’s humans we’re talking about. And honesty? Sigh.

BuzzFeed announced Thursday that it will use ChatGPT’s maker, OpenAI, to help create content. News site CNET is under fire for using AI to create informational articles in its Money section without full disclosure and transparency.

A recent survey of 1,000 college students in America by the online magazine Intelligent.com also reports nearly 1 in 3 have used ChatGPT on written assignments, even though most think it’s “cheating.”

New York City and Seattle school districts recently banned ChatGPT from their devices and networks, and many colleges are considering similar steps.

How to detect AI-written content

In a statement, an OpenAI spokesperson told us via email that the company is already working on a tool to help identify text generated by ChatGPT. It’s apparently similar to “an algorithmic ‘watermark,’ or sort of invisible flag embedded into ChatGPT’s writing that can identify its source,” according to CBS.

“We’ve always called for transparency around the use of AI-generated text. Our policies require that users be upfront with their audience when using our API and creative tools like DALL-E and GPT-3,” OpenAI’s statement reiterates.

A senior at Princeton recently created an app called GPTZero to spot whether AI wrote an essay. But it’s not ready for the masses yet.

I used an AI content detector called Writer, and it spotted most of the ChatGPT-written passages I fed it. But some fear AI’s ability to mimic humans will move faster than tech’s ability to police it.
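Detectors like these generally lean on statistical tells; GPTZero, for example, reportedly scores how predictable a passage looks to a language model. Below is a minimal sketch of that perplexity idea, assuming the Hugging Face transformers and torch packages and the small GPT-2 model. It is an illustration of the general technique, not how Writer or GPTZero actually works, and any cutoff score would need real calibration.

```python
# Rough sketch of the perplexity signal that detectors such as GPTZero
# reportedly use: text a language model finds highly predictable (low
# perplexity) is more likely to be machine-generated. Assumes the Hugging Face
# `transformers` and `torch` packages and the small GPT-2 model; this is an
# illustration, not the method used by Writer or GPTZero.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score the text with the model; the returned loss is the average
    # negative log-likelihood per token, so exp(loss) is the perplexity.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return torch.exp(outputs.loss).item()

print(perplexity("Quantum physics describes the behavior of matter at very small scales."))
# Lower scores suggest more "machine-like" text; higher scores look more human.
```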

Still, the cat’s out of the bag, and there’s no wrestling it back in.

“This isn’t evil,” says Neil Voss. “On the other side of this are accomplishments we’ve only been able to dream of, but getting there is going to be difficult. It is up to us to apply that potential to things that are worthwhile, meaningful, and human.”

When I ask ChatGPT to write a sentence about the ethical implications of ChatGPT in the style of tech journalist Jennifer Jolly, it says, “ChatGPT is a technological tour-de-force, but it also raises important ethical considerations, like how to ensure that this powerful tool is used responsibly and for the greater good.”

I couldn’t have said it better myself.

ChatGPT is a conversational artificial intelligence software application developed by OpenAI. LIONEL BONAVENTURE/AFP VIA GETTY IMAGES
