The Oklahoman

Does GPT-4 offer future glimpse of the internet?

New AI model ‘one step closer to life imitating art’

- Kelvin Chan

LONDON – The company behind the ChatGPT chatbot has rolled out its latest artificial intelligence model, GPT-4, in the next step for a technology that’s caught the world’s attention.

The new system can figure out tax deductions and answer questions like a Shakespearean pirate, for example, but it still “hallucinates” facts and makes reasoning errors.

Here’s a look at San Francisco-based startup OpenAI’s latest improvement on the generative AI models that can spit out readable text and unique images:

What’s new?

OpenAI says GPT-4 “exhibits human-level performance.” It’s much more reliable, creative and can handle “more nuanced instructions” than its predecessor system, GPT-3.5, which ChatGPT was built on, OpenAI said in its announcement.

In an online demo Tuesday, OpenAI President Greg Brockman ran through scenarios showing off GPT-4’s capabilities, which appeared to be a radical improvement on previous versions.

He demonstrated how the system could quickly come up with the proper income tax deduction after being fed reams of tax code – something he couldn’t figure out himself.

“It’s not perfect, but neither are you. And together it’s this amplifying tool that lets you just reach new heights,” Brockman said.

Why does it matter?

Generative AI technology like GPT-4 could be the future of the internet, at least according to Microsoft, which has invested at least $1 billion in OpenAI and made a splash by integrating AI chatbot tech into its Bing search engine.

It’s part of a new generation of machine-learning systems that can converse, generate readable text on demand and produce novel images and video based on what they’ve learned from a vast database of digital books and online text.

These new AI breakthroughs have the potential to transform the internet search business long dominated by Google, which is trying to catch up with its own AI chatbot, and numerous professions.

“With GPT-4, we are one step closer to life imitating art,” said Mirella Lapata, professor of natural language processing at the University of Edinburgh. She referred to the TV show “Black Mirror,” which focuses on the dark side of technology.

“Humans are not fooled by the AI in ‘Black Mirror’ but they tolerate it,” Lapata said. “Likewise, GPT-4 is not perfect, but paves the way for AI being used as a commodity tool on a daily basis.”

What exactly are the improvemen­ts?

GPT-4 is a “large multimodal model,” which means it can be fed both text and images, which it uses to come up with answers.

In one example posted on OpenAI’s website, GPT-4 is asked, “What is unusual about this image?” Its answer: “The unusual thing about this image is that a man is ironing clothes on an ironing board attached to the roof of a moving taxi.”

GPT-4 is also “steerable,” which means that instead of getting an answer in ChatGPT’s “classic” fixed tone and verbosity, users can customize it by asking for responses in the style of a Shakespearean pirate, for instance.

In his demo, Brockman asked both GPT-3.5 and GPT-4 to summarize in one sentence an article explaining the difference between the two systems. The catch was that every word had to start with the letter G.

GPT-3.5 didn’t even try, spitting out a normal sentence. The newer version swiftly responded: “GPT-4 generates groundbreaking, grandiose gains, greatly galvanizing generalized AI goals.”

How well does it work?

ChatGPT can write silly poems and songs or quickly explain just about anything found on the internet.

It also gained notoriety for results that could be way off, such as confidently providing a detailed but false account of the Super Bowl game days before it took place, or being disparaging to users.

OpenAI acknowledged that GPT-4 still has limitations and warned users to be careful. GPT-4 is “still not fully reliable” because it “hallucinates” facts and makes reasoning errors, it said.

“Great care should be taken when using language model outputs, particularly in high-stakes contexts,” the company said, though it added that hallucinations have been sharply reduced.

Experts also advised caution. “We should remember that language models such as GPT-4 do not think in a human-like way, and we should not be misled by their fluency with language,” said Nello Cristianini, professor of artificial intelligence at the University of Bath.

Another problem is that GPT-4 does not know much about anything that happened after September 2021, because that was the cutoff date for the data it was trained on.

Are there safeguards?

OpenAI says GPT-4’s improved capabilities “lead to new risk surfaces,” so it has improved safety by training it to refuse requests for sensitive or “disallowed” information.

It’s less likely to answer questions on, for example, how to build a bomb or buy cheap cigarettes.

Still, OpenAI cautions that while “eliciting bad behavior” from GPT is harder, “doing so is still possible.”

LIONEL BONAVENTURE/AFP VIA GETTY IMAGES – GPT-4’s arrival has been highly anticipated since ChatGPT burst onto the scene in late November, wowing users with capabilities that were based on an older version of OpenAI’s technology.
