Santa Fe New Mexican

GPT-4 said to blow ChatGPT out of the water; concerns grow

- By Drew Harwell and Nitasha Tiku

The artificial intelligence research lab OpenAI on Tuesday launched the newest version of its stunning language software, GPT-4, an advanced tool for analyzing images and mimicking human speech, pushing the technical and ethical boundaries of a rapidly proliferating wave of AI.

Its predecessor, ChatGPT, captivated and unsettled the public with its uncanny ability to generate elegant writing, unleashing a viral wave of college essays, screenplays and conversations — though it could only generate text, and it relied on an older generation of technology that hasn’t been cutting edge for more than a year.

GPT-4, in contrast, is a state-of-the-art system capable not just of generating words but of describing images in response to a person’s simple written commands. When shown a photo of a boxing glove hanging over a wooden seesaw with a ball on one side, for instance, a person can ask what will happen if the glove drops, and GPT-4 will respond that it would hit the seesaw and cause the ball to fly up.

The buzzy launch capped months of hype and anticipation over an AI program, known as a large language model, that early testers had claimed was remarkably advanced in its ability to reason and learn new things.

The developers pledged in a Tuesday blog post that the technology could further revolutionize work and life. But those promises have also fueled anxiety over how people will be able to compete for jobs outsourced to eerily refined machines or trust the accuracy of what they see online.

Officials with the San Francisco lab said GPT-4’s “multimodal” training across text and images would allow it to escape the chat box and more fully emulate a world of color and imagery, surpassing ChatGPT in its “advanced reasoning capabilities.” A person can submit an image to GPT-4 and it will caption it for them.

Microsoft has invested billions of dollars into OpenAI in hopes its technology will become a secret weapon for its workplace software, search engine and other online ambitions. But AI boosters say those uses may only skim the surface of what such AI can do, and that it could lead to business models and creative ventures no one can yet predict.

Rapid AI advances, coupled with the wild popularity of ChatGPT, have fueled a multibillion-dollar arms race over the future of AI dominance and transformed new-software releases into major spectacles.

OpenAI and Microsoft, which earlier this year released a GPT-powered chatbot in its Bing search tool, have moved aggressively to counter Google and other AI trailblazers on the belief that these tools could prove crucial to future industries.

But the frenzy has also sparked criticism that the companies are rushing to exploit an untested, unregulated and unpredictable technology that could deceive people, undermine artists’ work and lead to real-world harm.

AI language models often confidently offer wrong answers because they are designed to spit out cogent phrases, not actual facts. And because they have been trained on internet text and imagery, they have also learned to emulate human biases of race, gender, religion and class.

Such systems have inspired boundless optimism around this technology’s potential, with some seeing in its responses a sense of intelligence or sentience almost on par with humans. The systems, though — as critics and AI researchers are quick to point out — are merely repeating patterns and associations found in their training data without a clear understanding of what they’re saying or when they’re wrong.

Despite its unreliability, Silicon Valley sees massive economic potential in this type of AI because of how easy these models are to use. Anyone can write what’s known as a “prompt” in plain English into a chat box, allowing people who don’t know how to write code to communicate with machines the way computer programmers have for decades.

GPT-4 is expected to improve on some of those shortcomings, and AI evangelists such as the tech blogger Robert Scoble have argued that “GPT-4 is better than anyone expects.” But critics worry the advances could bring consequences of their own, such as helping create fake photos of nonexistent events or people doing things they never did.
