Albuquerque Journal

Lifesaver or job killer? Why AI tools like ChatGPT are so polarizing

By Will Oremus

If you listen to its boosters, artificial intelligence is poised to revolutionize nearly every facet of life for the better. A tide of new, cutting-edge tools is already demolishing language barriers, automating tedious tasks, detecting cancer and comforting the lonely.

A growing chorus of doomsayers, meanwhile, agrees AI is poised to revolutionize life — but for the worse. It is absorbing and reflecting society’s worst biases, threatening the livelihoods of artists and white-collar workers, and perpetuating scams and disinformation, they say.

The latest wave of AI has the tech industry and its critics in a frenzy. So-called generative AI tools such as ChatGPT, Replika and Stable Diffusion, which use specialized software trained on vast amounts of data to create humanlike text, images, voices and videos, seem to be rapidly blurring the lines between human and machine.

As sectors ranging from education to health care to insurance to marketing consider how AI might reshape their businesses, a crescendo of hype has given rise to wild hopes and desperate fears. Fueling both is the sense that machines are getting too smart, too fast — and could someday slip beyond our control.

“What nukes are to the physical world,” tech ethicist Tristan Harris recently proclaimed, “AI is to everything else.”

The benefits and downsides are real, experts say. But for now, the promise and perils of generative AI may be more modest than headlines make them seem.

“The combination of fascination and fear, or euphoria and alarm, is something that has greeted every new technological wave since the first all-digital computer,” said Margaret O’Mara, a professor of history at the University of Washington. As with past technological shifts, she added, today’s AI models could automate certain everyday tasks, obviate some types of jobs, solve some problems and exacerbate others, but “it isn’t going to be the singular force that changes everything.”

Artificial intelligence and chatbots are not new. Various forms of AI already power TikTok feeds, Spotify’s personalized playlists, Tesla’s Autopilot systems, pharmaceutical drug development and facial recognition systems used in criminal investigations. Simple computer chatbots have been around since the 1960s and are widely used for online customer service.

What’s new is the fervor surrounding generative AI, a category of AI tools that draw on oceans of data to create their own content — art, songs, essays, even computer code — rather than simply analyzing or recommending content created by humans. While the technology behind generative AI has been brewing for years in research labs, startups and companies have only recently begun releasing these tools to the public.

Free tools such as OpenAI’s ChatGPT chatbot and DALL-E 2 image generator have captured imaginations as people share novel ways of using them and marvel at the results. Their popularity has the industry’s giants, including Microsoft, Google and Facebook, racing to incorporate similar tools into some of their most popular products, from search engines to word processors. But it seems for every success story, there’s a nightmare scenario.

ChatGPT’s facility for drafting professional-sounding, grammatically correct emails has made it a daily timesaver for many, empowering people who struggle with literacy. But Vanderbilt University used ChatGPT to write a collegewide email offering generic condolences in response to a shooting at Michigan State, enraging students.

ChatGPT and other AI language tools can also write computer code, devise games, and distill insights from data sets. But there’s no guarantee that code will work, the games will make sense or the insights will be correct. Microsoft’s Bing AI bot has already been found to give false answers to search queries, and early iterations became combative with users. A game that ChatGPT seemingly invented turned out to be a copy of a game that already existed.

GitHub Copilot, an AI coding tool from OpenAI and Microsoft, has quickly become indispensable to many software developers, predicting their next lines of code and suggesting solutions to common problems. Yet its solutions aren’t always correct, and it can introduce faulty code if developers aren’t careful.
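To make that risk concrete, here is a hypothetical Python sketch — not actual Copilot output — of the kind of plausible-looking completion an assistant might suggest, with the subtle flaw a human reviewer has to catch. The suggested query is assembled by string interpolation, so it runs fine on ordinary input but is open to SQL injection; the parameterized version below it is the fix a careful developer would make.

    import sqlite3

    def find_user_unsafe(conn, name):
        # The kind of completion an assistant might suggest: the query is
        # built by string interpolation. It works on ordinary names, but a
        # value like "x' OR '1'='1" matches every row (SQL injection).
        cur = conn.execute(f"SELECT id, name FROM users WHERE name = '{name}'")
        return cur.fetchall()

    def find_user_safe(conn, name):
        # The careful rewrite: a parameterized query, so the input is
        # treated as data rather than as part of the SQL statement.
        cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
        return cur.fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
        conn.executemany("INSERT INTO users VALUES (?, ?)",
                         [(1, "alice"), (2, "bob")])
        hostile = "x' OR '1'='1"
        print(find_user_unsafe(conn, hostile))  # leaks both rows
        print(find_user_safe(conn, hostile))    # returns [], as intended

Both functions execute without error, which is exactly the point: nothing about the flawed version fails visibly until someone supplies hostile input.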

Thanks to biases in the data it was trained on, ChatGPT’s outputs can be not just inaccurate but also offensive. In one infamous example, ChatGPT composed a short software program that suggested that an easy way to tell whether someone would make a good scientist was to simply check whether they are both white and male.
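Press accounts of that incident described output roughly like the following Python function; this is a reconstruction from those reports, not the verbatim text ChatGPT produced.

    def is_good_scientist(race, gender):
        # Reconstructed from press reports: the model reduced scientific
        # merit to two demographic checks, echoing biases in its training data.
        return race == "white" and gender == "male"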
