Lifesaver or job killer? Why AI tools like ChatGPT are so polarizing
If you listen to its boosters, artificial intelligence is poised to revolutionize nearly every facet of life for the better. A tide of new, cutting-edge tools is already demolishing language barriers, automating tedious tasks, detecting cancer and comforting the lonely.
A growing chorus of doomsayers, meanwhile, agrees AI is poised to revolutionize life — but for the worse. It is absorbing and reflecting society’s worst biases, threatening the livelihoods of artists and white-collar workers, and perpetuating scams and disinformation, they say.
The latest wave of AI has the tech industry and its critics in a frenzy. So-called generative AI tools such as ChatGPT, Replika and Stable Diffusion, which use special software to create humanlike text, images, voices and videos, seem to be rapidly blurring the lines between human and machine.
As sectors ranging from education to health care to insurance to marketing consider how AI might reshape their businesses, a crescendo of hype has given rise to wild hopes and desperate fears. Fueling both is the sense that machines are getting too smart, too fast — and could someday slip beyond our control.
“What nukes are to the physical world,” tech ethicist Tristan Harris recently proclaimed, “AI is to everything else.”
The benefits and downsides are real, experts say. But for now, the promise and perils of generative AI may be more modest than headlines make them seem.
“The combination of fascination and fear, or euphoria and alarm, is something that has greeted every new technological wave since the first all-digital computer,” said Margaret O’Mara, a professor of history at the University of Washington. As with past technological shifts, she added, today’s AI models could automate certain everyday tasks, obviate some types of jobs, solve some problems and exacerbate others, but “it isn’t going to be the singular force that changes everything.”
Artificial intelligence and chatbots are not new. Various forms of AI already power TikTok feeds, Spotify’s personalized playlists, Tesla’s Autopilot systems, pharmaceutical drug development and facial recognition systems used in criminal investigations. Simple computer chatbots have been around since the 1960s and are widely used for online customer service.
What’s new is the fervor surrounding generative AI, a category of AI tools that draw on oceans of data to create their own content — art, songs, essays, even computer code — rather than simply analyzing or recommending content created by humans. While the technology behind generative AI has been brewing for years in research labs, startups and companies have only recently begun releasing such tools to the public.
Free tools such as OpenAI’s ChatGPT chatbot and DALL-E 2 image generator have captured imaginations as people share novel ways of using them and marvel at the results. Their popularity has the industry’s giants, including Microsoft, Google and Facebook, racing to incorporate similar tools into some of their most popular products, from search engines to word processors. But for every success story, there seems to be a nightmare scenario.
ChatGPT’s facility for drafting professional-sounding, grammatically correct emails has made it a daily timesaver for many, empowering people who struggle with literacy. But Vanderbilt University used ChatGPT to write a college-wide email offering generic condolences in response to a shooting at Michigan State, enraging students.
ChatGPT and other AI language tools can also write computer code, devise games, and distill insights from data sets. But there’s no guarantee the code will work, the games will make sense or the insights will be correct. Microsoft’s Bing AI bot has already been found to give false answers to search queries, and early iterations became combative with users. A game that ChatGPT seemingly invented turned out to be a copy of a game that already existed.
GitHub Copilot, an AI coding tool from GitHub and OpenAI, has quickly become indispensable to many software developers, predicting their next lines of code and suggesting solutions to common problems. Yet its suggestions aren’t always correct, and it can introduce faulty code if developers aren’t careful.
Thanks to biases in the data it was trained on, ChatGPT’s outputs can be not just inaccurate but also offensive. In one infamous example, ChatGPT composed a short software program that suggested that an easy way to tell whether someone would make a good scientist was to simply check whether they are both white and male.