The Mercury News Weekend

AI makes mistakes, but could it destroy us?

- Larry Magid, Digital Crossroads

Before I get to the potentially deadly serious part of today's column, I'd like to start on the lighter side. Lighter, that is, unless you happen to be attorney Steven A. Schwartz.

In representing a man named Roberto Mata who said he was injured aboard an Avianca flight, Schwartz reportedly filed a 10-page legal document, citing previous cases, including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines. Just to be sure, the lawyer asked ChatGPT to verify that the cases were real.

It said that they were.

Not surprisingly, Avianca's lawyers, along with the judge, did their own research but couldn't find references to the cases cited by Schwartz. As it turned out, Schwartz, a veteran attorney, had used ChatGPT for his legal research, which resulted in citations to cases that never existed. Schwartz later told the court that it was the first time he had used ChatGPT and “therefore was unaware of the possibility that its content could be false.”

Fortunately, opposing counsel and the judge found the errors before anything irreversible occurred. I don't know the ultimate outcome of Mata v. Avianca, but I trust the verdict will be based on fact rather than fiction.

AI chat makes mistakes

Schwartz learned what I and millions of other users of generative AI already know. These chatbots can be very useful, but they can also make up information that seems to be true but isn't. I occasionally use ChatGPT to find information, but I always verify it before quoting it or relying on it. In my experience, almost everything it creates appears to be true, because it reaches logical conclusions based on the information it has access to. But just because it appears to be logical doesn't mean it's true. Because I have written for several of America's leading newspapers, it is “logical” that I may have written for the Wall Street Journal and USA Today, as ChatGPT sometimes says. But I haven't.

I don't know if OpenAI, the company behind ChatGPT, has issued an advisory for lawyers, but it has published Educator Considerations for ChatGPT, which in part says that “it may fabricate source names, direct quotations, citations, and other details.”

Existential risk

And now for the more serious news story about generative AI. You might have heard about the statement organized by the Center for AI Safety and signed by a large cohort of AI scientists and other leading figures in the field, including OpenAI CEO Sam Altman; Ilya Sutskever, OpenAI's chief scientist; and Lila Ibrahim, COO of Google DeepMind.

These experts, many with a vested interest in developing and promulgating generative AI, agree that the risk is real and that governments need to consider ways to regulate and rein in the very industry they are part of. The statement is only 22 words, but still quite chilling: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The Center for AI Safety pulls no punches. In its risk statement, it acknowledges that “AI has many beneficial applications,” yet “it can also be used to perpetuate bias, power autonomous weapons, promote misinformation, and conduct cyberattacks. Even as AI systems are used with human involvement, AI agents are increasingly able to act autonomously to cause harm.” Looking to the future, these experts warn that “when AI becomes more advanced, it could eventually pose catastrophic or existential risks.”

We live with other existential risks

As a society, we've become used to hearing about existential risks. I was in elementary school during the “duck and cover” drills of the 1950s and 1960s, when we practiced ducking under school desks, as if that would actually protect us from a nuclear strike. If you need evidence, search for “Bert the Turtle” to view the cartoons the government used to convince children to “duck and cover.”

The COVID panic is behind us, but it was an example of a very real threat, contributing to the deaths of nearly 7 million people, according to the World Health Organization. Even if COVID remains under control through vaccinations, masking and drugs like Paxlovid, pandemics remain a serious risk. Although we are no longer ducking under our desks, we are hearing renewed warnings about the use of nuclear weapons.

And the folks from the Center for AI Safety didn't even mention climate change, which is on the minds of many young people who worry whether Earth will remain habitable for people and other living things by the time they reach old age.

I worry about all of these things and hate that I'm now being told to add generative AI to the list of things that might destroy us, but I also have confidence that these problems are all fixable, or at least controllable in ways that can avoid catastrophic outcomes.

A note of optimism

We can't eliminate risks completely, but if we come together on a global basis, we can minimize them or learn to live with them. That requires a combination of efforts, including regulation, industry cooperation, technological solutions and buy-in from the general public. It also requires distinguishing between facts and conspiracy theories and focusing on real solutions.

Almost everyone in the AI community agrees with OpenAI CEO Sam Altman that governments have an important role to play in regulation. Speaking at a U.S. Senate committee hearing last month, Altman said, “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. … We want to work with the government to prevent that from happening.”

In some ways, today's AI is like the early days of the Industrial Revolution, which changed the nature of work and had an impact on our safety. An article in the Detroit News summarized the state of affairs during the period when automobiles were first introduced to American streets: “In the first decade of the 20th century there were no stop signs, warning signs, traffic lights, traffic cops, driver's education, lane lines, street lighting, brake lights, driver's licenses or posted speed limits.”

When it comes to generative AI, we need warning signs, traffic lights, traffic cops, driver's education and many other safeguards.

I'm glad to see leaders of the AI industry and many in government taking the risks seriously. Properly managed, AI can make the world a better and safer place. It can power incredible medical breakthroughs, help vastly reduce traffic deaths and empower creative people to be even more creative. But like other technologies, including fire, cars, kitchen knives and pharmaceuticals, it can also do harm if it is misused.

I'm both an optimist and a realist. The realist in me tells me that AI is here to stay and that there will be downsides to it. The optimist in me draws on decades of dealing with risks and the confidence that things will be OK, as long as we make the right decisions.
