East Bay Times

AI decisions are primarily being made in a vacuum

- By Kevin Frazier. Kevin Frazier will join the Crump College of Law at St. Thomas University as an assistant professor starting this fall. He currently is a clerk on the Montana Supreme Court. ©2023 The Fulcrum. Distributed by Tribune Content Agency.

Who decided the world should be disrupted by AI? Do you recall receiving a voter pamphlet on the pros and cons of AI development and deployment?

Was I the only one who missed election day?

The truth of the matter is that the most impactful decisions about AI are being made by a few people with little to no input from the rest of us.

That is a recipe for unrest if I've ever heard one.

A couple dozen AI researchers think there's a chance that AI could lead to unprecedented human flourishing.

So, they have taken it upon themselves to develop ever more advanced AI models.

At the same time, they have freely admitted that they increasingly have limited control over the technology itself and its potential side effects.

Is it any surprise that more than a few folks feel disenchanted with a governing system that purports to give power to the people but, in practice, empowers computer scientists to more or less unilaterally throw society into a potential doom loop?

It's as if we've been asked what we wanted for dinner, answered, "Thai," and then were told we could choose between pepperoni and Canadian bacon.

That's not a choice.

That's not power. That's democratic gaslighting.

A functioning democracy should not leave decisions that may create irreversible harm for generations to a room of computer scientists.

In addition to allowing a small set of AI labs to introduce humankind-altering technology with no input from you and me, now our elected officials are asking these same unrepresentative and unelected tech leaders for advice on how best to regulate this emerging technology.

News from Washington, D.C., last week included headline after headline about Senator X consulting with tech leader Y. Missing from the headlines and, more importantly, from those meetings: representatives of the communities, foreign and domestic, that are going to bear the brunt of the good, the bad and the ugly generated by AI.

It's again worth noting that some of us, perhaps many of us, think AI should not have been introduced at this point or at least not at this scale.

If you're still with me and you still agree with me, you might be lamenting the fact that it's already too late.

We're at the "pepperoni or Canadian bacon" stage of this decision-making process, so whatever influence we wield now over the development of AI will have an insignificant impact on its long-term trajectory.

Worse, there's a chance that if we succeed in halting the deployment of AI models, China or (fill in the blank “bad guy” country) will just keep advancing their own models and eventually use those models against us in some war or economic contest.

Such arguments are flimsier than cheese-filled crust.

I'd rather live in a United States that has strong communities where people perform meaningful work, still use their critical thinking skills, and trust their social institutions than a United States that leads the world in AI.

In fact, I'd bet on that version of the United States to outlast and outcompete any other country that thinks technology is the key to human flourishin­g.

We need to shift the narrative from "how do we shape the development of AI?" to "when and under what conditions should we permit limited uses of AI?"

In the interim, it's fine for our officials to consult AI experts and leaders, but voters, not tech CEOs, should be the ones determining when and how AI changes our society.
