San Antonio Express-News (Sunday)

Reasons to freak out about AI advances

Michael Taylor is a San Antonio Express-News columnist, author of “The Financial Rules for New College Graduates” and host of the podcast “No Hill for a Climber.” michael@michaelthesmartmoney.com | twitter.com/michael_taylor

The year 2023 marked the beginning of our species’ journey with artificial intelligence. Nothing will ever be the same.

One optimistic analogy is that we’re in the equivalent of the early 1980s, those days when computers began to shift from being an obscurely accessible research tool to the dominant method by which everyone interacts with everyone and everything. There are some downsides to the computer age, sure. But non-Luddites generally don’t think we should go back to 1979.

A pessimistic analogy is that we’re in the equivalent of the early 1930s, before nuclear technology began to shift from being an obscurely theorized physics idea to, within a decade, a technology by which humans could efficiently wipe out the species.

I’m considering all this after a bunch of business news in the past few weeks related to the companies leading our AI revolution.

Last month, the nonprofit board of OpenAI — which fueled this revolution with its viral launch a year ago of ChatGPT — briefly fired then rehired CEO Sam Altman after most of the company’s employees threatened to move over to Microsoft Corp. Microsoft shares in OpenAI’s profit but launched its own AI assistant, Copilot, earlier in 2023.

In the last week of November, Amazon.com Inc. announced its own AI assistant named “Q.”

And last week, Google parent Alphabet launched Gemini. Early reports are that it’s a genuinely powerful competitor to GPT-4.

The race to build natural language AI is fully engaged, and many other forms of the technology are also rapidly evolving. Stock markets have been soaring in December, fueled by optimism about advancements in the field.

I’m most interested in AI ethics. Like, is this accelerating race to develop stronger AI better or worse for people? What’s freaking me out is that the people who understand this technology best keep ringing alarm bells like it’s the 1930s rather than the 1980s.

Cool kids’ AI lingo

Here are two prominent AI terms you should know that together encapsulate the rising threat.

Everyone in and around AI casually refers to their “p(doom),” which is cute shorthand for “My Personal View Of The Probability That AI Causes Catastrophic Harm To Humans.” The answer that experts generally give is a shockingly high p(doom), meaning catastrophic harm is very likely, in their opinion.

Dr. Althea Delwiche teaches the course “AI, Communication and Creativity” at Trinity University in San Antonio. She told me that when she recently surveyed her class, “p(doom) estimates ranged from 2 to 50%. On average, the consensus of the class was that there is a 15% chance of AI bringing about some sort of disastrous scenario for humans.”

And, she added, “My p(doom) is closer to 25%.”

That feels, uh, bad.

The second cool lingo that’s developed among this set is “e/acc,” which stands for effective accelerationism. It posits that everyone should just move as fast as possible in developing AI, without limits and regulations, toward a techno-future.

“Move fast and break things” has long been a Silicon Valley motto, but e/acc applies that in an extreme techno-libertarian way to the development of AI technology that many of its own experts and adherents believe could break, well, the human species. To be fair, e/acc has developed as a response/backlash to industry experts’ caution, so disagreement within the industry exists.

Precious few governance or regulatory constraints exist to slow the development of AI at this point. The combination of high collective p(doom) and adoption of an e/acc approach to rapidly developing technology feels especially irresponsible to me, an admitted non-technologist.

Exponential growth

As a non-technologist, my main personal insight into artificial intelligence is via analogy. I’ve been writing and teaching for years about how compound interest, a key theme of personal finance, is underappreciated because our limited and linear human brains can’t conceive of the speed and impact of exponential growth mathematics once an exponential thing acquires a certain momentum.
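To see how badly linear intuition misjudges compounding, here is a minimal sketch in Python. The figures are hypothetical, chosen only for illustration: a $10,000 balance earning 7% a year versus one gaining a flat $700 a year.

# A minimal sketch with hypothetical figures: the same $10,000 start,
# one balance adding a flat $700 every year (linear), the other
# earning 7% on its running total (compound).

PRINCIPAL = 10_000
RATE = 0.07        # 7% annual return, compounded yearly
FLAT_GAIN = 700    # the first year's dollar gain, frozen forever

linear = compound = float(PRINCIPAL)
for year in range(1, 41):
    linear += FLAT_GAIN
    compound *= 1 + RATE
    if year % 10 == 0:
        print(f"Year {year}: linear ${linear:,.0f} vs. compound ${compound:,.0f}")

# Year 10: linear $17,000 vs. compound $19,672
# Year 20: linear $24,000 vs. compound $38,697
# Year 30: linear $31,000 vs. compound $76,123
# Year 40: linear $38,000 vs. compound $149,745

For the first decade the two balances look like neighbors. By year 40, the compounding one is roughly four times the linear one, and the gap itself keeps accelerating.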

Humans are shocked, every time, at how huge viruses, money, social media views, computing speeds and, yes, artificial intelligence become once exponential growth takes off.

Exponential-growth things start out looking small and innocuous, like the farthest-away visible vehicle, barely perceptible on a desert horizon. What we don’t tend to realize is that this vehicle is moving at 300 mph, barreling down on us and accelerating throughout its approach. By the time it gets close enough for us to notice details and start to understand its potential impact, it roars past us, unreachable and unstoppable.

That, I fear, is artificial intelligence right now. And 2023 is the year we non-experts spotted AI far away on the horizon, an interesting new semi-mirage with seemingly plenty of time for humanity to react.

But that’s not how exponential growth works, and we probably have very little time.

Artificial intelligence has not yet reordered everything on the planet, but evidence is mounting that, in fact, very soon nothing will be the same.

As professor Delwiche put it to me: “People in other sectors of society are just beginning to grasp how incredibly disruptive these technologies might become. As a result of recent high-profile announcements from Meta and Google, combined with the propagation of AI-related content on TikTok, Instagram, X, and Facebook, we have reached a tipping point” in people’s awareness.

I would personally put my awareness at, oh, roughly the first week of December 2023. Maybe yours is earlier? Or maybe yours is today.

Binary thinking

I have some further fears about the future of AI and our inability to make it humane.

Computer engineering, at its root, is about binary thinking. Ones and zeros. Successful programmers thrive in a world of pure logic, sorting out puzzles through ever-more complex binaries. While the Silicon Valley folks building the AI future as we speak are extremely good at this kind of thinking, are there enough people collectively in that world who excel at the other kinds of thinking? The kind that values poetry, empathy and ambiguity? In short, where does humanity get valued in the race to develop the world’s most powerful super-intelligence?

A simple diagnosis of our struggles as a society in 2023 is our discomfort with uncertainty. I mean, our own political system is barely surviving the rise of Twitter bot farms, which exploit our tendency to prefer Manichean approaches to conflict.

If AI is on the cusp of reordering basically everything — which I think it is — who will make the case for the irrational? The imperfect? The human?

Delwiche strikes an optimistic note that her Trinity students in the arts and humanities are actually finding niches in the industry. Despite the professor’s high p(doom), she remains excited in a way that I struggle to match: “While mass extinction is just one theoretically possible outcome of artificial intelligence, these tools are already being used in countless ways to improve people’s lives.”

I appreciate the latter part of that phrase. It’s the first part, the mass extinction part, that’s giving me pause. I’ve only begun to observe AI on the far-off horizon this year, but our AI future — for better and worse — is accelerating toward us.

P.S. If you would like to freak yourself out by reading the thoughts of one of the leading AI alarmists, published months before the release of GPT-4, do a Google search for “Eliezer Yudkowsky AGI ruin.” You may never sleep soundly again, assuming you care about human survival.

The Google DeepMind website is seen last week on a laptop screen. Alphabet’s Google said Gemini is its largest, most capable and flexible AI model to date. (Gabby Jones/Bloomberg)

The emergence of generative AI systems like OpenAI’s ChatGPT dazzles the world but raises fears about the risks they pose. (Michael Dwyer/Associated Press)
