San Antonio Express-News (Sunday)
Reasons to freak out about AI advances
The year 2023 marked the beginning of our species’ journey with artificial intelligence. Nothing will ever be the same.
One optimistic analogy is that we’re in the equivalent of the early 1980s, those days when computers began to shift from being an obscurely accessible research tool to the dominant method by which everyone interacts with everyone and everything. There are some downsides to the computer age, sure. But non-Luddites generally don’t think we should go back to 1979.
A pessimistic analogy is that we’re in the equivalent of the early 1930s, before nuclear technology began to shift from being an obscurely theorized physics idea to, within a decade, a technology by which humans could efficiently wipe out the species.
I’m considering all this after a bunch of business news in the past few weeks related to the companies leading our AI revolution.
Last month, the nonprofit board of OpenAI — which fueled this revolution with its viral launch a year ago of ChatGPT — briefly fired, then rehired, CEO Sam Altman after most of the company’s employees threatened to move over to Microsoft Corp. Microsoft shares in OpenAI’s profit but launched its own AI assistant, called Copilot, earlier in 2023.
In the last week of November, Amazon.com Inc. announced its own AI assistant named “Q.”
And last week, Google parent Alphabet launched Gemini. Early reports are that it’s a genuinely powerful competitor to GPT-4.
The race in natural-language AI is fully engaged, and many other forms of the technology are also rapidly evolving. Stock markets have been soaring in December, fueled by optimism about advances in the field.
I’m most interested in AI ethics. Like, is this accelerating race to develop stronger AI better or worse for people? What’s freaking me out is that the people who understand this technology best keep ringing alarm bells like it’s the 1930s rather than the 1980s.
Cool kids’ AI lingo
Here are two prominent AI terms you should know that together encapsulate the rising threat.
Everyone in and around AI casually refers to their “p(doom),” which is cute shorthand for “My Personal View Of The Probability That AI Causes Catastrophic Harm To Humans.” The answer experts generally give is a shockingly high p(doom), meaning that catastrophic harm is, in their opinion, very likely.
Dr. Althea Delwiche teaches the course “AI, Communication and Creativity” at Trinity University in San Antonio. She told me that when she recently surveyed her class, “p(doom) estimates ranged from 2 to 50%. On average, the consensus of the class was that there is a 15% chance of AI bringing about some sort of disastrous scenario for humans.”
And, she added, “My p(doom) is closer to 25%.”
That feels, uh, bad.
The second bit of cool lingo that’s developed among this set is “e/acc,” which stands for Effective Acceleration theory. It posits that everyone should just move as fast as possible in developing AI, without limits and regulations, toward a techno-future.
“Move fast and break things” has long been a Silicon Valley motto, but e/acc applies it in an extreme techno-libertarian way to the development of an AI technology that many of its own experts and adherents believe could break, well, the human species. To be fair, e/acc developed as a response/backlash to industry experts’ caution, so disagreement within the industry exists.
Precious few governance or regulatory constraints exist to slow the development of AI at this point. The combination of a high collective p(doom) and an e/acc approach to rapidly developing the technology feels especially irresponsible to me, an admitted non-technologist.
Exponential growth
As a non-technologist my main personal insight into artificial intelligence is via analogy. I’ve been writing and teaching for years about how compound interest, a key theme
of personal finance, is underappreciated because our limited and linear human brains can’t conceive of the speed and impact of exponential growth mathematics once an exponential thing acquires a certain momentum.
Humans are shocked, every time, at how huge viruses, money, social media views, computing speeds and, yes, artificial intelligence become once exponential growth takes off.
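The compound-interest arithmetic behind that shock can be sketched in a few lines of Python. This is a toy illustration only; the starting value, the fixed step and the 5%-per-step growth rate are made-up numbers chosen for the example, not a claim about AI itself:

```python
# A toy comparison, not a model of AI progress: "linear" adds a fixed
# amount each step, "compound" grows by a fixed percentage each step.
# All numbers here are illustrative.

def linear(start: float, step: float, n: int) -> float:
    """Value after n steps of adding a fixed amount per step."""
    return start + step * n

def compound(start: float, rate: float, n: int) -> float:
    """Value after n steps of growing by a fixed fraction per step."""
    return start * (1 + rate) ** n

# Early on, steady addition looks like the stronger force ...
print(linear(100, 10, 10), round(compound(100, 0.05, 10)))    # 200 vs. roughly 163
# ... but give the 5%-per-step curve time, and it dwarfs the linear one.
print(linear(100, 10, 100), round(compound(100, 0.05, 100)))  # 1100 vs. roughly 13,150
```

The compounding curve trails for the first stretch, which is exactly why it looks innocuous, then blows past anything growing at a steady pace.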
Exponential-growth things start out looking small and innocuous, like the farthest-away visible vehicle, barely perceptible on a desert horizon. What we don’t tend to realize is that this vehicle is barreling down on us at 300 mph and accelerating throughout its approach. By the time it gets close enough for us to notice details and start to understand its potential impact, it roars past us, unreachable and unstoppable.
That, I fear, is artificial intelligence right now. And 2023 is the year we non-experts spotted AI far away on the horizon, an interesting new semi-mirage with seemingly plenty of time for humanity to react.
But that’s not how exponential growth works and we probably have very little time.
Artificial intelligence has not yet reordered everything on the planet, but evidence is mounting that, in fact, very soon, nothing will be the same.
As professor Delwiche put it to me: “People in other sectors of society are just beginning to grasp how incredibly disruptive these technologies might become. As a result of recent high-profile announcements from Meta and Google, combined with the propagation of AI-related content on TikTok, Instagram, X, and Facebook, we have reached a tipping point” in people’s awareness.
I would personally put my awareness at, oh, roughly the first week of December 2023. Maybe yours is earlier? Or maybe yours is today.
Binary thinking
I have some further fears about the future of AI and our inability to make it humane.
Computer engineering, at its root, is about binary thinking. Ones and zeros. Successful programmers thrive in a world of pure logic, sorting out puzzles through ever more complex binaries. While the Silicon Valley folks building the AI future as we speak are extremely good at this kind of thinking, are there enough people collectively in that world who excel at the other kinds of thinking?
The kind that values poetry, empathy and ambiguity? In short, where does humanity get valued in the race to develop the world’s most powerful super-intelligence?
A simple diagnosis of our struggles as a society in 2023 is our discomfort with uncertainty. I mean, our own political system is barely surviving the rise of Twitter bot farms, which exploit our tendency to prefer Manichean approaches to conflict.
If AI is on the cusp of reordering basically everything — which I think it is — who will make the case for the irrational? The imperfect? The human?
Delwiche strikes an optimistic note that her Trinity students in the arts and humanities are actually finding niches in the industry. Despite her high p(doom), the professor is excited in a way that I struggle to be: “While mass extinction is just one theoretically possible outcome of artificial intelligence, these tools are already being used in countless ways to improve people’s lives.”
I appreciate the latter part of that phrase. It’s the first part, the mass extinction part, that’s giving me pause. I’ve only begun to observe AI on the far-off horizon this year, but our AI future — for better and worse — is accelerating toward us.
P.S. If you would like to freak yourself out by reading the thoughts of one of the leading AI alarmists, published months before the release of GPT-4, do a Google search for “Eliezer Yudkowsky AGI ruin.” You may never sleep soundly again, assuming you care about human survival.