Houston Chronicle Sunday

AI is here, but let’s not just sit back and watch the mayhem unfold

- By Rodrigo Ferreira. Rodrigo Ferreira is an assistant teaching professor in computer science at Rice University and holds affiliate faculty appointments in philosophy and at the Baker Institute for Public Policy.

The recent situation at OpenAI played out like a masterfully scripted TV drama. Or at least, that’s how multiple media sources portrayed it: Vanity Fair referred to the drama at the company behind ChatGPT as a “soap opera,” and news outlets such as Wired, Forbes and NBC drew a direct connection to the widely acclaimed TV show “Succession,” where family members backstab each other and corporate rivals.

To be fair, some of the allusions to scripted dramas are completely warranted. In the span of just a few days, Sam Altman, CEO of OpenAI, was fired from his role, hired to a new position by Microsoft, and then ultimately reinstated as OpenAI CEO, all while some of the board members who wanted to remove him were ousted instead. As Wired wrote, “This whole story had so many hellzapoppin twists and turns over the past five days that it was tempting to sit back and enjoy the fun, like that ubiquitous GIF of Michael Jackson tossing popcorn kernels in his mouth.”

But as remarkably entertaining and “Succession”-like as this narrative has been, I think it’s critical to think more deeply about the difference between the events’ dramatic qualities and their real-life consequences.

Unlike what happens on TV shows, what happens with OpenAI does not just stay on the screen. Concerns about AI’s potential impact on bias, disinformation, privacy, labor automation, creativity and education are now well documented, not to mention more abstract concerns about human alienation and, far more speculatively, human extinction. It is indisputable that AI can and most likely will bring great advances to society in areas such as health, economic production and information management, to name a few — but its potential negative impacts are as real as those positive ones. And, perhaps — as has been the case before with other recent applications of technology, such as crime prediction software and lending algorithms — historically marginalized populations will bear more than their share of the downside.

It’s important that we ask who gets to play a part in the real-life continuation of this narrative: Who gets to script AI’s and OpenAI’s story? Prestige TV dramas like “Succession” are often written by a close-knit group of writers holed up together and removed from the public — and so, it seems, are powerful executives and investors in boardrooms in Silicon Valley who make decisions about our tech-driven future.

But that’s where the analogy breaks apart. If we don’t like what the tech elite creates for us, we can’t just stream something different. And in a democratic society like our own, people have — through their elected representatives — the power to have some say in what goes on in these boardrooms.

Government regulation of tech is of course a thorny subject. Invariably, questions arise regarding whether regulation “stifles innovation.” Technology corporations push to prevent government intervention, both here in the U.S. and elsewhere around the world.

Sometimes, as part of that push, they talk about ethics; detractors say that’s a way to show that they can regulate themselves and therefore stave off government intervention. In fact, the dispute that led to Altman’s firing seems, at least in part, to be about whether or not Altman remained committed to the company’s founding not-for-profit mission and its unique governance structure. Many of those questions remain unanswered.

To some degree, we already know what happens when technology develops in the absence of any checks or balances.

In the mid-2000s, technology companies started to develop new and remarkably effective ways to monetize everyday social exchanges — things like sharing thoughts and pictures with friends; communicating important updates and news; finding temporary work, means of transportation, or accommodations for travel.

Since then, any social process that required expert or institutional guidance increasingly came to be seen as an obstacle to the move-fast-and-break-things ethos of Silicon Valley. In their place, the computational logic of “optimization” and the business-based glorification of “disruption” increasingly appeared as a “solution” to these so-called problems and enabled tech corporate leaders to amass an increasing amount of power over social life.

In many ways, this period represented the wish-fulfillment of the 1990s cyberpunk culture. We see it in movies like “The Matrix,” where a lonely and misunderstood hacker, chasing down questions with his genius technical skills and a rebel attitude, one day becomes “the One” to liberate himself and others from a boring and unjust world. (See also Apple’s “1984” commercial for the Macintosh.)

Please do not get me wrong. No doubt corporate leaders such as Elon Musk, Mark Zuckerberg and Altman are all brilliant engineers, and no doubt also in many ways brilliant businessmen.

But we have also seen what happens when, by sheer wealth and power, Musk decided to become the king of Twitter — yes, a private company, but also a crucial site of social and political discussion. We’ve also seen the impact of letting social media operate without restraint around children and teenagers, or what happens when it’s wielded by people with nefarious commercial or political interests.

So what does that tell us about AI? As mentioned, we have reasonable cause for concern about bias, privacy, disinformation, and other critical issues. We’ve also seen that, despite warnings by AI experts and scholars, AI innovation seems to continue moving forward at breakneck speed. Just this week, Google announced the launch of its new AI model, which it claims is more powerful than ChatGPT.

Taking all this into consideration, do we feel comfortable simply allowing this technology and its current corporate imperatives to expand unrestricted? Or are there ways to guide its development more thoughtfully and responsibly through our representative democratic systems?

Around the world, AI regulatory frameworks are taking shape. Europe has embraced the need for some regulatory oversight over AI. Some countries in Latin America have also acted. The U.S. has made important strides in this area with its Blueprint for an AI Bill of Rights and its recent Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.

But there is still much work to be done. It’s unclear exactly how regulations will be enforced, what corporations’ incentives will be to comply, and how much funding will be available for education and research to promote Responsible AI.

Answering these questions is among the key steps that the U.S. and other governments around the world will have to take to help prevent AI and AI-development companies, despite all the good they might otherwise do, from also manifesting their worst selves — from maybe becoming the kind of clichéd evil robots or deceitful corporate entities that we often encounter in TV and film.

The past decade’s rapid and profound societal change may have left us feeling detached from the decisions that genius computer engineers and billionaire investors make in Silicon Valley offices — so much so that we think our only option is to just kick back, grab the popcorn, and enjoy the show à la Michael Jackson. But the reality is that government regulation is, in this case, the only thing that can really separate TV narratives from the actual political dramas that affect our lives.

As viewers of popular TV shows, we are mostly powerless over how the story changes and eventually unfolds, but as citizens of a representative democracy we have the power to affect the actions of all actors involved. We didn’t get a say — or at least I didn’t get a say — on the ending of “Succession,” or of “Friends,” or of “Seinfeld,” or of “Game of Thrones,” or of “Lost,” or of many other of my favorite TV shows. If I had, perhaps they’d have ended differently.

But when it comes to AI, through our elected representa­tives, we have the power to have a say in how this technology is developed, for what purposes it is developed, and with what safeguards in place.

Let’s not lose sight of that power, and the reasons for vesting our representatives with that power, even if — or as — we continue to be entertained by both real-life and fictional twisty boardroom dramas.

The real-life drama surrounding artificial intelligence firm OpenAI and its CEO, Sam Altman, is frequently compared to an episode of “Succession.” (NurPhoto via Getty Images)
