Business Day

Sounding the alarm on AI

• US institute says AI laboratories are engaging in ‘out-of-control race’

- Joshua Brustein

In January 2015, the newly formed — and grandly named — Future of Life Institute (FLI) invited experts in artificial intelligence (AI) to spend a long weekend in San Juan, Puerto Rico. The result was a group photo, a written set of research priorities for the field and an open letter about how to tailor AI research for maximum human benefit.

The tone of these documents was predominantly upbeat. Among the potential challenges FLI anticipated was a scenario in which autonomous vehicles reduced the 40,000 annual US car fatalities by half, generating not “20,000 thank-you notes, but 20,000 lawsuits”. The letter acknowledged it was hard to predict what AI’s exact effect on human civilisation would be — it laid out some potentially disruptive consequences — but also noted that “the eradication of disease and poverty are not unfathomable”.

The open letter FLI published on March 29 2023 was, well, different. The group warned that AI labs were engaging in “an out-of-control race to develop and deploy ever more powerful digital minds that no-one — not even their creators — can understand, predict, or reliably control”. It called for an immediate pause on the most advanced AI research and attracted thousands of signatures — including those of many prominent figures in the field — as well as a round of mainstream media coverage.

For anyone trying to wrap their heads around the freakout over AI, the letter was instructive on multiple levels. It’s a vivid example of how the conversations about new technologies can shift with jarring speed from wide-eyed optimism to deep pessimism.

The vibe at the 2015 Puerto Rico event was positive and collegial, says Anthony Aguirre, FLI’s vice-president and secretary of its board. He also helped draft the recent letter, inspired by what he argues is a distressing turn in the development of the technology.

“What there wasn’t then was giant companies competing with one another,” he says.

Looking back, the risk that self-interested technology companies would come to dominate the field seems obvious. But that concern isn’t reflected anywhere in the documents from 2015. Also absent was any mention of the industrial-scale dissemination of misinformation, an issue that many tech experts now see as one of the most frightening consequences of powerful chatbots in the near term.

Then there was the reaction to March’s letter. Predictably, leading AI companies such as OpenAI, Google, Meta Platforms and Microsoft gave no indication that it would lead them to change their practices. FLI also faced blowback from many prominent AI experts, partially because of its association with the polarising effective altruism movement and Elon Musk, a donor and adviser known for his myriad conflicts of interest and attention-seeking antics.

Aside from any intra-Silicon Valley squabbles, critics say FLI was doing damage not by voicing concerns, but by focusing on the wrong ones. There’s an unmistakable tinge of existential threat in FLI’s letter, which explicitly raises the prospect of humans losing control of the civilisation we’ve built. Fear of computer superintelligence is a longstanding topic within tech circles, but so is the tendency to vastly overstate the capabilities of whatever technology is the subject of the latest hype cycle (see also: virtual reality, voice assistants, augmented reality, the blockchain, mixed reality, and the internet of things, to name a few).

PEOPLE ARE WORRIED ABOUT ARTIFICIAL INTELLIGENCE, BUT THE RISKS OF DISINFORMATION ARE MORE WORRYING THAN APOCALYPTIC SCENARIOS

Predicting that autonomous vehicles could halve traffic fatalities and warning that AI could end human civilisation seem to reside on opposite ends of the techno-utopian spectrum. But they actually both promote the view that what Silicon Valley is building is far more powerful than laypeople understand.

Doing this diverts attention from less sensational conversations and undermines attempts to address the more realistic problems, says Aleksander Madry, faculty co-lead of the Massachusetts Institute of Technology’s AI Policy Forum. “It’s really counterproductive,” he says of FLI’s letter. “It will change nothing, but we’ll have to wait for it to subside to get back to serious concerns.”

The leading commercial labs working on AI have been making major announcements in rapid succession. OpenAI released ChatGPT less than six months ago and followed with GPT-4, which performs better on many measures but whose inner workings are largely a mystery to people outside the company. Its technology is powering a series of products released by Microsoft, OpenAI’s biggest investor, some of which have done unsettling things, such as professing love for human users.

Google rushed out a competing chatbot-powered search tool, Bard.

Meta Platforms recently made one of its AI models available to researchers who agreed to certain parameters, and then the code quickly showed up for download elsewhere on the web.

“In a sense we’re already in the worst-of-both-worlds scenario,” says Arvind Narayanan, a professor of computer science at Princeton University. The best AI models are controlled by a few companies, he says, “while slightly older ones are widely available and can even run on smartphones”. He says he’s less concerned about bad actors getting their hands on AI models than about AI development happening behind the closed doors of corporate research labs.

OpenAI, despite its name, takes essentially the opposite view. After its initial formation in 2015 as a non-profit that would produce and share AI research, it added a for-profit arm in 2019 (albeit one that caps the potential profits its investors can realise). Since then it’s become a leading proponent of the need to keep AI technology closely guarded, lest bad actors abuse it.

In blog posts, OpenAI has said it can anticipate a future in which it submits its models for independent review or even agrees to limit its technology in key ways. But it hasn’t said how it would decide to do this. For now it argues that the way to minimise the damage its technology can cause is to limit the level of access its partners have to its most advanced tools, governing their use through licensing agreements.


The controls on older and less powerful tools don’t necessarily have to be as strong, says Greg Brockman, an OpenAI co-founder who’s now its president and chair. “You want to have some gap so that we have some breathing room to really focus on safety and get that right,” he says.

It’s hard not to notice how well this stance dovetails with OpenAI’s commercial interests — a company executive has said publicly that competitive considerations also play into its view on what to make public. Some academic researchers complain that OpenAI’s decision to withhold access to its core technology makes AI more dangerous by hindering disinterested research. A company spokesperson says it works with independent researchers and went through a six-month vetting process before releasing the latest version of its model.

OpenAI’s rivals question its approach to the big questions surrounding AI. “Speaking as a citizen, I always get a little bit quizzical when the people saying ‘this is too dangerous’ are the people who have the knowledge,” says Joelle Pineau, vice-president for AI research at Meta and a professor at McGill University. Meta allows researchers access to versions of its AI models, saying it hopes outsiders can probe them for implicit biases and other shortcomings.

The drawbacks of Meta’s approach are already becoming clear. In late February, the company gave researchers access to a large language model called LLaMA — a technology similar to the one that powers ChatGPT. Researchers at Stanford University soon said they would use the model as a basis for their own project that approximated advanced AI systems with about $600 of investment. Pineau says that she hadn’t assessed how well Stanford’s system worked, though she says such research was in line with Meta’s goals.

But Meta’s openness, by definition, came with less control over what happened with LLaMA. It took about a week before it showed up for download on 4chan, one of the main message boards of choice for internet trolls. “We’re not thrilled about the leak,” Pineau says. There may never be a definitive answer about whether OpenAI or Meta has the right idea — the debate is only the latest version of one of Silicon Valley’s oldest fights. But their divergent paths do highlight how the decisions about putting safeguards on AI are being made entirely by executives at a few large companies.

In other industries, the release of potentially dangerous products comes only after private actors have satisfied public agencies that they’re safe. In a March 20 blog post, the Federal Trade Commission warned technologists that it “has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury”. Ten days later the Center for AI & Digital Policy, an advocacy group, filed a complaint with the commission, asking it to halt OpenAI’s work on GPT-4.

Being able to build something but refraining from doing so isn’t a novel idea. But it pushes against Silicon Valley’s enduring impulse to move fast and break things. While AI is far different from social media, many of the players involved in this gold rush were around for that one, too. The services were deeply entrenched by the time policymakers began trying to respond in earnest, and their efforts have arguably achieved very little.

In 2015, it still seemed as if there was lots of time to deal with whatever AI would bring.

That seems less true today.

Change of focus: Predicting that autonomous vehicles could halve traffic fatalities and warning that AI could end human civilisation seem to reside on opposite ends of the techno-utopian spectrum. Picture: 123RF
