Khaleej Times

AI needs the human touch to be a force for good

Many people developing applications are not aware of the wider social implications of their work

- SETH BAUM

Recent advances in artificial intelligence have been nothing short of dramatic. AI is transforming nearly every sector of society, from transportation to medicine to defense. So it is worth considering what will happen when it becomes even more advanced than it already is.

The apocalyptic view is that AI-driven machines will outsmart humanity, take over the world, and kill us all. This scenario crops up often in science fiction, and it is easy enough to dismiss, given that humans remain firmly in control. But many AI experts take the apocalyptic perspective seriously, and they are right to do so. The rest of society should, as well.

To understand what is at stake, consider the distinction between “narrow AI” and “artificial general intelligence” (AGI). Narrow AI can operate only in one or a few domains at a time, so while it may outperform humans in select tasks, it remains under human control.

AGI, by contrast, can reason across a wide range of domains, and thus could replicate many human intellectual skills, while retaining all of the advantages of computers, such as perfect memory recall. Run on sophisticated computer hardware, AGI could outpace human cognition. In fact, it is hard to conceive of an upper limit for how advanced AGI could become.

As it stands, most AI is narrow. Indeed, even the most advanced current systems have only limited amounts of generality. For example, while Google DeepMind’s AlphaZero system was able to master Go, chess, and shogi — making it more general than most other AI systems, which can be applied only to a single specific activity — it has still demonstrated capability only within the limited confines of certain highly structured board games.

Many knowledgeable people dismiss the prospect of advanced AGI. Some argue that it is impossible for AI to outsmart humanity. Others argue that human-level AI may be possible in the distant future, but that it is far too early to start worrying about it now.

These skeptics are not marginal figures, like the cranks who try to cast doubt on climate-change science. They are distinguished scholars in computer science and related fields, and their opinions must be taken seriously.


Yet other distinguished scholars do worry that AGI could pose a serious or even existential threat to humanity. With experts lining up on both sides of the debate, the rest of us should keep an open mind.

Moreover, AGI is the focus of significant research and development. I recently completed a survey of AGI R&D projects, identifying 45 in 30 countries on six continents. Many active initiatives are based in major corporations such as Baidu, Facebook, Google, Microsoft, and Tencent, and in top universities such as Carnegie Mellon, Harvard, and Stanford, as well as the Chinese Academy of Sciences. It would be unwise simply to assume that none of these projects will succeed.

Another way of thinking about the potential threat of AGI is to compare it to other catastrophic risks. In the 1990s, the US Congress saw fit to have NASA track large asteroids that could collide with the Earth, even though the odds of that happening are around one in 5,000 per century. With AGI, the odds of a catastrophe over the upcoming century could be as high as one in a hundred, or even one in ten, judging by the pace of R&D and the strength of expert concern.

The question, then, is what to do about it. For starters, we need to ensure that R&D is conducted responsibly, safely, and ethically. This will require a deeper dialogue between those working in the AI field and policymakers, social scientists, and concerned citizens. Those in the field know the technology and will be the ones to design it according to agreed standards, but they must not decide alone what those standards will be. Many of the people developing AI applications are not accustomed to thinking about the social implications of their work. For that to change, they must be exposed to outside perspectives.

Policymakers also will have to grapple with AGI’s international dimensions. Currently, the bulk of AGI R&D is carried out in the United States, Europe, and China, but much of the code is open source, meaning that the work potentially can be done from anywhere. So, establishing standards and ethical ground rules is ultimately a job for the entire international community, though the R&D hubs should take the lead.

Looking ahead, some efforts to address the risks posed by AGI can piggyback on policy initiatives already put in place for narrow AI. There are many opportunities for synergy between those working on near-term AI risks and those thinking about the long term.

But regardless of whether narrow AI and AGI are considered together or separately, what matters most is that we take constructive action now to minimise the risk of a catastrophe down the road. This is not a task that we can hope to complete at the last minute. —Project Syndicate

Seth Baum is the executive director of the Global Catastrophic Risk Institute (GCRI), a think tank focused on extreme global risks
