The Korea Times

Elon Musk’s AI challenge

- Jason Lim

So, Elon Musk wants us to start thinking about how to regulate artificial intelligence. Speaking at the National Governors Association Summer Meeting in Rhode Island, Musk suggested that the gubernatorial assembly begin putting in regulations to prevent artificial intelligence from wiping out humanity.

Huh. A push for proactive regulation coming from the world’s most famous serial entrepreneur is somewhat disorienting. You would think that an entrepreneur would want the government to get out of the way, rather than get in the way on purpose.

Musk is very transparent about why he is doing this. He joins Stephen Hawking and other renowned thinkers of our generation in viewing AI as an existential threat to humankind.

“AI is a fundamental existential risk for human civilization and I don’t think people fully appreciate that,” Musk told the governors.

According to WIRED, Musk “asked the governors to consider a hypothetical scenario in which a stock-trading program orchestrated the 2014 missile strike that downed a Malaysian airliner over Ukraine — just to boost its portfolio. And he called for the establishment of a new government regulator that would force companies building artificial intelligence technology to slow down.”

Musk’s concerns are similar to what Stephen Hawking has been saying: “I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.” According to the BBC, Hawking further said, “That could lead to the eradication of disease and poverty and the conquest of climate change. But it could also bring us all sorts of things we didn’t like — autonomous weapons, economic disruption and machines that develop a will of their own, in conflict with humanity.”

I agree with Hawking that AI could eventually replace, and even supersede, the functional capabilities of the human brain. But does that not mean the human brain can already create the havoc that Musk and Hawking fear? In other words, some diabolical human mind, with enough planning and power, could already bring down a plane to move the stock market and optimize a financial position.

So why is that not happening? Because of ethics. Human society has developed a set of ethics that governs how we behave. This means we do not do things just because we can. Some of these constraints are biological imperatives and some are socially conditioned. Either way, there are deep-seated ethical taboos that govern how we behave as a human community.

Then what does AI regulation mean? It is essentially a question of ethics. If AI will be functionally more advanced than the human brain in every aspect of cognition, how do you embed behavioral parameters that govern how AI makes decisions and executes actions so that it does not threaten humankind?

Framed thus, the solution framework seems simple. AI regulation is a matter of mandating certain behavioral constraints in the AI engine, correct? It is a system design issue. Figure out a way to hardwire something like Isaac Asimov’s “Three Laws of Robotics” into every AI operating system, then figure out compliance and enforcement frameworks. In other words, code artificial ethics into AI. Done, correct?

Not quite. If AI can be substituted for, or even preferred over, essential human relationships, it has the potential to fundamentally disrupt how human society has evolved. And that will change how we define ethics, because the central point of ethics is that it governs how individual humans behave toward one another in a social context. Ethics loses meaning when not viewed through a communal lens.

So, when we no longer form human communities but hybrid human-AI communities, we might be functionally better at everything, but our sense of ethics will undoubtedly change because our social identity will have evolved. What do ethics look like when your everyday interactions are with robots? Do old ethical taboos still hold?

In other words, we can regulate the threats AI poses because it is smarter, faster, and stronger than we are. The truly existential threat, however, lies in AI’s potential to disrupt how we relate to one another as humans. How do we regulate that?
