Elon Musk’s AI challenge
So, Elon Musk wants us to start thinking about how to regulate artificial intelligence. Speaking at the National Governors Association Summer Meeting in Rhode Island, Musk suggested that the gubernatorial assembly start thinking about putting regulations in place to prevent artificial intelligence from wiping out humanity.
Huh. A push for proactive regulation coming from the world’s most famous serial entrepreneur is somewhat disorienting. You would think that an entrepreneur would want the government to get out of the way, rather than get in the way on purpose.
Musk is very transparent on why he is doing this. He joins Stephen Hawking and other renowned thinkers of our generation in viewing AI as an existential threat to humankind.
“AI is a fundamental existential risk for human civilization and I don’t think people fully appreciate that,” Musk told the governors.
According to WIRED, Musk “asked the governors to consider a hypothetical scenario in which a stock-trading program orchestrated the 2014 missile strike that downed a Malaysian airliner over Ukraine — just to boost its portfolio. And he called for the establishment of a new government regulator that would force companies building artificial intelligence technology to slow down.”
Musk’s concerns are similar to what Stephen Hawking has been saying: “I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.” According to the BBC, Hawking further said, “That could lead to the eradication of disease and poverty and the conquest of climate change. But it could also bring us all sorts of things we didn’t like — autonomous weapons, economic disruption and machines that develop a will of their own, in conflict with humanity.”
I agree with Hawking that AI could eventually wholly replace and even supersede the functional capabilities of the human brain. But does that not also mean that the human brain can already create the havoc that Musk and Hawking fear? In other words, some diabolical human mind — with enough planning and power — could already bring down a plane to move the stock market and optimize a financial position.
So why is that not happening? Because of ethics. Human society has developed a set of ethics that governs how we behave. This means we do not do things just because we can. Some of these constraints are biological imperatives and some are socially conditioned. Either way, there are deep-seated ethical taboos that govern how we behave as a human community.
Then what does AI regulation mean? It is essentially a question of ethics. If AI will be functionally more advanced than the human brain in all aspects of cognition, how do you embed behavioral parameters into how AI makes decisions and executes actions so that it does not threaten humankind?
Framed thus, the solution framework becomes simple. AI regulation is a matter of mandating that we code certain behavioral constraints into the AI engine, correct? It is a system design issue. Figure out a way to hardwire something like Isaac Asimov's "Three Laws of Robotics" into all AI operating systems. Then also figure out compliance and enforcement frameworks. In other words, code artificial ethics into AI. Done, correct?
Not quite. If AI can be substituted for — or even preferred over — essential human relationships, it has the potential to fundamentally disrupt how human society has evolved. And that will change how we define ethics, because the central point of ethics is that it governs how individual humans behave toward one another in a social context. Ethics lose meaning when not viewed through a communal lens.
So, when we no longer form human communities but hybrid human-AI communities, we might be functionally better at everything, but our sense of ethics will undoubtedly change because our social identity would evolve. What do ethics look like when your everyday interactions are with robots? Do old ethical taboos still hold?
In other words, we can regulate the threats that AI poses because it is smarter, faster, and stronger than us. However, the truly existential threat that AI poses has to do with its potential to disrupt how we relate to one another as humans. How do we regulate that?