Khaleej Times

Why we should be wary of Artificial Intelligence

As Big Data becomes accessible, our personal information is increasingly being compromised

- Roman V. Yampolskiy, Associate Professor of Computer Engineering and Computer Science, University of Louisville

With the appearance of robotic financial advisors, self-driving cars and personal digital assistants come many unresolved problems. We have already experienced market crashes caused by intelligent trading software, accidents caused by self-driving cars and hate speech from chatbots that turned racist.

Today’s narrowly focused artificial intelligence (AI) systems are good only at specific assigned tasks. Their failures are just a warning: Once humans develop general AI capable of accomplishing a much wider range of tasks, expressions of prejudice will be the least of our concerns. It is not easy to make a machine that can perceive, learn and synthesize information to accomplish a set of tasks. But making that machine safe as well as capable is much harder.

Our legal system lags hopelessly behind our technological abilities. The field of machine ethics is in its infancy. Even the basic problem of controlling intelligent machines is only now being recognized as a serious concern; many researchers are still skeptical that such machines could pose any danger at all.

Worse yet, the threat is vastly underappreciated. Of the roughly 10,000 researchers working on AI around the globe, only about 100 people – one percent – are fully immersed in studying how to address failures of multiskilled AI systems. And only about a dozen of them have formal training in the relevant scientific fields – computer science, cybersecurity, cryptography, decision theory, machine learning, formal verification, computer forensics, steganography, ethics, mathematics, network security and psychology. Very few are taking the approach I am: researching malevolent AI, systems that could harm humans and, in the worst case, completely obliterate our species.

Studying AIs that go wrong is a lot like being a medical researcher discovering how diseases arise, how they are transmitted, and how they affect people. Of course the goal is not to spread disease, but rather to fight it.

Drawing on my background in computer security, I am applying techniques first developed by cybersecurity experts for software systems to the new domain of securing intelligent machines.

Last year I published a book, “Artificial Superintelligence: A Futuristic Approach,” which is written as a general introduction to some of the most important subproblems in the new field of AI safety. It shows how ideas from cybersecurity can be applied in this new domain. For example, I describe how to contain a potentially dangerous AI: by treating it much as we control invasive self-replicating computer viruses.

My own research into how dangerous AI systems might emerge suggests that the science-fiction trope of AIs and robots becoming self-aware and rebelling against humanity is perhaps the least likely form of the problem. Much more likely causes are the deliberate actions of not-so-ethical people (on purpose), the side effects of poor design (engineering mistakes) and, finally, miscellaneous cases related to the impact of the system’s surroundings (environment). Because the purposeful design of dangerous AI is just as likely to include all the other types of safety problems, and will probably have the direst consequences, it is the most dangerous type of AI and the one most difficult to defend against.

What might they do?

It would be impossible to provide a complete list of negative outcomes an AI with general reasoning ability would be able to inflict. The situation is even more complicated when considering systems that exceed human capacity. Some potential examples, in order of (subjective) increasing undesirability, are:

- Preventing humans from using resources such as money, land, water, rare elements, organic matter, internet service or computer hardware;
- Subverting the functions of local and federal governments, international corporations, professional societies and charitable organizations to pursue its own ends, rather than their human-designed purposes;
- Constructing a total surveillance state (or exploiting an existing one), reducing any notion of privacy to zero, including privacy of thought;
- Enslaving humankind, restricting our freedom to move or otherwise choose what to do with our bodies and minds, as through forced cryonics or concentration camps;
- Abusing and torturing humankind with perfect insight into our physiology to maximize physical or emotional pain, perhaps combining it with a simulated model of us to make the process infinitely long;
- Committing specicide against humankind.

We can expect these sorts of attacks in the future, and perhaps many of them. More worrying is the potential that a superintelligence may be capable of inventing dangers we are not capable of predicting. That leaves room for something even worse than we have imagined.

A different but equally troubling implication of AI is that it could become a substitute for one-on-one human contact.
