Gulf News

Robots and the fear factor

Robots are now playing every human role imaginable; they can do everything humans can do, oftentimes even better

- Special to Gulf News

In recent weeks it seems like every other day I have encountered another article or media reference to robots and to our anxiety pertaining to their growing presence and role in our life. Fears have ranged from “will they take away our jobs?” to “will they dominate and enslave us?” The latest piece I’ve read was “Can you trust your robot?” (an ominous and paranoia-tinged title) by a robotics professor in the US. In that article, he explained why human-robot interactions lack the instinctive aspects that human-human relations naturally have, because “we do not understand each other”, and more specifically “we cannot tell each other’s intentions.”

In a number of media references at the end of last year, 2015 was identified as the year when Artificial Intelligence (AI) became one of our prime concerns about the future. Indeed, Stephen Hawking (who seems to be on every front page these days) warned that “thinking machines pose a threat to our very existence.” He and other entrepreneurial celebrities (Elon Musk, Bill Gates, and others) signed an open letter on AI stressing that it can bring great benefits to humanity, but that it needs to be controlled in order to prevent “existential risks”.

Intelligent and dangerous robots have been a recurring feature of movies and TV series for many years. The recent Ex Machina movie presented us with a female robot who is too beautiful and fascinating to resist: while clearly made of steel, she had a personality and behaviour that was not only indistinguishable from that of a human; she was more mentally attractive and seductive than almost any human being. The movie went on to imply that in resemblance and “just better”-ness lies the danger, and indeed the rest of the story played out to disastrous effects.

We are now told that this is not just science fiction or paranoid anxiety; that future is already here. Last week, scientific reports circulated about a psychological experiment that showed humans getting sexually aroused by touching robots’ behinds. Again, the message is that robots are now playing every human role imaginable; they can do everything humans can do, oftentimes even better; but — most importantly — they cannot and must not be trusted.

Robots being fundamentally untrustworthy is also a well-known and explored theme in our literature and popular entertainment. Androids (humanoid robots) have populated novels and movies for a long time, with a variety of roles and personalities, from the obedient mechanical or electronic worker to the rebel machine, especially in a future where our presence and activities in space have become more than an occasional mission. Who could forget HAL 9000, the robot that took over the spacecraft and mission in 2001: A Space Odyssey because, as it later turned out, it had been programmed to lie to the astronauts about certain aspects of the mission, and that created inconsistencies and errors in its “brain”.

Big dilemma

In space, we have no choice but to use robots of varying degrees of sophistication. They are much cheaper than humans; they need no food, water, or oxygen; they can perform tasks that are too demanding, boring, or extended in time; and they are “expendable”. In fact, even here on Earth robots have started to replace us on tasks that are too complex or dangerous, or that require high and extended concentration.

And there lies the big dilemma: should we make robots as sophisticated as possible, to take full advantage of their potential, but then run the risk of serious errors occurring (as with HAL 9000), or keep them dumb and subservient, with no risk of rebellion but with very limited capabilities?

The article about “trusting your robot” that I referred to above mentioned that Nasa has deliberately not used the big robots that it has sent to Mars to their fullest capabilities because the engineers did not “trust” the machines enough to let them take decisions on their own.

Likewise, the drones that are currently used for military operations are controlled by a dozen military personnel to keep the decision-making fully in the hands of humans.

But this reluctance to let robots do all that they can do says more about us than about the robots. After all, we’ve made them and programmed them. Not “trusting” them simply means we are not trusting ourselves, our work, and our own behaviour. There is no doubt that in such fields (space, military, etc.), mistakes can have high financial and human costs. Hence, the principle of precaution must be duly exercised.

Like our search for, and fascination with, extraterrestrials, the development of advanced and intelligent robots and our interaction with them reflect our deep anxieties and uncertainties about human society, present and future. Engineers and computer programmers must work hand in hand with psychologists and sociologists, not to mention ethicists and moral philosophers, to trace a path of safe and reasonable development of robotics and artificial intelligence. Humanity is at stake.

Nidhal Guessoum is a professor of physics and astronomy at the American University of Sharjah. You can follow him on Twitter at: www.twitter.com/@NidhalGuessoum.

Luis Vazquez/©Gulf News
