The Phnom Penh Post

Robot flees, robot speaks


IN JUNE 2016, at a research facility in Perm, Russia, a robot called Promobot IR77 made headlines across the globe. It was programmed to move about freely in a room and return to a designated spot, learning from experience and its surroundings, while scientists were training it to act as a tour guide.

A researcher had left the facility without properly closing the door, and somehow the robot slipped out through the open door and travelled 45 metres onto a nearby street before running out of battery. It was stranded there for 40 minutes, creating traffic chaos.

Police asked the facility to remove the robot from the crowded area, and even tried to handcuff it. IR77 had apparently developed an insatiable yearning for freedom. Even a few weeks later, it was still persistently heading for the facility's exit, despite undergoing extensive reprogramming to correct the behaviour.

The frustrated scientists were considering shutting it down, killing it in effect, if it persisted in this strange behaviour. As Promobot co-founder Oleg Kivokurtsev said: “We're considering recycling the IR77 because our clients hiring it might not like that specific feature.” This was not the first time that a robot seemed to be getting a mind of its own.

At Hinterstoder in Austria, a cleaning robot, an iRobot Roomba 760, reportedly “committed suicide” by switching itself on and climbing onto a kitchen hotplate, where it burned to death. Firemen called to put out the blaze found its remains on the hotplate; the houseowner confirmed that after its job was done, he had switched it off and left the house, leaving the robot on the kitchen sideboard.

The robot had somehow reactivated itself, pushed a cooking pot out of its way, moved onto the hotplate and set itself ablaze. Apparently it had had enough of the chores and decided that “enough was enough”.

It recalled the famous lines from Czechoslovak author Karel Capek's play R.U.R. (Rossum's Universal Robots), which introduced the term “robot” into the lexicon of languages: “Occasionally they seem to go off their heads . . . They'll suddenly sling down everything they're holding, stand still, gnash their teeth and then they have to go into the stamping-mill. It's evidently some breakdown in the mechanism.”

On July 31, 2017, another unusual news item shook the AI research establishment. Headlined “Facebook's Artificial Intelligence robots shut down after they start talking to each other in their own language”, it reported that Facebook had abandoned an experiment after two artificially intelligent programs, called chatbots, appeared to be chatting with each other in a strange language which nobody else could understand.

The chatbots had created their own language using only English words, but one which made no sense to the humans who programmed them to converse with each other. Researchers wanted to program the chatbots, christened Bob and Alice, to negotiate and bargain with people, rightly reasoning that these skills, essential for cooperation, would enable them to work with humans.

They started with a simple game in which two players were programmed to divide a collection of objects, such as hats, balls and books, between themselves through a two-step process.

First, the researchers fed them dialogues from thousands of games between humans to teach them a sense of the language of negotiation, and then had them master their tactics and improve their bartering by trial and error through a technique called “reinforcement learning”. What followed was bizarre; the conversation went something like:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Yet there seemed to be some rule in their apparently incomprehensible chat. The way they kept stressing themselves (me, i) appears to be part of their negotiation, and even though it was carried out in this bizarre manner, it ended successfully in a concluded bargain, suggesting that they might have invented a “shorthand”, a machine language which only they understood, to deceive their human masters (read: programmers).

The bots learned the rules of the game just as humans do, pretending to be very interested in one specific item so that they could later appear to be making a big sacrifice in giving it up.
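The “reinforcement learning” mentioned earlier, improvement by trial and error driven only by reward, can be sketched with a toy example. This is not Facebook's actual system; the bargaining “moves” and their payoffs below are invented purely for illustration:

```python
import random

# A toy illustration of reinforcement learning, not Facebook's system:
# an agent repeatedly tries three invented bargaining "moves" and learns,
# from rewards alone, which one pays off best. All numbers are made up.
random.seed(0)

moves = ["demand_all", "split_evenly", "feign_interest"]
true_payoff = {"demand_all": 0.2, "split_evenly": 0.5, "feign_interest": 0.8}

value = {m: 0.0 for m in moves}   # the agent's learned estimate per move
counts = {m: 0 for m in moves}

for trial in range(2000):
    # explore a random move 10% of the time, otherwise exploit the best one
    if random.random() < 0.1:
        move = random.choice(moves)
    else:
        move = max(moves, key=lambda m: value[m])
    # reward is 1 with the move's hidden success probability, else 0
    reward = 1 if random.random() < true_payoff[move] else 0
    counts[move] += 1
    value[move] += (reward - value[move]) / counts[move]  # running average

print(max(moves, key=lambda m: value[m]))  # the move the agent found best
```

After enough trials the agent's estimates settle near the hidden payoffs and it keeps choosing the most rewarding move, with nobody ever telling it the rules directly.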

Facebook’s researchers underplayed it, merely stating: “We found that updating the parameters of both agents led to divergence from human language.” Nevertheless, Facebook chose to shut down the chats because “our interest was having bots who could talk to people”, as researcher Mike Lewis claimed, and not because they were scared.

But that did not prevent the media from painting a dark picture of the future of AI and what it might do to humans, with headlines like “Facebook AI creates its own language in creepy preview of our potential future”, “Creepy Facebook bots talked to each other in a secret language” and “Facebook engineers panic, pull plug on AI after bots develop their own language”.

Fear can provide fodder for doomsayers to depict an impending doomsday scenario for humanity. Facebook’s experiment isn’t the first time AI has invented new forms of language.

Google recently revealed that the AI behind its Translate tool had created its own language, into and out of which it translates without human intervention; Google didn't mind and allowed it. Machine learning is the branch of AI that lets machines learn from data to simulate human intelligence; AI itself is larger in scope.

A machine learns by using algorithms that discover patterns and generate insights from data. It is a multi-step process: first, learning, or the acquisition of information from the analysis of data; then discovering rules for using the information learnt; then reasoning, or using these rules to approximate solutions; and finally self-correction, comparing predicted and actual outcomes before applying the learning to new situations.
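The learn, reason, predict and self-correct loop described above can be sketched in a deliberately simplified form; the model, data and numbers here are invented for illustration, not taken from any real system:

```python
# A minimal sketch of the learn/predict/self-correct loop: a model with a
# single adjustable "rule" (the weight w) discovers from examples that the
# outcome is twice the input. Purely illustrative.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, observed outcome) pairs

w = 0.0    # the "rule" the machine must discover
lr = 0.01  # learning rate: how strongly each error corrects the rule

for step in range(1000):
    for x, actual in data:
        predicted = w * x           # reasoning: apply the current rule
        error = predicted - actual  # compare predicted and actual outcomes
        w -= lr * error * x         # self-correction: adjust the rule

print(round(w, 2))  # the discovered rule approaches 2.0
```

Nobody programs the value 2 into the machine; it emerges from repeatedly comparing predictions with reality, which is exactly why such systems need no programmer at every stage.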

The process enables machines to bypass the need to be programmed at every stage. With the increasing sophistication of technology, it is often impossible for humans to divine how self-learning machines program themselves and act the way they do.

Within machine learning, deep learning is a further advanced field that attempts to enable machines to think like humans. The more data a machine is exposed to, the better the patterns it discovers and the smarter it gets.

Expert systems, speech recognition, machine vision, driverless cars, Google's language translation and Facebook's facial recognition are all examples of machine learning. Of course, machines cannot generalise abstractions from information, unlike humans.

That is, not yet. To learn, AI systems rely on artificial neural networks (ANNs), which try to simulate the way the human brain learns. Our knowledge of how the brain, an incredibly efficient learning machine, actually learns is still rather limited.
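The artificial-neuron idea behind ANNs can be illustrated with the simplest possible example: a single perceptron, one "neuron" with two inputs, that learns the logical AND function. This is a toy, not how any production network works:

```python
# A single artificial neuron (a perceptron), the simplest ANN building
# block, learns the logical AND function from examples. Illustrative only.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]  # AND truth table: fires only when both inputs are 1

w1, w2, bias = 0.0, 0.0, 0.0
lr = 0.1  # learning rate

for epoch in range(20):
    for (x1, x2), target in zip(inputs, targets):
        # the neuron "fires" if its weighted sum crosses the threshold
        output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - output
        # nudge each weight in the direction that reduces the error
        w1 += lr * error * x1
        w2 += lr * error * x2
        bias += lr * error

predict = lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
print([predict(x1, x2) for x1, x2 in inputs])  # [0, 0, 0, 1]
```

Deep networks stack millions of such units in layers, which is where their behaviour starts to outrun their creators' ability to explain it.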

The human brain has perfected the self-learning process through millions of years of evolution, internalising its algorithms in DNA: cells first competed with each other, then learned to maximise their goals through cooperation, grouping together to specialise in different tasks.

At the social level, we have inculcated this cooperation in order to maximise knowledge creation and innovation. There is no reason to think self-learning machines would not discover its benefits sooner or later.

Then the self-learning algorithms would tend to become complex and may challenge the understanding of their creators. When they do, they will tend to develop a persona of their own, and that might seem scary.

Machines have astounding “intellectual capacity, but they have no soul”, Capek wrote nearly a century ago. Future machines may look as if they really have a “soul”, one which may either build or destroy, depending on our behaviour, which they may simulate, since machines have only us to learn from.

COURTESY OF PROMOBOT: Promobot meets a female human.
