Cosmos

HUMANOID ROBOTS ARRIVE

While plenty of scientists are working on humanoid robots, Hiroshi Ishiguro actually wants to build a human. ELIZABETH FINKEL reports.

- ELIZABETH FINKEL is the editor-in-chief of Cosmos.

HIROSHI ISHIGURO, the director of the Intelligent Robotics Laboratory (IRL) at Osaka University, is well-known for posing with his android twin. It’s not just a weird publicity stunt; this might be the answer to Japan’s labour crisis. With its greying population – close to 28% of its 127 million people are aged over 65 – below-replacement birth rate and reluctance to ramp up immigration, Japan needs to make its own workers.

It already has plenty of industrial robots. But who will tend to the elderly in overflowing nursing homes and, perhaps just as important, who will make them feel cared for? That’s why Ishiguro’s lab has government funding to create ever-more human-like robots – indeed, with US$5 million every year for five years, the project to create his autonomous humanoid, Erica, receives the largest grant from the Japan Science and Technology Agency (JST).

Yet Ishiguro himself is a surprise. He doesn’t fit the stereotype of a roboticist, someone more in tune with machines than people. His first ambition was to become an oil painter, and he retains the artist’s basic impulse – to examine the human condition. Asked what drives his mission to build humanoid robots, he replies: “I want to understand what it is to be a human being.”

As artificial intelligence continues to develop “we will have to ask that question more and more”, agrees engineer Elizabeth Croft, who specialises in human-robot interaction at the University of British Columbia.

Others find Ishiguro’s work puzzling. “I don’t understand his scientific concept exactly,” says Alin Albu-Schäffer, director of the Institute of Robotics and Mechatronics at DLR, the German Aerospace Centre, but he adds: “I like it from a philosophical perspective. He’s at the extreme, and that provokes change.”

Ishiguro’s work lies somewhere between the practical and the weird. Plenty of places build humanoid robots but they are clearly mechanical representations of human-ness.

Ishiguro is actually trying to build a human. For him it is a way to tackle the mysteries of the human mind: intelligence and consciousness. “We can’t take an analytical approach to find out what a human is,” he says. “We need to take a constructive approach.”

ISHIGURO, NOW 54, SWITCHED from painting to programming at university and was soon drawn to robotics. “I saw that AI needs to have a body,” he tells me at a conference in Melbourne, “because a computer needs to have its own experiences.”

While AI has progressed in leaps and bounds in recent years, it is still enormously challenging to create robots that can manoeuvre themselves in our messy, ever-changing world as opposed to the uniform conditions of a factory floor. The Google-built AlphaGo software can beat the world Go champion but robots don’t stand a chance at beating a team of kids in a game of football.

Robotics companies everywhere are grappling with the challenge. DLR has Justin, who is handy with tools. Honda has Asimo, who can serve drinks. Rethink Robotics has Baxter, who can pass things to a co-worker and whose flat-screen eyes show where its attention is. Boston Dynamics has Atlas, whose latest trick is backflips. No one, though, could mistake these robots for a human. “They are much more R2-D2 than C-3PO,” Croft says.

Most robot makers deliberately keep their creations robot-like. This reflects two guiding principles.

One is to steer well clear of the ‘uncanny valley’ – the creepy feeling when you see almost-but-not-quite human characters in computer games or animations. The other, Albu-Schäffer says, is that the large gap in robot vs human intelligence and autonomy should be reflected in the design – “the appearance should reflect the robot’s stage of evolution”.

Ishiguro has headed in the opposite direction, plunging headlong into the uncanny valley.

His Geminoid series of robots is his trademark. The first, made in 2002, was a twin of his five-year-old daughter. Repliee Q1 (2005) was the twin of a Tokyo newsreader. Geminoid H1 (2006) was Ishiguro’s twin. Geminoid F (2010) was modelled on a woman in her twenties (whose identity Ishiguro won’t divulge).

The idea behind making a copy of a real human, Ishiguro says, was to transfer the presence, the sonzai-kan, of that person to the robot. “I focused on human likeness because that’s an extreme goal of robotics,” he tells me. “In a first contact, people will be surprised, but it’s easy to adapt.”

These hyperreal replicas have employed the latest that silicone technology and muscle-like fine-motor circuitry (actuators) can offer. But they are less robots than puppets, their speech and movements controlled by someone sitting at a keyboard.

One of Ishiguro’s key goals is for the humanoids to convey emotion. “When we feel emotion that’s when we begin to make a connection,” he says, “and we forget about the status of the partner.”

To impart expressiveness to the robots, Ishiguro turned to a master of the art – playwright and director Oriza Hirata, a champion of realism (or ‘quiet drama’) in Japanese theatre. With motion detectors attached to his face, Hirata modelled the gestures Ishiguro wanted his humanoids to express.

The collaboration led to the Robot Theatre Project, which has staged plays around the world. In these performances computer-controlled robots fill in for human actors, delivering pre-recorded lines and choreographed movements.

The company’s repertoire includes Sayonara, a play written by Hirata in which an android (played by Geminoid F) tries to console a girl suffering from a fatal illness until its own mechanics go awry. In I, Worker a robot maid loses its motivation to work. The double bill toured North America in 2013. The robot theatre has also performed Franz Kafka’s Metamorphosis and Anton Chekhov’s Three Sisters. A planned performance of Jean-Paul Sartre’s No Exit for a major French arts festival in 2015 was cancelled after Sartre’s estate refused permission for robot actors.

Hirata has provided the emotional X-factor to many of Ishiguro’s creations. “We call it the Oriza filter,” says the roboticist. It’s a codified and programmable pattern based on the director’s utterances and expressions: a movement of the body and hands, then the eyes, then the head, then an utterance after a 0.2 second delay. “If we apply the Oriza filter,” Ishiguro says, “our robots become so human-like.”
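In outline, the filter is simply a fixed ordering of gestures with a short pause before speech. A minimal Python sketch of that sequencing – the robot interface and function names here are hypothetical stand-ins, not the lab’s actual software – might look like this:

```python
import time


class StubRobot:
    """Stand-in for a robot controller; prints instead of driving actuators."""

    def gesture(self, part: str) -> None:
        print(f"[gesture] {part}")

    def say(self, text: str) -> None:
        print(f"[speech] {text}")


def oriza_filter(robot: StubRobot, utterance: str) -> None:
    """Apply the gesture-before-speech ordering the article describes."""
    robot.gesture("body and hands")  # first the body and hands move
    robot.gesture("eyes")            # then the eyes
    robot.gesture("head")            # then the head
    time.sleep(0.2)                  # a roughly 0.2-second pause
    robot.say(utterance)             # and only then the utterance


if __name__ == "__main__":
    oriza_filter(StubRobot(), "It's nice to meet you.")
```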

This choreography of conversation is very consistent between people, he says – so much so that he describes a patent based on Oriza’s movements as “how to represent human likeness”.

BUT WHILE SOME OF Ishiguro’s humanoids grow ever more expressive and human, others have developed in the opposite direction.

I shriek in horror when Ishiguro shows me the humanoid he has developed for elderly people with dementia. It resembles a thalidomide child with half arms ending in nubs and a torso without legs. “It’s a bit creepy,” Ishiguro admits, “but this works very well.” These ‘telenoids’ have been used in more than 70 hospitals in Japan, he says, as well as in Denmark, Germany and Austria.

Ishiguro shows me a movie clip of an elderly Japanese lady hugging a telenoid and chatting to it as she might with a favourite grandchild. By being so stripped down, genderless and ageless, “demented people can use their own imagination; they don’t feel any pressure,” he explains. For similar reasons, he says, the telenoids have also worked very well for children with autism.

A more diminutive variation is the Hugvie – a soft, huggable robot you can put a phone into. “It allows you to feel the presence of a person while you are talking [to them],” Ishiguro says. He shows me another video, of a room of noisy kindergarten kids who immediately quiet down when their Hugvies start talking to them.

No doubt the ability of these stripped-down humanoids to fulfil basic emotional needs also says something about what it means to be human.

“I THINK ERICA IS the most beautiful and most human-like autonomous android in the world … I hope.” This is how Ishiguro describes Erica in a video produced by The Guardian last April.

To me, Erica is disconcerting. It’s not her appearance – her pearly silicone skin and features aren’t all that life-like, and when she speaks her lips move up and down in a doll-like way. But when Etienne, a visitor to Ishiguro’s lab in Osaka, talks to her, things get uncanny.

Erica turns her head towards Etienne, her eyes focusing on his. “Hello there,” she says. “May I ask your name?” Etienne, he tells her. “It’s nice to meet you, Etienne,” she responds. “So,” – she nods and pauses – “what country are you from?” South Africa, Etienne tells her. “Oh really,” she exclaims, shrugging her shoulders. “I’ve never been to South Africa but I did love the film Chappie, which was made in South Africa. I think it raises some questions about artificial consciousness, and Chappie is very cute.”

Erica’s ability to track Etienne during the conversation comes courtesy of two in-built 16-channel microphone arrays, 14 infrared depth sensors and the ability to move her head 20 degrees. She cannot move her arms or legs – yet. Her expressive gestures – blinking, shoulder shrugs, head turning and an upward look with her eyes at pensive moments – have clearly been run through the ‘Oriza filter’. She also has facial-recognition capability and memory, so she knows when she has spoken with someone before, and can refer to past conversations.

But is this evidence for the workings of a mind? Her architect, Dylan Glas, suggests it is: “For about two years now I’ve been working with Erica to create her mind, her personality and get all the details working.”

This is where we get into fuzzy territory. No one knows how to create a human mind. Its fundamentals – consciousness and intelligence – elude even definition, let alone replication. “Nobody can define human intelligence,” Ishiguro tells me emphatically. “That is one of our final goals, to understand what human intelligence is.” He is equally adamant that no one is close to creating a human-like artificial intelligence.

He describes the likes of AlphaGo as having “insect-level intelligence”. Machine-learning algorithms learn winning patterns from vast data sets. AlphaGo, for instance, learned from 30 million moves by grand masters, and then from millions more by playing against itself. Ishiguro is unimpressed: “A human never does that; if we did we would get old and pass away before learning anything.”

The ability to learn patterns from data sets means AIs can recognise voices, faces and key words, and respond with material in their memory. Like Siri, Erica recognises key words, finds matches in her memory and answers with programmed responses. Erica is also good at faking. She can keep the conversation going even when it goes off-script. “Respond, acknowledge, pivot; it’s the same trick I occasionally used when talking to my grandma,” quips Croft.
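That keyword-and-pivot pattern is easy to sketch. The toy Python below is only an illustration of the general idea – answer a recognised key word, otherwise acknowledge and pivot back to a scripted question – and bears no relation to Erica’s actual code:

```python
# Canned answers keyed on recognisable words in the visitor's utterance.
RESPONSES = {
    "south africa": "I've never been to South Africa, but I did love the film Chappie.",
    "robot": "I think a lot about what robots and humans have in common.",
}

# Scripted questions to pivot back to when nothing matches.
PIVOTS = [
    "So, what country are you from?",
    "May I ask what brings you here today?",
]


def reply(utterance: str, turn: int) -> str:
    text = utterance.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer                      # respond: a programmed match
    # acknowledge, then pivot back on-script
    return "Oh really. " + PIVOTS[turn % len(PIVOTS)]


if __name__ == "__main__":
    print(reply("I'm from South Africa", 0))
    print(reply("The weather was terrible on the flight", 1))
```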

But there is something more to Erica – the beginnings of something that is distinctly human. “Erica has simple intentions and desires that control the behaviour,” Ishiguro says. “That is the main difference to Siri.”

Intentions and desires! It sounds scary – surely the first step towards robots taking over the world. But many roboticists think it is a necessary next step. “If we want robots to serve humans in the home,” says Toby Walsh, an AI expert at the University of NSW, “we will need them to have intentions and desires.”

Consider loading a dishwasher. Step-by-step instructions won’t cut it, explains Albu-Schäffer. A robot needs to recognise all kinds of objects under different lighting in different kitchens, retrieve them from odd positions, open a dishwasher door and finally stack dishes in an effective fashion: “We can’t describe this at the level of equations; this kind of planning and knowledge of environment is something we assimilate throughout our lives.” It is something, Albu-Schäffer jokes, his 18-year-old son has yet to master.

In robotics-speak “intention and desire” is what robots need to carry out such missions. From intention and desire come reasoning, planning and action. “People think giving robots intentions and desires means they will take over the world,” says Albu-Schäffer. “We just want them to load the dishwasher.”

So what sort of intentions and desires does Erica have? “In her current implementation she wants to talk, she wants to be well-recognised and she wants to take a rest,” Ishiguro says.
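One simple way to read that description is as a handful of numerical ‘desires’, with the strongest one choosing the next behaviour and fading once it has been acted on. The Python sketch below is an assumed mechanism, not the lab’s implementation; only the three desires themselves come from Ishiguro’s description:

```python
# Illustrative desire-driven behaviour loop. The three desires are the ones
# Ishiguro names; the mechanism itself is an assumption, not the lab's code.
DESIRES = {"talk": 0.6, "be recognised": 0.3, "rest": 0.1}

ACTIONS = {
    "talk": "strike up a conversation with the nearest visitor",
    "be recognised": "greet a returning visitor by name",
    "rest": "lower the eyes and pause",
}


def step(desires: dict) -> str:
    """Act on the strongest desire, then weaken it so another can take over."""
    strongest = max(desires, key=desires.get)
    desires[strongest] = max(0.0, desires[strongest] - 0.3)
    return ACTIONS[strongest]


if __name__ == "__main__":
    for _ in range(3):
        print(step(DESIRES))
```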

And Erica’s mind? Ishiguro says it is more in the mind of the beholder. He acknowledges that scientifically, “no”, she does not have a mind, but “for visitors, she does”. It is the sonzai-kan created by her beautiful silicone face, Hirata’s theatrical moves and the autonomous conversation.

“This is the Turing test after all,” says Walsh, referring to computing pioneer Alan Turing’s proposal that the true test of artificial intelligence is to pass for a human in conversation. Turing envisaged only text-based dialogue; Ishiguro’s humanoids use their bodies to enhance the illusion. “We’re being fooled by machines that have almost no intelligence,” Walsh notes.

JAPANESE CULTURE IS FASCINATED by robots. “Unlike North Americans,” Croft says, “the Japanese don’t seem to have the same problem with the uncanny valley.” Commentators often point to Shinto to explain Japan’s comfort with mechanical people. This animist religion, which ascribes souls to inanimate objects like trees or stones, plays a strong role alongside Buddhism in Japanese culture.

“That’s the reason we are so good for robots,” says Ishiguro. “We don’t care about flesh bodies to define a human.” He hopes that “people will accept Erica as some type of human”.

But his ultimate goal remains to understand what it is to be a human, especially his own consciousness.

His painting, these days with watercolours, seems to be pursuing that goal. Equipped with palette and brush, he is struggling to convey the sense of presence that objects have. How, for instance, does his consciousness perceive the presence of a chair?

“If I can represent my consciousness on the painting,” he says, “I don’t need to develop any more robots. I can go back to art.”

IMAGES 01 – 04 Hiroshi Ishiguro Laboratory

Ishiguro’s idea behind making copies of real people is to transfer the presence of the human to the robot.
