Philippine Canadian Inquirer (National)

Medical robots: their facial expressions will help humans trust them

- BY JOEL PINNEY, Cardiff Metropolitan University

Robots, AI and autonomous systems are increasingly being used in hospitals around the world. They help with a range of tasks, from surgical procedures and taking vital signs to helping out with security.

Such “medical robots” have been shown to help increase precision in surgeries and even reduce human error in drug delivery through their automated systems. Their deployment into care homes has also shown they have the capability to help reduce loneliness.

Many people will be familiar with the smiling face of the Japanese Pepper robots (billed in 2014 as the world’s first robot that reads emotions). Indeed, “emotional” robot companions are now widely available. But despite the apparent technical and emotional advantages, research shows that a clear majority of people refuse to trust robots and machines with important and potentially life-saving roles.

To be clear, I’m not saying robots should replace human doctors and nurses. After all, people who are scared and ill don’t forget the experience of someone holding their hand, explaining complicated issues, empathising and listening to their worries. But I do think robots will play a vital role in the future of healthcare and dealing with possible future pandemics.

So I am on a mission to understand why some people are reluctant to trust medical robots. My research investigates the applications of robot intelligence. I am particularly interested in how different robotic facial expressions and design elements, like screens on the face and chest, may contribute to the construction of a medical robot that people will more readily trust.

Past research has shown that our facial cues can influence a person’s ability to trust. So to begin with, I conducted a questionnaire with 74 people from across the world and asked them if they would trust a robot doctor in everyday life. Only 31% of participants said yes. People were also reluctant to see robots take on other high-risk jobs, such as police officer and pilot.

‘Facial’ expressions

To establish how to build a robot that exuded trustworthiness, I began to look into a range of facial expressions, designs and modifications to the Canbot-u03 robot. This robot was selected for its non-intimidating appearance, standing only 40cm tall. It forms part of the Canbot family and is advertised as a “sweet companion and caring partner” for “24 hours of unconditional companionship and house managing”.

Once I’d found my robot, I decided to incorporate psychological research which has suggested that facial expressions can help to determine trustworthiness. Smiling indicates a trusting nature while angry expressions are associated with dishonesty, for example.

With this in mind, I began looking at the facial expressions of the robot and how the manipulation of these features may improve human/robot interaction.

As expected, the robots with “happy/smiling” faces were generally accepted and trusted more. Meanwhile, robots with distorted, angry and unfamiliar faces were classed as “uncertain and uncomfortable” and intrinsically untrustworthy.

The uncanny valley

I also designed a robot with human eyes, the design that took on the most human characteristics. Surprisingly, this was also largely rejected, with 86% of participants saying they disliked its appearance.

Participants said they wanted a robot that resembled humans with a face, a mouth and eyes but – crucially – not an identical representation of human features. In other words, they still wanted them to look like a robot, not some unsettling cyborg hybrid.

These findings align with a phenomenon called the “uncanny valley” which states that we accept robots with a human likeness – but only up to a certain point. Once we cross this point, and the robot looks too human, our acceptance of it can swiftly go from positive to negative.

The chest screen also provides an additional platform for conveying information and building trust. In a hospital, this may be used for communicating data to patients and staff. For me, the interest lies in how the facial and chest screens can work together to communicate the trustworthiness of this information.

To evaluate the influence of both facial and chest screens, we introduced a range of distinctive modifications. For example, there were hand-drawn faces, happy cartoon faces and cyborg faces, as well as cracked and blurry screens or screens with error messages on them.

We asked participants to decide which robot was displaying the correct answer to a complex mathematical problem, based solely on the robot’s appearance. This was carried out under strict time constraints. Because the equation was too complicated to solve in the time allowed, participants had to rely on the robot’s visual appearance to decide which answer they felt was honest – and therefore correct. The vast majority of participants were repeatedly drawn to trusting only the robot that had a happy or neutral face.

So the combination of facial expressions and what is displayed on the screen is important. For serious medical messages, a serious or impassive “face” would be needed to match the gravity of the statement. But general communication with patients may require a more empathetic or happy appearance.

I believe that building more human characteristics into robot design will help build trust. But we also have to be aware of the limits. ■

(AUTHOR PROVIDED) Canbot robots with different facial expressions and designs.
