Nelson Mail

If you’re happy, an AI won’t know it

- OLIVER MOODY, The Times

When Siri, the automated iPhone assistant, is questioned about the meaning of happiness, it says: ‘‘Happiness: mental or emotional state of well-being characterised by pleasant emotions.’’

‘‘Siri,’’ The Times asked, ‘‘am I happy?’’ ‘‘No comment,’’ it replied.

Research suggests that Siri is not the only intelligent assistant that struggles with the mysteries of human joie de vivre. Russian computer scientists have developed an algorithm that is able to pick up hints of calm and anger from people’s voices, but is largely flummoxed by the sound of a good mood.

The study has echoes of Marvin, the super-intelligent but profoundly depressed robot from the Hitchhiker’s Guide to the Galaxy books, who once said that his capacity for happiness could be fitted inside a matchbox ‘‘without taking out the matches first’’.

Researchers are trying to give computers the ability to detect people’s emotions from the sound of their speech, the words they choose or their facial expressions. These skills will be important if robots and artificial intelligence (AI) programs are to begin routinely interacting with humans in sensitive settings such as the handling of customer complaints.

Scientists in Germany have designed an animated 3D avatar called Greta that can recognise the traces of some emotions in the human voice and reflect them on her face. At Stanford University, California, researchers have built software that can detect 14 feelings including pride, boredom and contempt, although it is correct only about half the time.

Scientists at the Nizhny Novgorod campus of Russia’s National Research University Higher School of Economics have achieved an overall accuracy of 71 per cent for eight emotions, ranging from disgust to surprise.

Most research has previously used an approach known as feature selection, in which computers look for particular patterns of sound linked to emotions. Alexander Ponomarenko and his colleagues took a different tack, turning voices into a visual format called a spectrogram. ‘‘It can be applied in some systems like Siri and Alexa,’’ Dr Ponomarenko said.
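The study’s own pipeline is not reproduced here, but the core idea it relies on, turning a voice recording into a spectrogram image, can be sketched in a few lines of NumPy. This is an illustrative sketch only: the frame length, hop size and Hann window below are common defaults, not the parameters the researchers used.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform.

    Slides a Hann-windowed frame along the signal and takes the
    FFT magnitude of each frame, producing a 2D time-frequency
    image of the kind an image-based classifier can consume.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    # rfft keeps only the non-negative frequencies of a real signal
    return np.abs(np.fft.rfft(frames, axis=1))

# Example: one second of a 440 Hz tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (n_frames, frame_len // 2 + 1)
```

In a setup like the one described, each such image would then be fed to a classifier trained to map spectrograms to emotion labels, rather than hand-picking acoustic features.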

The system interpreted recordings of 24 actors reading phrases in different emotional states. It was good at detecting calm, disgust and neutral speech, but could spot happiness only 45 per cent of the time. The emotion was often confused with fear and anger.

‘‘Unfortunately the model has some difficulties separating happy and angry emotions,’’ the scientists wrote. ‘‘Most likely the reason for this is that they are the strongest emotions, and as a result their spectrograms are slightly similar.’’

The research is published in a book issued by the International Conference for Neuroinformatics.
