Faces stump humans, not computers
Computer scientists are trying to teach an algorithm to tell the difference between Chinese, Japanese and Korean faces.
Staff at the University of Rochester in the US wanted to explore how advances in artificial intelligence have made it easier for computers to interpret pictures in sophisticated ways.
But, intentionally or not, their research taps into the uncomfortable history of how Asians have struggled to fit into Western life. The scientists were inspired by a quiz created by Japanese-American web designer Dyske Suematsu.
Fifteen years ago, Suematsu decided, half-jokingly, to investigate the stereotype that Asians all look alike. He threw a party in New York City and invited Asian friends. He put their portraits on the internet and asked strangers to guess their ethnicity.
The website was a huge hit, quickly becoming one of the web’s first viral sensations.
Suematsu says that millions have registered and taken the test. On average, people identify 7 out of 18 photos correctly – an accuracy rate of about 39 per cent. That’s barely better than pure guessing, which would yield an accuracy rate of 33 per cent, on average.
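The quiz arithmetic checks out, and can be verified in a couple of lines of Python:

```python
# Accuracy on Suematsu's quiz: 7 correct out of 18 photos,
# versus pure guessing among three ethnicities (1 in 3).
quiz_accuracy = 7 / 18    # about 39 per cent
chance_accuracy = 1 / 3   # about 33 per cent

print(f"quiz:   {quiz_accuracy:.1%}")    # 38.9%
print(f"chance: {chance_accuracy:.1%}")  # 33.3%
```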
‘‘This is a challenging task even for humans,’’ said Jiebo Luo, a professor of computer science at the University of Rochester. ‘‘I asked some of my students to take the test and they all failed horribly – even though all of them were Asian.’’
Luo and his students suspected that a trained artificial intelligence might be able to perform as well as humans, or even better.
Recently, they collected hundreds of thousands of pictures of East Asian faces and fed them through an algorithm to figure out just what made Chinese, Japanese, and Korean people look different. In a draft report detailing their results, they provide samples of the pictures fed to the computer.
The scientists expected a difficult task, so they were surprised to discover that the computer could achieve accuracy rates of more than 75 per cent.
This is far better than humans did on Suematsu’s quiz. The computer’s advantage is that it could draw on a vast library of faces, Luo explained. ‘‘Our machine has seen far more examples than any living person,’’ he said.
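The effect of a larger library of labelled examples can be illustrated with a toy sketch – emphatically not the Rochester team's method. Here a one-nearest-neighbour classifier labels made-up two-dimensional "face features" drawn from three invented clusters (the cluster centres, noise level, and counts are all assumptions for illustration); accuracy climbs as the reference library grows:

```python
import random

random.seed(0)

# Invented cluster centres standing in for three nationalities' "features".
CENTRES = {"Chinese": (0.0, 0.0), "Japanese": (1.0, 0.0), "Korean": (0.5, 1.0)}

def sample(label):
    """Draw a noisy feature point around the label's centre."""
    cx, cy = CENTRES[label]
    return (cx + random.gauss(0, 0.5), cy + random.gauss(0, 0.5)), label

def predict(point, library):
    """Return the label of the closest example in the library (1-NN)."""
    px, py = point
    nearest = min(library, key=lambda ex: (ex[0][0] - px) ** 2 + (ex[0][1] - py) ** 2)
    return nearest[1]

def accuracy(n_train, n_test=500):
    """Accuracy of 1-NN given a library of n_train labelled examples."""
    labels = list(CENTRES)
    library = [sample(random.choice(labels)) for _ in range(n_train)]
    tests = [sample(random.choice(labels)) for _ in range(n_test)]
    hits = sum(predict(point, library) == label for point, label in tests)
    return hits / n_test

for n in (3, 30, 300):
    print(f"library of {n:3d} examples -> accuracy {accuracy(n):.0%}")
```

With only a handful of examples the classifier hovers near chance; with hundreds it comfortably beats the roughly 39 per cent humans manage on the quiz.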
Lack of experience is a major reason why humans sometimes struggle to tell foreigners apart. Psychologists call it the ‘‘cross-race’’ effect: We are much better at distinguishing members of our own race or ethnicity than members of other races or ethnicities.
Studies suggest that with training, people can improve at recognising the faces of people from different ethnic backgrounds. As Luo and his colleagues have demonstrated, computers might even be better than we are at noticing some of these subtle distinctions.
It’s not all about physical proportions. When the scientists went to investigate how the computer was making its decisions, they discovered an interesting pattern.
Many of the cues that stood out to the algorithm were cultural features, like hairstyles or glasses or facial expressions. This makes sense, since the people of China, Japan and Korea have somewhat shared ancestries, but distinct senses of fashion.
Without being told, the computer seemed to realise that our concepts of race and national identity transcend genetics – they are cultural ideas.
Luo imagines that this kind of research might one day be used in targeted ads or counter-terrorism. Being able to discern a person’s nationality from their profile photo would help marketers better tailor online messages.
Or, in a more Orwellian context, airports could set up cameras to racially profile people in the name of homeland security.
The work of Luo’s lab rebukes the lazy notion that Asians all look the same.
If a software routine can be trained to easily recognise the differences between a Chinese person, a Japanese person, and a Korean person, then that challenges Westerners to pay close attention, to work harder to understand the diverse mix of people. Washington Post