
How do computers ‘see’ faces and other objects?

- MATT O’BRIEN

Computers have been able to recognise human faces in images for decades, but artificial intelligence systems are now rivalling people's ability to classify objects in photos and videos.

That’s sparking increased interest from government agencies and businesses, which are eager to bestow vision skills on all sorts of machines. Among them: self-driving cars, drones, personal robots, in-store cameras and medical scanners that can search for skin cancer. There are also our own phones, some of which can now be unlocked with a glance.

HOW DOES IT WORK?

Algorithms designed to detect facial features and recognise individual faces have grown more sophisticated since early efforts decades ago.

A common method has involved measuring facial dimensions, such as the distance between the nose and ear or from one corner of the eye to another. That information can then be broken down into numbers and matched to similar data extracted from other images. The closer they are, the better they match.
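As a rough illustration of that measurement-and-matching idea (a sketch, not any vendor's actual system), the Python snippet below treats a face as a list of distances between hypothetical landmark points and compares two faces by how far apart those lists are.

```python
# A minimal sketch of measurement-based face matching: represent each face
# as a vector of distances between landmark points, then compare vectors.
# The landmark coordinates here are hypothetical placeholder values.
import math

def feature_vector(landmarks):
    """Turn a dict of (x, y) facial landmarks into a list of pairwise distances."""
    names = sorted(landmarks)
    dists = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            (x1, y1), (x2, y2) = landmarks[a], landmarks[b]
            dists.append(math.hypot(x2 - x1, y2 - y1))
    return dists

def difference(face_a, face_b):
    """Smaller Euclidean distance between feature vectors = closer match."""
    va, vb = feature_vector(face_a), feature_vector(face_b)
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(va, vb)))

# Hypothetical landmark positions (in pixels) extracted from two photos.
photo_1 = {"left_eye": (120, 95), "right_eye": (180, 96), "nose": (150, 140), "left_ear": (95, 130)}
photo_2 = {"left_eye": (122, 94), "right_eye": (181, 97), "nose": (151, 142), "left_ear": (96, 129)}

print(difference(photo_1, photo_2))  # a value near 0 suggests the same person
```

Real systems also normalise for scale, pose and lighting before comparing, but the core idea is the same: similar faces yield similar numbers.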

Such analysis is now aided by greater computing power and huge troves of digital imagery that can be easily stored and shared.

FROM FACES TO OBJECTS (AND PETS)

“Face recognition is an old topic. It’s always been pretty good. What really got everyone’s attention is object recognition,” says Michael Brown, a computer science professor at Toronto’s York University who helps organise the annual Conference on Computer Vision and Pattern Recognition.

Research over the past decade has focused on the development of brain-like neural networks that can automatically “learn” to recognise what’s in an image by looking for patterns in big data sets. But humans continue to help make machines smarter by labelling photos, as happens when Facebook users tag a friend. An annual image recognition competition that lasted from 2010 to 2017 drew top researchers from companies like Google and Microsoft. Among the revelations: computers can do better than humans at distinguishing between various Welsh corgi breeds, in part because they’re better able to quickly absorb the knowledge it takes to make those distinctions.
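To give a flavour of what that “learning” from labelled photos looks like, here is a minimal, hypothetical sketch in PyTorch: a tiny neural network repeatedly compares its guesses against human-supplied labels and nudges its internal weights to shrink the error. The images and labels are random placeholders standing in for real tagged photos.

```python
# A minimal sketch of supervised learning on labelled images (placeholder data).
import torch
import torch.nn as nn

num_classes = 3                       # e.g. "cat", "dog", "corgi" (hypothetical labels)
images = torch.rand(32, 3, 64, 64)    # 32 fake RGB photos, 64x64 pixels
labels = torch.randint(0, num_classes, (32,))  # human-supplied tag for each photo

model = nn.Sequential(                # a tiny convolutional neural network
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, num_classes),
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                # repeatedly look for patterns in the labelled set
    optimiser.zero_grad()
    predictions = model(images)
    loss = loss_fn(predictions, labels)  # how wrong were the guesses?
    loss.backward()                   # work out how to adjust each weight
    optimiser.step()                  # adjust the weights to reduce the error
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Scaled up to millions of labelled photos and far larger networks, this is the basic recipe behind the object-recognition results described above.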

But computers have been confused by more abstract forms, such as statues.

THE ‘CODED GAZE’

The growing use of face recognition by law enforcement has highlighted long-standing concerns about racial and gender bias.

A study led by MIT computer scientist Joy Buolamwini found that face recognition systems built by companies including IBM and Microsoft were much more likely to misidentify darker-skinned people, especially women. (Buolamwini called this effect “the coded gaze”.) Both Microsoft and IBM recently announced efforts to make their systems less biased by using bigger and more diverse photo repositories to train their software.

Photo: A customer tries out the face recognition feature on an iPhone X in Singapore.
