FACEOFF WITH TECH
APPLE’S SIGNATURE FEATURE ON ITS NEW iPHONE X, FACE ID, HAS SPARKED OFF CONCERNS ABOUT PRIVACY AND WHAT HAPPENS WHEN TECHNOLOGY GOES ALL WRONG.
This isn’t the first time the face has been used for biometrics, but now that Apple has jumped into the fray, you can bank on widespread adoption of the technology. Apple’s implementation of Face ID isn’t easy to mimic. It involves an entire system, called TrueDepth, with an infrared camera, dot projector, proximity sensor, and ambient light sensor; it projects 30,000 dots to build a 3D model of the face.
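Apple hasn’t published the details of its matching algorithm, but face verification systems of this kind generally reduce a scan to a numeric template and compare it against the one enrolled on the device. A minimal sketch of that idea, assuming a cosine-similarity match over hypothetical, toy-sized template vectors (not Apple’s actual method):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two template vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches(enrolled: np.ndarray, probe: np.ndarray, threshold: float = 0.9) -> bool:
    """Accept the probe scan if its template is close enough to the enrolled one."""
    return cosine_similarity(enrolled, probe) >= threshold

# Hypothetical 3-dimensional templates; a real system would use far more dimensions.
ENROLLED = np.array([0.10, 0.90, 0.30])      # owner's enrolled template
PROBE_SAME = np.array([0.12, 0.88, 0.31])    # same face, slight variation in the scan
PROBE_OTHER = np.array([0.90, 0.10, 0.50])   # a different face

print(matches(ENROLLED, PROBE_SAME))   # near-identical template passes
print(matches(ENROLLED, PROBE_OTHER))  # dissimilar template is rejected
```

The threshold controls the trade-off: set it too low and strangers get in (false accepts); set it too high and the owner gets locked out (false rejects).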
People used to laugh at phones that could be fooled into unlocking by a photo of the owner, but Apple’s Face ID has dangers of its own. A mugging victim, for example, could simply be forced to look at the phone to unlock it.
Facial recognition is only one aspect of machine perception. The next step is for machines to interpret what they see. Combining machine perception with machine learning will produce applications that affect everyday life. This is already happening.
Several airports, including Zurich and Amsterdam, are using biometric face recognition at border control so that no time is wasted physically handing over passports to airport personnel.
But brace yourself: some bizarre uses are also coming up. At a park in Beijing, face recognition is hooked up to a toilet paper dispenser, which rolls out just 27 inches of the precious material per person. You can’t go in for another round before nine minutes are up. Also in China, KFC uses the tech to make food suggestions, and traffic authorities use it to combat jaywalking. Retailers are using it to sell you what they think you need.
As the technology becomes more developed, it could prove dangerous in the hands of authoritarian regimes and rogue agencies. The FBI, for example, has been accused of using face recognition data without public declarations of privacy implications.
The cleverer it gets, the more dangerous it becomes. Two researchers recently used AI to see if they could detect a person’s sexuality from their faces. The algorithm could distinguish between gay and heterosexual men 81 per cent of the time, and gay and heterosexual women 71 per cent of the time, outperforming human judges. Privacy advocates are right to raise a hue and cry.