Bangkok Post

Faces Are the Next Target for Fraudsters

Hackers are pioneering new ways of tricking facial-recognition systems, from cutting the eyes out of photos to making a portrait ‘nod’ with artificial intelligence

- PARMY OLSON. Liza Lin and Qianwei Zhang contributed to this article.

Facial-recognition systems, long touted as a quick and dependable way to identify everyone from employees to hotel guests, are in the crosshairs of fraudsters. For years, researchers have warned about the technology’s vulnerabilities, but recent schemes have confirmed their fears and underscored the difficult but necessary task of improving the systems.

In the past year, thousands of people in the U.S. have tried to trick facial-identification verification to fraudulently claim unemployment benefits from state workforce agencies, according to identity-verification firm ID.me Inc.

The company, which uses facial-recognition software to help verify individuals on behalf of 26 U.S. states, says that between June 2020 and January 2021 it found more than 80,000 attempts to fool the selfie step in government ID matchups among the agencies it worked with.

That included people wearing special masks, using deepfakes (lifelike images generated by AI) or holding up images or videos of other people, says ID.me chief executive Blake Hall.

Facial recognition for one-to-one identification has become one of the most widely used applications of artificial intelligence, allowing people to make payments via their phones, walk through passport-checking systems or verify themselves as workers.

Drivers for Uber Technologies Inc., for instance, must regularly prove they are licensed account holders by taking selfies on their phones and uploading them to the company, which uses Microsoft Corp.’s facial-recognition system to authenticate them.

Uber, which is rolling out the selfie-verification system globally, adopted it because it had grappled with drivers hacking its system to share their accounts.

Uber declined to comment. Amazon.com Inc. and smaller vendors like Idemia Group S.A.S., Thales Group and AnyVision Interactive Technologies Ltd. sell facial-recognition systems for identification. The technology works by mapping a face to create a so-called face print. Identifying single individuals is typically more accurate than spotting faces in a crowd.
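
In outline, verifying a single individual comes down to comparing two of those face prints as numeric vectors. The sketch below is a minimal illustration of that idea, assuming some upstream model has already turned each face image into an embedding vector; the similarity measure and the threshold are illustrative, and real vendors’ pipelines differ.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two face prints (embedding vectors).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled_print: np.ndarray, selfie_print: np.ndarray,
           threshold: float = 0.8) -> bool:
    # Accept the claimed identity only if the two prints are close enough.
    # The 0.8 threshold is illustrative; vendors tune this per model.
    return cosine_similarity(enrolled_print, selfie_print) >= threshold

# Usage with stand-in vectors; a real system would compute these with a
# face-embedding model, which this sketch assumes exists upstream.
enrolled = np.random.default_rng(0).normal(size=128)   # stored face print
selfie = enrolled + np.random.default_rng(1).normal(scale=0.1, size=128)
print(verify(enrolled, selfie))  # True: the selfie matches the enrollment
```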

Still, this form of biometric identifica­tion has its limits, researcher­s say.

WHY CRIMINALS ARE FOOLING FACIAL RECOGNITION

Analysts at credit-scoring company Experian PLC said in a March security report that they expect to see fraudsters increasingly create “Frankenstein faces,” using AI to combine facial characteristics from different people to form a new identity to fool facial ID systems.

The analysts said the strategy is part of a fast-growing type of financial crime known as synthetic identity fraud, where fraudsters use an amalgamation of real and fake information to create a new identity.

Until recently, it was mostly activists protesting surveillance who targeted facial-recognition systems. Privacy campaigners in the U.K., for instance, have walked through urban areas with their faces painted in asymmetric makeup specially designed to scramble the facial-recognition software powering cameras.

Criminals have more reasons to do the same, from spoofing people’s faces to access the digital wallets on their phones, to getting through high-security entrances at hotels, business centers or hospitals, according to Alex Polyakov, the CEO of Adversa.ai, a firm that researches secure AI.

Any access-control system that has replaced human security guards with facial-recognition cameras is potentially at risk, he says, adding that he has confused facial-recognition software into thinking he was someone else by wearing specially designed glasses or Band-Aids.

A GROWING THREAT

The idea of fooling these automated systems dates back several years.

In 2017, a male customer of insurance company Lemonade tried to fool its claims-assessment AI by donning a blond wig and lipstick and uploading a video claiming his $5,000 camera had been stolen.

Lemonade’s AI systems, which analyze such videos for signs of fraud, flagged the video as suspicious and found the man was trying to create a fake identity.

He had previously made a successful claim under his normal guise, the company said in a blog post.

Lemonade, which says on its website that it uses facial recognition to flag claims submitted by the same person under different identities, declined to comment.

Earlier this year, prosecutors in China accused two people of stealing more than $77 million by setting up a shell company purporting to sell leather bags and sending fraudulent tax invoices to their supposed clients.

The pair was able to send out official-looking invoices by fooling the local government tax office’s facial-recognition system, which was set up to track payments and crack down on tax evasion, according to prosecutors cited in a March report in the Xinhua Daily Telegraph.

Prosecutors said in a posting on the Chinese chat service WeChat that the attackers had hacked the local government’s facial-recognition service with videos they had produced.

The Shanghai prosecutors couldn’t be reached for comment.

The pair bought high-definition photographs of faces from an online black market, then used an app to create videos from the photos to make it look like the faces were nodding, blinking and opening their mouths, the report says.

The duo, who had the surnames Wu and Zhou, used a special mobile phone that would turn off its front-facing camera and upload the manipulated videos when it was meant to be taking a video selfie for Shanghai’s tax system, which uses facial recognition to authenticate tax returns, the report says.

Wu and Zhou had been operating since 2018, according to prosecutors.

Spoofing a facial-recognition system doesn’t always require sophisticated software, according to John Spencer, chief strategy officer of biometric identity firm Veridium LLC.

One of the most common ways of fooling a face-ID system, or carrying out a so-called presentation attack, is to print a photo of someone’s face and cut out the eyes, using the photo as a mask, he says.

Many facial-recognition systems, such as the ones used by financial trading platforms, check to see if a video shows a live person by examining their blinking or moving eyes.

Most of the time, Mr. Spencer says, his team could use this tactic and others to test the limits of facial-recognition systems, sometimes folding the paper “face” to give it more perceived depth.

“Within an hour I break almost all of [these systems],” he says.
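
The blink checks Mr. Spencer’s team targets are often built on simple geometry over eye landmarks. Below is a minimal sketch of the widely used eye-aspect-ratio heuristic, assuming a separate face-landmark detector supplies six (x, y) points per eye; the threshold is illustrative. A printed mask with the eyes cut out lets a live attacker’s own eyes produce exactly this blink signal, which is why the tactic works against naive liveness checks.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    # eye: six (x, y) landmarks around one eye, as produced by a face
    # landmark detector (assumed to exist upstream of this sketch).
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply on a blink.
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return float((v1 + v2) / (2.0 * h))

def blinked(ear_series: list[float], threshold: float = 0.2) -> bool:
    # Treat a dip of the EAR below the (illustrative) threshold, with
    # open-eye frames around it, as evidence of a blink across the video.
    below = [e < threshold for e in ear_series]
    return any(below) and not all(below)
```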

Apple Inc.’s Face ID, which was launched in 2017 with the iPhone X, is among the most difficult to fool, according to scientists.

Its camera projects more than 30,000 invisible dots to create a depth map of a person’s face, which it then analyzes, while also capturing an infrared image of the face.

Using the iPhone’s chip, it then processes that image into a mathematical representation, which it compares with its own database of a user’s facial data, according to Apple’s website.

An Apple spokeswoman pointed to the company’s website, which states that for privacy reasons, Face ID data never leaves the iPhone.

Some banks and financial-services companies use third-party facial-identification services, rather than Apple’s Face ID system, to sign up customers on their iPhone apps, Mr. Spencer says, an approach that is potentially less accurate.

“You end up looking at regular cameras on a mobile phone,” he says. “There’s no infrared capability, no dot projectors.”

Many online-only banks ask users to upload video selfies alongside a photo of their driver’s licenses or passports, and then use a third party’s facial-recognition software to match the video to the ID.

The images sometimes go to human reviewers if the system flags something wrong, Mr. Spencer says.

SEEKING A SOLUTION

Mr. Polyakov regularly tests the security of facial-recognition systems for his clients and says there are two ways to protect such systems from being fooled.

One is to update the underlying AI models to recognize novel attacks by redesigning the algorithms that underpin them. The other is to train the models on as many examples as possible of the altered faces that could spoof them, known as adversarial examples.

Unfortunately, protecting a facial-recognition model from spoofing can require roughly 10 times as many images as training it in the first place, a costly and time-consuming process.

“For each human person you need to add the person with adversarial glasses, with an adversarial hat, so that this system can know all combinations,” Mr. Polyakov says.
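
As a concrete picture of that second defense, the sketch below shows one common way of generating adversarial examples and folding them into training, using the fast gradient sign method in PyTorch. It is a minimal illustration under the assumption of a generic image classifier, not ID.me’s or any vendor’s actual pipeline; it also makes plain why the training set balloons, since every clean image gains an adversarial counterpart.

```python
import torch
import torch.nn.functional as F

def fgsm_examples(model, images, labels, eps=0.03):
    # Fast Gradient Sign Method: nudge each pixel in the direction that
    # most increases the loss, producing "adversarial" variants.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + eps * images.grad.sign()).detach()

def adversarial_training_step(model, optimizer, images, labels):
    # Train on clean images and their adversarial counterparts together,
    # which is what multiplies the data the article says is needed.
    adv = fgsm_examples(model, images, labels)
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```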

Companies such as Google, Facebook and Apple are working on finding ways to prevent presentation attacks, according to Mr. Polyakov’s firm’s analysis of more than 2,000 scientific research papers about AI security.

Facebook, for instance, said last month that it is releasing a new tool for detecting deepfakes.

ID.me’s Mr. Hall says that by this past February, his company was able to stop almost all of the fraudulent selfie attempts on the government sites, bringing the number that got through down to single digits from among millions of claims.

The company got better at detecting certain masks by labeling images as fraudulent, and by tracking the devices, IP addresses and phone numbers of repeat fraudsters across multiple fake accounts, he says.

It also now checks how the light of a smartphone reflects off and interacts with a person’s skin or another material. Attempts at face-spoofing have also declined.
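
That cross-account tracking can be pictured as a simple lookup over shared identifiers. The sketch below is a toy illustration with hypothetical field names, not ID.me’s actual system: a new sign-up is flagged when any of its identifiers was previously seen on an account already marked fraudulent.

```python
from collections import defaultdict

# Maps each identifier (device ID, IP, phone) to the account IDs seen using it.
seen: dict[str, set[str]] = defaultdict(set)
fraud_accounts: set[str] = set()

def record(account_id: str, identifiers: list[str], is_fraud: bool) -> None:
    # Remember which accounts used which identifiers, and which were fraud.
    for ident in identifiers:
        seen[ident].add(account_id)
    if is_fraud:
        fraud_accounts.add(account_id)

def is_suspicious(identifiers: list[str]) -> bool:
    # Flag if any identifier was previously used by a known-fraud account.
    return any(seen[ident] & fraud_accounts for ident in identifiers)

record("acct1", ["dev42", "1.2.3.4", "+15550001"], is_fraud=True)
print(is_suspicious(["dev42", "9.9.9.9"]))  # True: shares a device ID
```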

“[Attackers] are typically unwilling to use their real face when committing a crime,” Mr. Hall says.

ID.ME: Thousands of people have used masks and dummies like these to try to trick facial-identification verification, according to ID.me.
