Orlando Sentinel

New tech problem must be faced

Concerns grow over likely bias in facial recognition

- By Dina Bass

An unusual consensus emerged recently between artificial intelligence researchers, activists, lawmakers and many of the largest technology companies: Facial recognition software breeds bias, risks fueling mass surveillance and should be regulated.

Deciding on effective controls and acting on them will be a lot harder.

This month, the Algorithmic Justice League and the Center on Privacy & Technology at Georgetown University Law Center unveiled the Safe Face Pledge, which asks companies not to provide facial AI for autonomous weapons or sell to law enforcement unless explicit laws are debated and passed to allow it.

Microsoft said the software carries significant risks and proposed rules to combat the threat. Research group AI Now, which includes AI researchers from Google and other companies, issued a similar call.

“Principles are great — they are starting points. Beyond the principles we need to be able to see actions,” said Joy Buolamwini, founder of the Algorithmic Justice League.

None of the biggest makers of the software — companies such as Microsoft, Google, Amazon.com, Facebook and IBM — has signed the Safe Face Pledge.

Large tech companies may be reluctant to commit to a pledge like this, even if they’re concerned about negative consequences of the software.

That’s because it could mean walking away from lucrative contracts for the emerging technology. The market for video surveillance gear is worth $18.5 billion a year, and AI-powered equipment for new forms of video analysis is an important emerging category, according to researcher IHS Markit. Microsoft and Facebook said they’re reviewing the pledge. Google declined to comment.

“There are going to be some large vendors who refuse to sign or are reluctant to sign because they want these government contracts,” said Laura Moy, executive director of the Center on Privacy & Technology.

Microsoft is still selling facial recognition software to governments, a fact that the American Civil Liberties Union took the company to task for last week. It asked Microsoft to halt the sales and join the organization’s call for a federal moratorium on government use of the technology.

The use of facial recognition for surveillance, policing and immigration is being questioned because researchers, including Buolamwini, have shown the technology isn’t accurate enough for crucial decisions and performs worse on darker-skinned people.

Providers have responded differently to the scrutiny. Microsoft is defending government contracts generally, while asking for laws to regulate the space.

Amazon took issue with research by the ACLU into the Rekognition program it sells to police departments, but the company has also said it’s working to better educate police on how to use the software.

Companies including Microsoft, Facebook and Axon, a maker of police body cameras, have formed AI ethics boards, and Google published a set of more general AI principles in June.

The Safe Face Pledge asks companies to “show value for human life, dignity and rights, address harmful bias, facilitate transparency” and make these commitments part of their business practices. This includes not selling facial recognition software to identify targets where lethal force may be used.

The pledge also commits companies to halt sales of face AI products that are not “subject to public scrutiny, inspection, and oversight.”

There are also commitments to internal bias reviews as well as checks by outside experts, along with a requirement to publish easy-to-understand information on how these technologies are used and by which customers.

Startups Simprints Technology, Robbie AI Inc. and Yoti Ltd. were the inaugural signers of the pledge.

“It’s kind of the wild west when it comes to use of automated facial analysis technology, and it’s also an area that’s shrouded in secrecy,” Moy said. The Safe Face Pledge tries to address both areas, but Moy also believes new laws are needed.

That’s where Microsoft is focusing its attention. The company recently detailed the laws it would like to see passed. Microsoft president and chief legal officer Brad Smith put the chances of federal legislation in 2019 at 50-50, most likely as part of a broader privacy bill.

But he said there’s a far better shot at getting something passed in a state or even a city next year. If it’s an important enough region, say California, that would probably be enough to make software sellers change their products and practices overall, he said.

In the meantime, Microsoft said it will turn down some AI contracts where it has concerns, and has already done so. Smith wouldn’t specify which deals it has rejected, and he has also said Microsoft will continue to be a key vendor to the U.S. government.

“We’ve turned down business when we thought there was too much risk of discrimination, when we thought there was a risk to the human rights of individuals,” Smith said.

Amazon thinks it’s too soon to regulate.

“There are many positive and important uses of this technology that are being implemented today, to include preventing human trafficking, reuniting missing children with their parents, and improving security,” the company said in a statement. “It is too early to come out with blanket statements supporting broad regulation, given this technology is in the early stages of deployment, and we have received no indications of misuse.”

Still, the company said it will work with governments on standards and guidelines for the technology to maintain privacy and civil liberties.

Facebook said it’s committed to using the technology responsibly and supports thoughtful proposals. The social-networking company, which uses face recognition to identify people in photos users post, said it’s eager to work with Microsoft and others.

QILAI SHEN/BLOOMBERG NEWS — Employees look into facial recognition devices and swipe their badges to enter the assembly line area at a Pegatron factory in Shanghai, China. Concerns are growing over the possible bias that facial recognition brings.
