Cape Argus

Call for regulation of face-recognition use


MICROSOFT is calling for government regulation on facial-recognition software, one of its key technologies, saying such artificial intelligence is too important and potentially dangerous for tech giants to police themselves.

Company president Brad Smith urged lawmakers in a blog post to form a bipartisan and expert commission that could set standards and ward against abuses of face recognition, in which software can be used to identify a person from afar without their consent.

“This technology can catalogue your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike,” Smith said. “The only way to regulate this broad use is for the government to do so.”

Smith’s announcement comes amid a torrent of public criticism aimed at Microsoft, Amazon and other tech giants over their development and distribution of the powerful identification and surveillance technology – including from their own employees.

Microsoft last month faced widespread calls to cancel its contract with Immigration and Customs Enforcement, which uses a set of Microsoft cloud-computing tools that can also include face recognition.

In a letter to chief executive Satya Nadella, Microsoft workers said they “refuse to be complicit” and called on the company to “put children and families above profits”. The company said its work with the agency is limited to mail, messaging and office work.

The demand marks a rare call for greater regulation from a tech industry that has often bristled at Washington involvement in its work, believing government rules could hamper new technologies or destroy their competitive edge.

Smith wrote that the “sobering” potential uses of face recognition, now used extensively in China for government surveillance, should open the technology to greater public scrutiny and oversight. Allowing tech companies to set their own rules, Smith wrote, would be “an inadequate substitute for decision making by the public and its representatives”.

The company, Smith said, is “moving more deliberately with our facial recognition consulting and contracting work” and has turned down customers calling for deployments of facial-recognition technology in areas “where we’ve concluded that there are greater human rights risks”. The company did not immediately provide more details.

Regulators, Smith said, should consider whether police or government use of face recognition should require independent oversight; what legal measures could prevent the AI from being used for racial profiling; and whether companies should be forced to post notices that facial-recognition technology is being used in public spaces.

Smith also compared facial-recognition regulation with the public laws demanding seat belts and air bags in cars, saying the rules could be just as important as laws governing air safety, food and medicine.

“A world with vigorous regulation of products that are useful but potentially troubling is better than a world devoid of legal standards,” he said.

Civil rights and privacy experts have called for widespread bans on facial-recognition software, which they say could dangerously lead to misidentifications and more invasive surveillance by businesses, governments and the police.

Alvaro Bedoya, the executive director of Georgetown Law’s Centre on Privacy and Technology, said Microsoft’s statement was an encouraging acknowledgment of the technology’s potential threats to privacy.

“It’s a great list of questions. But the real question is how the company would answer them… and what companies like Microsoft will say behind the scenes when legislation is actually being drafted and negotiated,” Bedoya said.

“Should companies be able to scan the face of every man, woman, or child who walks down the street without their permission? Should the government be able to scan every pedestrian’s face in secret?” Bedoya said.

“Most Americans would answer those questions with a resounding ‘no.’”

No federal law restricts the use of facial-recognition technology.

The systems are increasingly being used by federal authorities, police departments, local governments and schools to beef up security and surveillance systems.

The technology, however, is far from perfect, and researchers have shown how people of colour are more likely to be mislabelled because of gaps in the data used to train the AI.

Microsoft said last month that it had trained its systems to more accurately recognise different skin colours.

Amazon, one of Microsoft’s key rivals in AI and cloud computing, offers its facial-recognition technology, Rekognition, to police departments at a low cost.

The Microsoft announcement highlights how the tech industry is grappling with how to seek out the lucrative contracts offered by government authorities while also satisfying employees and customers urging the company to abide by ethical guidelines. – The Washington Post


INTRUSIVE: Microsoft has called for government regulation on face-recognition software technology to set standards and guard against abuse.
