Microsoft: Regulate face recognition

Software giant wants to shift oversight to Congress


Microsoft wants Congress to regulate facial recognition technology as concerns grow that it could be used to invade privacy and improperly monitor people.

Some civil-liberties groups and employees have called on tech companies to restrict the use of facial recognition, but Microsoft President Brad Smith wrote in a lengthy blog post Friday that the “only way” to manage and regulate the controversial technology is for government to do it.

Employees at Microsoft, Google and Amazon pressured each of their companies in recent weeks to cut ties with some government agencies that they believe are violating civil liberties.

Some Microsoft workers questioned whether the company was providing its facial recognition technology, Face API, to Immigration and Customs Enforcement amid public outcry about the agency separating children from their migrant parents at the border. That’s not happening, Smith wrote in his blog, echoing the company’s previous statement that its contract with ICE involves supporting email and calendar systems.

Facial recognition technology — enabled by ubiquitous cameras and increasingly accurate image analysis software — has risen quickly into the public consciousness to become a top digital-privacy concern. The technology has spurred worries that it could be used by governments to widely monitor people without their knowledge or consent.

Many people, including some employees, have called on the companies developing the technology to put restrictions on its use. But that’s not the place of tech companies, Smith wrote.

“While we appreciate that some people today are calling for tech companies to make these decisions — and we recognize a clear need for our own exercise of responsibility, as discussed further below — we believe this is an inadequate substitute for decision making by the public and its representatives in a democratic republic,” he wrote.

Tech companies have long been inventing new systems and services faster than the government can anticipate and regulate them. That often means services are fairly mature and widely adopted before the government steps in. Even then, tech companies often resist attempts to regulate their products and services.

Some believe the industry needs to accept more responsibility for its inventions. Atti Riazi, chief information technology officer at the United Nations, told Bloomberg this spring that tech companies had a duty to consider unintended consequences. “You can’t just create and innovate without thinking,” she said.

Smith’s post outlined some of the steps the company is taking to ensure that facial recognition is accurate and ethical, including “going more slowly” in rolling out the technology.

“‘Move fast and break things’ became something of a mantra in Silicon Valley earlier this decade,” Smith wrote. “But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.”

Microsoft has turned down some customer requests for the service where it found there could be “greater human rights risks,” Smith said.

Microsoft also has established both internal and external ethics panels for artificial intelligence, and published a short book with guidelines and goals for AI earlier this year.

Facial recognition technology has been developed by multiple companies and is used in everyday apps, such as photo organizers on phones, to categorize pictures by people’s faces or to suggest who to tag in a Facebook post. It is also starting to be used by law-enforcement agencies to catch criminals and find missing people.

That law-enforcement use came under fire in late May when the American Civil Liberties Union criticized Amazon for selling its technology, Rekognition, to government entities, saying it could violate people’s civil liberties and be easily abused.

“People should be free to walk down the street without being watched by the government,” the ACLU wrote in a letter to Amazon CEO Jeff Bezos.

Surveillance worries are not unfounded. In China, facial recognition technology is used to display the faces of jaywalkers on big outdoor screens to embarrass them.

Barry Friedman, a professor at New York University School of Law, said facial recognition is exactly the type of tool the government should regulate because of the nature of the “controversial and complicated” technology. Friedman serves as director of the school’s Policing Project, which encourages police departments to be transparent and work with communities to make policies.

“This is one of those situations where if we don’t get a handle on it, it’s going to be hard to get the toothpaste back in the tube,” he said.

Complicating the concerns, the technology is far from perfect and can misidentify people. The mistakes are especially stark when the technology is identifying people of color.


Photo: Shankar Narayan, legislative director of the ACLU of Washington, left, speaks at a news conference outside Amazon headquarters in Seattle Monday.