Who’s Watching?

Facial Recognition in CCTV: the Implications of Surveillance

The McGill Daily - Sci + Tech - Nabeela Jivraj

The air is a little frigid. You’re expecting an important call. You’re on your way home from class when you feel your phone vibrating in your pocket. With your parka, hat, and boots, you fumble to check your phone without pulling off your mittens. Fortunately, you only have to look at the screen to answer the call: your phone can be unlocked with either your fingerprint or your face.

Facial recognition technologies offer a newer, more personal type of security, in which artificial intelligence and high-precision cameras enable instantaneous identification of users. These technologies are touted for their supposed promise of increased security: national security forces, the argument goes, can increase safety by responding faster to violent crime and moving more quickly through investigations. For individual users, facial recognition purportedly offers control over their own personal information, whether as a means of locking access to a phone, a bank account, or other personal affairs.

Though China is the first nation to fully implement surveillance with this technology, Australia, India, and the United Kingdom have joined in trialing the technology over the past year. These national security systems rely on national databases of civilian profiles to identify people. More recently, facial recognition CCTV (closed-circuit television) has been added to the suite of modern video analytics for surveillance. In addition to identifying objects and animals, and logging how fast things are moving, national security systems using facial recognition CCTV are able to instantly identify who is in the frame. This could mean a decreased reliance on witnesses or in-person investigations. The technology allows investigations to go entirely digital, and enables police to arrive on scene to carry out arrests minutes after crimes are committed.

Like any other Tuesday night, red and blue lights bounce off the snow at the intersection. You hang up the phone as you turn the corner, and are immediately stopped by an officer. “Are you so-and-so?” they ask. “We saw you on camera.” You shake your head, no. They ask you to show ID. You try to remember if you took your ID to school today.

In the late 18th century, philosopher Jeremy Bentham proposed “the panopticon,” an architectural prison design which offered complete control of those being observed via internalized coercion. Because people in the panopticon may be watched at any moment, they are constantly aware of being observed, and are, therefore, under control.

With little regulation or policy surrounding facial recognition technology, authoritarian surveillance is entirely possible, and already happening. The use of facial recognition technology for surveillance is criticized on many fronts: when it works well, it poses a risk to civilian freedom and privacy, and when it doesn’t work, it makes innocent people vulnerable. Big Brother Watch, a non-profit civil liberties organization which campaigns against the rise of state surveillance, produced a report estimating that facial recognition technology has a high rate of both false positives and false negatives. False positives occur when the technology identifies someone incorrectly; false negatives are failures to correctly identify someone who is in a national facial recognition database. A central point of the report is that well-working, or perfected, facial recognition technology would essentially turn civilians into “walking ID cards.” Conversely, the use of surveillance technology to police concerts, festivals, and carnivals in both the UK and China has falsely identified the presence of national suspects to police over 90 per cent of the time.

The report also highlights how facial recognition technology is disproportionately inaccurate when it comes to minority groups: it frequently misidentifies women of minority ethnic groups in the United States. This is a major concern, as racial prejudice in police systems already disproportionately affects minorities. If these technologies risk widening that disparity, their “merits” should truly be called into question. The risk of racial prejudice in AI-based technologies is a recurring concern: a piece earlier this semester, titled “Is AI Racist?”, examined the fallibility of AI and its consistent issues with racial bias.
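To make the error rates concrete, here is a minimal sketch in Python using purely hypothetical numbers (not figures from the Big Brother Watch report) of how a false positive share above 90 per cent can arise among a system’s alerts:

    # Hypothetical example: a facial recognition system raises alerts at an event.
    alerts = 100            # people flagged as matches by the system (assumed figure)
    confirmed_matches = 8   # alerts later confirmed by human review (assumed figure)

    false_positives = alerts - confirmed_matches
    false_positive_share = false_positives / alerts
    print(f"False positives among alerts: {false_positive_share:.0%}")  # prints 92%

A false negative, by contrast, would be a person who is in the database walking past the camera without being flagged at all.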

As the price of these technologies continues to decrease, making them more accessible for other nations to follow suit, we have to ask whether we are adequately equipped for the repercussions of institutionalizing this technology, and giving in to increasingly authoritarian surveillance. With instantaneous identification, advances in surveillance move us closer to a modern, and vividly real, iteration of the panopticon. We constantly have to ask: who is watching us? Should they be?

