The Parliament Magazine

TIME TO STOP RACISM IN AI

The European Commission’s upcoming proposal may be the last opportunity to prevent harmful uses of AI-powered technologies, many of which are already marginalising Europe’s racialised communities.

- Sarah Chander

Whether it’s police brutality, the disproportionate over-exposure of racial minorities to COVID-19 or persistent discrimination in the labour market, Europe is “waking up” to structural racism. Amid the hardships of the pandemic and the environmental crisis, new technological threats are arising. One challenge will be to contest the ways in which emerging technologies, like Artificial Intelligence (AI), reinforce existing forms of discrimination. From predictive policing systems that disproportionately score racialised communities with a higher “risk” of future criminality, all the way to the deployment of facial recognition technologies that consistently misidentify people of colour, we see how so-called “neutral” technologies are secretly harming marginalised communities.

The use of data-driven systems to surveil and provide a logic for discrimination is not novel. Biometric data collection systems such as fingerprinting have their origins in colonial systems of control. The use of biometric markers to experiment, discriminate and exterminate was also a feature of the Nazi regime. To this day in the EU, we have seen a number of similar, worrying practices, including the use of pseudo-scientific ‘lie detection’ technology piloted on migrants in the course of their visa application process. This is just one example of governments, institutions and companies extracting data from people in extremely precarious situations. Many of the most harmful AI applications rely on large datasets of biometric data as a basis for identification, decision-making and predictions.

What is new in Europe, however, is that such undemocratic projects could be legitimised by a policy agenda “promoting the uptake of AI” in all areas of public life. The EU policy debate on AI, while recognising some “risks” associated with the technology, has overwhelmingly focused on the purported widespread “benefits” of AI. If this means shying away from clear legal limits in the name of promoting “innovation”, Europe’s people of colour will be the first to pay the price. Soon, MEPs will need to take a position on the European Commission’s legislative proposal on AI. While EU leaders such as Executive Vice-President Vestager and Vice-President Jourová have spoken of the need to ensure AI systems do not amplify racism, the Commission has been under pressure from tech companies like Google to avoid “over-regulation”.

Yet, the true test of whether innovations are worthwhile is how far they make people’s lives better. When industry claims human rights safeguards will “hinder innovation”, they are creating a false distinction between technological and social progress. Considerations of profit should not be used to justify discriminatory or other harmful technologies. Human rights mustn’t come second in the race to innovate; rather, they should define innovations that better humanity. A key test will be how far the EU’s proposal recognises this.

As the Commission looks to balance the aims of “promoting innovation” and ensuring technology is “trustworthy” and “human-centric”, it may suggest a number of limited regulatory techniques. The first is to impose protections and safeguards only for the most “high-risk” AI applications. This would mean that, despite the unpredictable and ever-changing nature of machine learning systems, only a minority of systems would actually be subject to regulation, even though the harms are far more widespread. The second technique would be to take limited actions requiring technical “de-biasing”, such as making datasets more representative. However, such approaches rarely prevent discriminatory outcomes from AI systems. Until we address the underlying reasons why data encodes systemic racism, these solutions will not work.

Both of these proposals would provide insufficient protection from systems that are already having a vastly negative impact on human rights, in particular for those of us who are already over-surveilled and discriminated against. What these “solutions” fail to address is that, in a world of deeply embedded discrimination, certain technologies will, by definition, reproduce broader patterns of racism. There is no “quick fix”, no risk assessment sophisticated enough, to undo centuries of systemic racism and discrimination. The problem is not just baked into the technology, but into the systems in which we live. In most cases, data-driven systems will only make discrimination harder to pin down and contest.

Digital, human rights and anti-racist organisations have been clear that more structural solutions are needed. One major step, put forward by the pan-European ‘Reclaim Your Face’ campaign, is an outright ban on destructive biometric mass surveillance technologies. The campaign, coordinated by European Digital Rights (EDRi), includes 45 organisations calling for a permanent end to technologies such as facial, gait, emotion and ear canal recognition that target and disproportionately oppress racialised communities.

The Reclaim Your Face European Citizens’ Initiative petition aims to collect one million signatures calling for a Europe-wide ban and promoting a future without surveillance, discrimination and criminalisation based on how we look or where we are from. Beyond facial recognition, EDRi, along with 61 other human rights organisations, has called on the European Union to include “red lines”, or legal limits on the most harmful technologies, in its laws on AI, especially those that deepen structural discrimination. The upcoming AI regulation is the perfect opportunity to do this.

AI may bring significant benefits to our societies, but these benefits must be for us all. We cannot accept technologies that only benefit those who sell and deploy them. This is especially true in areas rife with discrimination. Some decisions are too important and too dangerous to be made by an algorithm. This is the EU’s opportunity to make people a priority and stop discriminatory AI before it’s too late.
