The Pak Banker

The EU is funding dystopian Artificial Intelligence projects

- Daniel Leufer and Fieke Jansen

Discussions on the negative impact of Artificial Intelligence on society include horror stories plucked either from China’s high-tech surveillance state, with its controversial social credit system, or from the US, with its recidivism algorithms and predictive policing.

Typically, Europe is excluded from these stories, due to the perception that EU citizens are protected from such AI-fuelled nightmares by the GDPR, or because there is simply no horror-inducing AI deployed across the continent.

In contrast to this perception, journalists and NGOs have shown that imperfect and ethically questionable AI systems, such as facial recognition, fraud detection and smart (a.k.a. surveillance) cities, are also in use across Europe. For example, the UK police are using facial recognition to monitor protests and soccer matches; the Dutch government is being sued over SyRI, a risk-scoring algorithm that targets the poor; and the Polish Ministry of Labour and Social Policy has introduced a controversial system that profiles unemployed people to determine the type of assistance a person can obtain from local labour offices.

Meanwhile, AI systems like these are on track to proliferate this decade: one of the three ‘pillars’ of the European Commission’s plan on AI is to boost “AI uptake across the economy, both by the private and public sector.”

To that end, the Commission is investing in the development of AI systems through funding programs such as Horizon 2020, which will have invested nearly €80 billion over seven years (2014 to 2020), with a significant portion of that going to so-called ‘artificial intelligence’ projects.

According to last week’s leaked white paper on AI regulation, there are plans to increase this funding and further invest in “targeted cloud-based artificial intelligence services,” to “offer world-leading master programs in artificial intelligence,” and to ensure “access to finance for artificial intelligence innovators.”

Amid this proliferation, some EU residents might be comforted by the Commission’s stated commitment to ‘Trustworthy AI,’ most notably through its Ethics Guidelines for Trustworthy AI and the potential influence they might have on the fabled ‘AI Regulation’ promised to come in the first 100 days of the new Commission mandate.

Indeed, it seems clear that public procurement in general, and these EU funding mechanisms in particular, have huge potential to promote the development of ‘trustworthy’ AI systems: that is, systems that respect human rights, adhere to ethical guidelines and promote human agency.

The EU’s commitment to ‘trustworthy’ AI sounds noble, but the history of technological investments made under Horizon 2020 casts doubt on these intentions.

Take, for example, iBorderCtrl, a Horizon 2020-funded project that aims to create an automated border security system to detect deception based on facial recognition technology and the measurement of micro-expressions. In short, the EU spent €4.5 million on a project that ‘detects’ whether a visitor to the continent is lying or not by asking them 13 questions in front of a webcam.

The historical practice of lie detection lacks substantial scientific support, and the AI technologies used here to analyse micro-expressions are just as questionable.

To make matters worse, the Commission is ignoring the transparency criteria outlined in the Ethics Guidelines by refusing to publish certain documents, including an ethics assessment, on the grounds that the ethics report and PR strategy are “commercial information” of the companies involved and of “commercial value”.

Another example of untrustworthy AI funded by Horizon 2020 is the SEWA project. This project received €3.6 million to develop technology that can read the depths of human sentiment and emotions ‘in the wild’, for the ultimate purpose of more effectively marketing products to consumers using an ad recommendation engine.

Indeed, the SEWA project was singled out by Shoshana Zuboff in her book, The Age of Surveillance Capitalism, as pioneering techniques for harvesting personal data to be processed for behavioural prediction and manipulation.

Not only is there doubt among the scientific community that human emotions can be reliably inferred from biometric analysis, but one can also question how ‘trustworthy’ it is to develop technology that exploits people’s emotional states for commercial gain.

In the leaked White Paper, the Commission is caught in a dilemma over whether to introduce a three-to-five-year ban on facial recognition: on the one hand, such a ban would allow time to develop safeguards against abuse; on the other, the Commission worries that a ban “might hamper the development and uptake of this technology.”

Just recently, the US Chief Technology Officer Michael Kratsios urged the EU to “avoid heavy-handed innovation-killing models” in its AI regulation, but we must seriously ask whether certain forms of innovation, in the domains of mass surveillance and pseudoscientific lie detection, might in fact require a heavy hand.

Some uses of AI may be untrustworthy by definition, and we need to see a commitment from the EU not to blindly promote ‘AI uptake’ without consideration of its impact.

Projects such as iBorderCtrl and SEWA raise a larger question: if the criteria for funding AI are not based on a technology’s trustworthiness, ethics or innovative capabilities, then which drivers inform these decisions?
