The Citizen (Gauteng)

Facial recognition set to be constrained in Europe


Facial recognition and other high-risk artificial intelligence applications will face strict constraints under new rules unveiled by the European Union that threaten hefty fines for companies that don't comply.

The European Commission, the bloc's executive body, yesterday proposed measures that would ban certain AI applications in the EU, including those that exploit vulnerable groups, deploy subliminal techniques or score people's social behaviour.

The use of facial recognition and other real-time remote biometric identification systems by law enforcement would also be prohibited, unless used to prevent a terror attack, find missing children or tackle other public security emergencies.

Facial recognition is a particularly controversial form of AI.

Civil liberties groups warn of the dangers of discrimination or mistaken identities when law enforcement uses the technology, which sometimes misidentifies women and people with darker skin tones.

European Digital Rights (EDRi) has warned against loopholes that would allow the technology to be used under public security exceptions.

Other high-risk applications that could endanger people's safety or legal status – such as self-driving cars, employment or asylum decisions – would have to undergo checks of their systems before deployment and face other strict obligations.

The measures are the latest attempt by the bloc to leverage the power of its vast, developed market to set global standards that companies around the world are forced to follow, much like with its General Data Protection Regulation.

The US and China are home to the biggest commercial AI companies – Google and Microsoft, Beijing-based Baidu, and Shenzhen-based Tencent – but if they want to sell to Europe's consumers or businesses, they may be forced to overhaul operations.

Key points:

Fines of 6% of revenue are foreseen for companies which don't comply with bans or data requirements.

Smaller fines are foreseen for companies which don't comply with other requirements spelled out in the new rules.

The legislation applies to developers and users of high-risk AI systems.

Providers of risky AI must subject it to a conformity assessment before deployment.

Other obligations for high-risk AI include the use of high-quality datasets, ensuring traceability of results, and human oversight to minimise risk.

The criteria for "high-risk" applications include intended purpose, the number of potentially affected people, and the irreversibility of harm.

AI applications with minimal risk, such as AI-enabled video games or spam filters, are not subject to the new rules.

National market surveillance authorities will enforce the rules.

The EU is to establish a board of regulators to ensure harmonised enforcement across Europe.

The rules would still need approval by the European Parliament and the bloc's member states, a process that can take years.

