EU seeks to outlaw live facial scanning
Draft proposal part of AI regulations
Risky uses of artificial intelligence that threaten people’s safety or rights such as live facial scanning should be banned or tightly controlled, European Union officials said Wednesday as they outlined an ambitious package of proposed regulations to rein in the rapidly expanding technology.
The draft regulations from the EU’s executive commission include rules for applications deemed high risk such as AI systems to filter out school, job or loan applicants. They would also ban artificial intelligence outright in a few cases considered too risky, such as government “social scoring” systems that judge people based on their behavior.
The proposals are the 27-nation bloc’s latest move to maintain its role as the world’s standard-bearer for technology regulation, as it tries to keep up with the world’s two big tech superpowers, the U.S. and China. EU officials say they are taking a four-level “risk-based approach” that seeks to balance important rights such as privacy against the need to encourage innovation.
“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the European Commission’s executive vice president for the digital age, said in a statement. “By setting the standards, we can pave the way for ethical technology worldwide and ensure that the EU remains competitive along the way.”
To be sure, the draft rules have a long way to go before they take effect. They need to be reviewed by the European Parliament and the European Council and could be amended in a process that could take several years, though officials declined to give a specific time frame.
Previous EU tech regulation efforts have been far-reaching and influential, earning the bloc a reputation as a pioneer. Vestager, also the bloc’s competition chief, filed aggressive antitrust challenges against Silicon Valley giants like Google years before such action became fashionable. The EU was also early to the data privacy battle with stringent rules known as the General Data Protection Regulation, or GDPR, that became the de facto global standard.
However, results have been mixed: Google still retains its online dominance, and EU privacy cases against global tech companies are backlogged. Officials are also working on updating the EU’s digital rulebook to protect internet users from harmful material and rogue traders.
Under the AI proposals, unacceptable uses would also include manipulating behavior, exploiting children’s vulnerabilities or using subliminal techniques.
“It can be a case where a toy uses voice systems to manipulate a child into doing something dangerous,” Vestager told a media briefing. “Such uses have no place in Europe and therefore we propose to ban them.”
The proposals include a prohibition in principle on controversial “remote biometric identification,” such as the use of live facial recognition to pick people out of crowds in real time, because “there is no room for mass surveillance in our society,” Vestager said.
There will, however, be an exception for narrowly defined law enforcement purposes such as searching for a missing child or a wanted person or preventing a terror attack. But some EU lawmakers and digital rights groups want the carve-out removed over fears it could be used by authorities to justify widespread future use of the technology, which they say is intrusive and inaccurate.
Biometric and mass surveillance technology “in our public spaces undermines our freedom and threatens our open societies,” said Patrick Breyer, an EU Pirate party lawmaker. “We cannot allow the discrimination of certain groups of people and the false incrimination of countless individuals by these technologies.”
Other AI applications are considered high risk because they “interfere with important aspects of our lives,” Vestager said, including criminal courts, law enforcement, critical infrastructure such as transportation — think software for self-driving cars — and the management of migration, asylum and border control. But their use is allowed provided operators follow rules, including using high-quality data to minimize discrimination and keeping a human in charge.
Herbert Swaniker, a technology lawyer at law firm Clifford Chance, compared the proposals to GDPR, which affects companies worldwide.