EU PROPOSES RULES FOR HIGH-RISK ARTIFICIAL INTELLIGENCE USES
European Union officials unveiled proposals for reining in high-risk uses of artificial intelligence such as live facial scanning that could threaten people’s safety or rights.
The draft regulations from the EU’s executive commission include rules on the use of the rapidly expanding technology in systems that filter out school, job or loan applicants. They also would ban artificial intelligence outright in a few cases considered too risky, such as “social scoring” systems that judge people based on their behavior and physical traits.
The ambitious proposals are the 27-nation bloc’s latest move to maintain its role as the world’s standard-bearer for technology regulation,
putting it ahead of the world’s two big tech superpowers, the U.S. and China. EU officials say they are taking a “risk-based approach” as they try to balance the need to protect rights such as data privacy against the need to encourage innovation.
“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the European Commission’s executive vice president for the digital age, said in a statement. “By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”
The proposals also include a prohibition in principle on controversial “remote biometric identification,” such as the use of live facial recognition to pick people out of crowds in real time, because “there is no room for mass surveillance in our society,” Vestager said in a media briefing.
There will, however, be an exception for narrowly defined law enforcement purposes, such as searching for a missing child or a wanted person, or preventing a terror attack or threat.
But some EU lawmakers and digital rights groups called for the carve-out to be removed over fears it could be used to justify widespread future use of the intrusive technology.
The draft regulations also cover AI applications that pose “limited risk,” such as chatbots, which would have to be labeled so people know they are interacting with a machine. Most AI applications will be unaffected or covered by existing consumer protection rules.