Orlando Sentinel

EU proposal sets limits on artificial intelligence

Rules would have far-reaching implications for major technology companies

- By Adam Satariano

The European Union unveiled strict regulations on Wednesday to govern the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.

The draft rules would set limits around the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrollment selections and the scoring of exams. They would also cover the use of artificial intelligence by law enforcement and court systems — areas considered “high risk” because they could threaten people’s safety or fundamental rights.

Some uses would be banned altogether, including live facial recognition in public spaces, though there would be several exemptions for national security and other purposes.

The 108-page policy is an attempt to regulate an emerging technology before it becomes mainstream. The rules would have far-reaching implications for major technology companies including Amazon, Google, Facebook and Microsoft, which have poured resources into developing artificial intelligence, but also for scores of other companies that use the software to develop medicine, underwrite insurance policies and judge creditworthiness. Governments have used versions of the technology in criminal justice and in allocating public services like income support.

Companies that violate the new regulations — which could take several years to move through the European Union policymaking process — could face fines of up to 6% of global sales.

“On artificial intelligence, trust is a must, not a nice-to-have,” Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”

The European Union regulations would require companies providing artificial intelligence in high-risk areas to provide regulators with proof of its safety, including risk assessments and documentation explaining how the technology is making decisions. The companies must also guarantee human oversight in how the systems are created and used.

Some applications, like chatbots that provide humanlike conversation in customer service situations, and software that creates hard-to-detect manipulated images like “deepfakes,” would have to make clear to users that what they are seeing is computer generated.

For the past decade, the European Union has been the world’s most aggressive watchdog of the technology industry, with its policies often used as blueprints by other nations.

The bloc has already enacted the world’s most far-reaching data-privacy regulations, and is debating additional antitrust and content-moderation laws.
