Houston Chronicle

EU outlines regulations to govern the use of AI

- By Adam Satariano

The European Union on Wednesday unveiled strict regulations to govern the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.

Presented at a news briefing in Brussels, the draft rules would set limits around the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, school enrollment selections and the scoring of exams. It would also cover the use of artificial intelligence by law enforcement and court systems — areas considered “high risk” because they could threaten people’s safety or fundamental rights.

Some uses would be banned altogether, including live facial recognition in public spaces, though there would be some exemptions for national security and other purposes.

The rules have far-reaching implications for major technology companies including Amazon, Google, Facebook and Microsoft that have poured resources into developing artificial intelligence, but also scores of other companies that use the technology in health care, insurance and finance. Governments have used versions of the technology in criminal justice and allocating public services.

Companies that violate the new regulations, which are expected to take several years to debate and implement, could face fines of up to 6 percent of global sales.

Artificial intelligence — where machines are trained to learn how to perform jobs on their own by studying huge volumes of data — is seen by technologists, business leaders and government officials as one of the world’s most transformative technologies.

But as the systems become more sophisticated, it can be harder to determine why the technology is making a decision, a problem that could get worse as computers become more powerful. Researchers have raised ethical questions about its use, suggesting that it could perpetuate existing biases in society, invade privacy, or result in more jobs being automated.

“On artificial intelligence, trust is a must, not a nice to have,” Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”

In introducing the draft rules, the European Union is attempting to further establish itself as the world’s most aggressive watchdog of the technology industry. The bloc has already enacted the world’s most far-reaching data-privacy regulations, and is also debating additional antitrust and content-moderation laws.

In Washington, the risks of artificial intelligence are also being considered. This week, the Federal Trade Commission warned against the sale of artificial intelligence systems that use racially biased algorithms, or ones that could “deny people employment, housing, credit, insurance, or other benefits.”
