PC Pro

The future of AI

Will a new agreement keep AI in check?


Artificial intelligence is creeping into every aspect of our lives – and that terrifies plenty of people. Even setting aside fears of a technological apocalypse brought about by machine learning, AI could further embed existing problems in society. AI-based systems are already working on medical diagnoses and bail decisions – even though they can mirror the biases and assumptions buried in the data they’re trained on. For example, a 2016 study by ProPublica in the US revealed racial bias in an algorithm used to determine the risk of offenders committing crimes in the future.

Making AI behave better than humans isn’t easy, but a group of tech rights organisations has taken the first step by laying out a detailed set of warnings and advice for developers and engineers to consider when building systems. It’s called the Toronto Declaration for Machine Learning.

What does the Toronto Declaration say?

The declaration asks anyone developing or using AI to consider human rights, and make an effort to ensure their algorithms and systems are balanced and fair. It calls on governments and private companies to identify potential pitfalls and prevent them, and to hold responsible anyone who causes harm via AI.

How can governments monitor AI?

This is a frequent criticism of calls for ethics in AI: it’s too difficult for non-tech-savvy politicians and their staff to understand complicated algorithms and systems, so we pretend they’re black boxes where data goes in and answers come out. This is the wrong approach, the declaration stresses, and it calls for any government using machine learning technologies to understand how those systems work before rolling them out. It also calls for regular impact assessments during use, and for assurance that those affected have real recourse. In other words, the declaration asks governments to do their jobs and run consultations, perform audits and enforce regulations – as they already do for non-AI topics. And if a government can’t explain how an AI system works or why it makes certain decisions, it simply shouldn’t use it.

Can a document make AI and its developers behave?

Perhaps the key question. So far it’s been signed by Amnesty International, Access Now and Human Rights Watch, as well as Wikimedia, the organisation behind Wikipedia. But there’s nothing to enforce compliance. That said, such a declaration is a starting point, offering guidance to firms that want to do better and governments that are looking to legislate.

So our best bet is just asking nicely for AI to behave?

The next time someone claims algorithms are fair decision-making machines that act with unbiased precision, forward them the declaration. It lays out exactly how to go about auditing and improving algorithms and AI, offering practical solutions. The aim is to spark and shape a discussion, and hopefully push developers towards more ethical, considered AI. Read the declaration in full here: pcpro.link/287dec.
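To see what such an audit might look like in practice, here’s a minimal sketch in the spirit of ProPublica’s 2016 analysis: compare a risk-scoring model’s false-positive rates across demographic groups. Everything below – the function name, the group labels and the toy data – is an illustrative assumption, not code from the declaration.

# A minimal bias-audit sketch (illustrative only): measure how often a risk
# model flags people as high risk who did not go on to reoffend, per group.
from collections import defaultdict

def false_positive_rates(records):
    # records: iterable of (group, predicted_high_risk, reoffended) tuples
    flagged = defaultdict(int)   # non-reoffenders wrongly flagged, per group
    innocent = defaultdict(int)  # all non-reoffenders, per group
    for group, predicted, actual in records:
        if not actual:
            innocent[group] += 1
            if predicted:
                flagged[group] += 1
    return {g: flagged[g] / innocent[g] for g in innocent}

# Toy data for two hypothetical groups, A and B
sample = [
    ("A", True, False), ("A", True, False), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, False),
]
print(false_positive_rates(sample))
# Roughly {'A': 0.67, 'B': 0.33} – group A is wrongly flagged twice as
# often, the kind of disparity an impact assessment should catch.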
