The future of AI
Will a new agreement keep AI in check?
Artificial intelligence is creeping into every aspect of our lives – and that terrifies plenty of people. Even setting aside fears of a technological apocalypse brought about by machine learning, AI could further embed existing problems in society. AI-based systems are already working on medical diagnoses and bail decisions – even though they can mirror the biases and assumptions baked into the data they're trained on. For example, a 2016 study by ProPublica in the US revealed racial bias in an algorithm used to predict the risk of offenders reoffending.
Making AI behave better than humans isn’t easy, but a group of tech rights organisations has taken the first step by laying out a detailed set of warnings and advice for developers and engineers to consider when building systems. It’s called the Toronto Declaration for Machine Learning.
What does the Toronto Declaration say?
The declaration asks anyone developing or using AI to consider human rights, and make an effort to ensure their algorithms and systems are balanced and fair. It calls on governments and private companies to identify potential pitfalls and prevent them, and to hold responsible anyone who causes harm via AI.
How can governments monitor AI?
This is a frequent criticism of calls for ethics in AI: it's too difficult for non-tech-savvy politicians and their staff to understand complicated algorithms and systems, so they're treated as black boxes where data goes in and answers come out. This is the wrong approach, the declaration stresses, and it calls for any government using machine learning technologies to understand how systems work before rolling them out. It also calls for regular impact assessments during use, and for assurance that those affected have real recourse. In other words, the declaration asks governments to do their jobs: run consultations, perform audits and enforce regulations, as they already do for non-AI-related topics. And if a government can't explain how an AI system works or why it makes certain decisions, it simply shouldn't use it.
Can a document make AI and its developers behave?
Perhaps the key question. So far it's been signed by Amnesty International, Access Now and Human Rights Watch, as well as Wikimedia, the organisation behind Wikipedia. But there's nothing to enforce compliance. That said, such a declaration is a starting point, offering guidance to firms that want to do better and governments looking to legislate.
So our best bet is just asking nicely for AI to behave?
The next time someone claims algorithms are fair decision-making machines that act with unbiased precision, forward them the declaration. It lays out exactly how to go about auditing and improving algorithms and AI, offering practical solutions. The aim is to spark and shape a discussion, and hopefully push developers towards more ethical, considered AI. Read the declaration in full here: pcpro.link/287dec.