PC Probe: Who’s watching the AI?
As everyday decisions are increasingly being delegated to computer systems, Stewart Mitchell investigates who’s accountable for the software’s verdicts
An insurance firm’s algorithm decides that you’re too much of a risk for health insurance; your credit limit is suddenly cut by a bank’s automated system; the judge refuses you bail because a computer decides you pose a serious flight risk. As artificial intelligence is increasingly used to replace human decision-making, an obvious question arises: who’s keeping an eye on the digital decision-makers?
According to AI specialists, the answer is nobody. Developers are building systems that make potentially life-changing decisions, but have no external oversight or standardisation. “It’s not simply that AI algorithms can make mistakes, but that the whole ecosystem is a closed book, with little understanding of how decisions that have real-world impacts on people are actually made,” said Sandra Wachter, a data ethics specialist at London’s Alan Turing Institute (ATI). “Judges could also use AI to decide whether people should be given parole and the chances of them re-offending, so really important decisions are being delegated to AI systems that are hard to scrutinise. We don’t know how they work and don’t have any safeguards in place to make sure the systems are accountable and fair, and this is a major issue.”
ABOVE Are AI systems making life-changing decisions without enough oversight?
Wachter believes that UK data laws leave consumers disadvantaged because, although companies might be required to inform people that AI was used in the decision-making process, there’s no way of reviewing that process.
Wachter and her colleagues at the ATI and Oxford Internet Institute have written a report titled “Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation (GDPR)”, which highlights the lack of transparency. “If I applied for a credit card and was declined, I should have the right to know how the algorithm made its decision – so what kind of data was used, what were the criteria, what were the weightings and how was the decision approached?” she said.
“We cannot say there’s a right to explanation or a right to be informed under the proposed GDPR legislation. The problem is we don’t have standards so we don’t have certain techniques that are transparent and fair.”
Removing the human factor
AI is in widespread use, and it’s fast becoming ubiquitous. A report from consultancy firm Accenture predicts that AI could add $814 billion to the UK economy by 2035. “AI is poised to transform business in ways we’ve not seen since the impact of computer technology in the late 20th century,” said Paul Daugherty, CTO at Accenture.
It’s little wonder that businesses are keen to roll out AI quickly, but is accountability being sacrificed in the rush? One obstacle to transparency is that companies building AI systems have no incentive to open their software to inspection, preferring to keep their intellectual property to themselves. The IP is valuable, but experts believe that there’s too much scope for abuse – and think a regulatory body or inspectorate is essential to police the technology.
Yet even if companies could be convinced – or forced – to break the seals on their software, there’s no guarantee an inspectorate would be able to make sense of what it sees. “Some systems are difficult to understand, and certainly deep-learning systems are quite opaque in the way they make decisions, so achieving transparency technically, as well as preserving commercial confidentiality, may well be difficult,” said Professor Alan Winfield from Bristol Robotics Lab.
“If you put something like an aircraft data recorder in systems – whether they’re driverless cars or software AI systems – you could record exactly what happened at each moment in time, and with each input to the system and the outputs,” he said.
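Winfield’s “aircraft data recorder” idea can be sketched in a few lines of Python. This is a purely hypothetical illustration – the wrapper, the toy credit-scoring function and its weightings are all invented for the example – but it shows the principle: every input and output passes through a logger that an auditor could later replay.

```python
import json
import time

def flight_recorder(model, log_path="decisions.jsonl"):
    """Wrap a decision function so every input and output is
    appended to a log, flight-recorder style, with a timestamp."""
    def wrapped(features):
        decision = model(features)
        with open(log_path, "a") as log:
            log.write(json.dumps({
                "time": time.time(),
                "inputs": features,
                "output": decision,
            }) + "\n")
        return decision
    return wrapped

# Hypothetical credit-scoring model: approve if the weighted score
# of two normalised features clears a threshold.
def credit_model(features):
    score = 0.6 * features["income"] + 0.4 * features["history"]
    return "approve" if score > 0.5 else "decline"

audited_model = flight_recorder(credit_model)
print(audited_model({"income": 0.8, "history": 0.3}))  # "approve", and logged
```

The point isn’t the scoring logic, which is deliberately trivial, but that the recording happens outside the model, so an inspectorate could examine the log without ever seeing the commercially sensitive internals.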
Simply recording the process may not provide sufficient transparency, however: as machine learning refines a system, it can become unclear which factors are affecting its decisions. AI could change the weighting given to an element in the decision-making process based on previous results – so the same inputs could produce a different output from one day to the next.
Such a situation makes transparency more difficult, but platform developers could still include checks and balances – a monitoring tool that can forensically inspect what’s happening at a given time, say. “If it’s a learning system that’s continuously evolving, then the system may – and this is hypothetical – make a different decision today than it actually made yesterday,” Winfield said. “To overcome that, you need to take a snapshot of the system periodically – these are deep technical problems, but there’s no doubt we need to try and solve them.
“Even with deep learning, it’s essentially a large neural network trained with large datasets. I don’t believe that it’s impossible to build systems that can allow us to get an explanation for why they’ve made a decision.”
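The drift-and-snapshot problem Winfield describes can be illustrated with a toy model. Everything here is invented for the sketch – a trivial linear scorer standing in for a learning system – but it shows both halves of the argument: an online update makes identical inputs yield a different verdict, and a periodic snapshot lets an auditor reproduce yesterday’s decision.

```python
import copy

class OnlineModel:
    """Toy linear scorer whose weights shift as it learns, so
    identical inputs can yield different outputs over time."""
    def __init__(self):
        self.weights = {"income": 0.6, "history": 0.4}

    def decide(self, features):
        score = sum(self.weights[k] * v for k, v in features.items())
        return "approve" if score > 0.5 else "decline"

    def learn(self, feature, delta):
        self.weights[feature] += delta  # online update shifts behaviour

model = OnlineModel()
applicant = {"income": 0.8, "history": 0.3}

snapshot = copy.deepcopy(model)    # periodic snapshot kept for auditors
print(model.decide(applicant))     # "approve"

model.learn("income", -0.3)        # overnight learning changes the weights
print(model.decide(applicant))     # "decline" - same inputs, new outcome
print(snapshot.decide(applicant))  # "approve" - snapshot replays the old decision
```

Real deep-learning systems are vastly harder to snapshot and interpret than a two-weight scorer, which is precisely the “deep technical problem” Winfield refers to, but the mechanism of drift and the value of a frozen copy are the same.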
Fear of regulation
Academics may be pushing for greater transparency, but the industry isn’t so keen. Not surprisingly, industry players are seeking a soft approach that would allow the sector to grow without the burden of regulated consumer protection.
“One thing we must not do is put too much red tape around this at the wrong time and stop things developing,” Dr Rob Buckingham, director of OC Robotics, told a Commons committee which, in late 2016, published a report on robotics and artificial intelligence.
“One of the key points is to make sure that we’re doing testing in the UK transparently and bringing the industry here so that we understand what’s going on, and that we start to apply the regulation appropriately when we have more information about what the issues are.”
Trade group TechUK also warned that “overregulation or legislation of robotics and AI at this stage of its development risks stalling or even stifling innovation”.
However, there’s now momentum from non-profit groups for a set of standards or regulations that could work in practice. “The technology needs to be sufficiently transparent to give answers as to how it works when required and the regulatory structures need to be in place so that, if necessary, companies and suppliers are compelled to give those answers,” said Winfield.
Working with the IEEE, Winfield is proposing an auditing programme in which trusted bodies would be able to examine the internal workings of AI systems, and report top-level results that don’t reveal commercial secrets. “In a sense, you can get around the problem of commercial confidentiality by setting up an independent inspectorate, and the deal is that the inspectorate conducts its work within strict boundaries of confidentiality,” he said, adding that there are plenty of examples.
“In existing safety-critical systems like railway software, there’s an inspectorate that’s allowed access to commercially sensitive material because it’s well understood that that is a confidential exchange and only the abstract or high-level details of what went wrong are made public, but not the intellectual property behind the decision.”
Systems could even be issued with a certificate for meeting criteria for transparency before being rolled out. “You could have a body that would issue certifications,” said the ATI’s Wachter. “So you could have a certification system in place before an algorithm is deployed, with a seal that says it’s fair and accountable.”