PC Pro

PC Probe: Who’s watching the AI?

As everyday decisions are increasingly being delegated to computer systems, Stewart Mitchell investigates who’s accountable for the software’s verdicts

An insurance firm’s algorithm decides that you’re too much of a risk for health insurance; your credit limit is suddenly cut by a bank’s automated system; the judge refuses you bail because a computer decides you pose a serious flight risk. As artificial intelligence is increasingly used to replace human decision-making, an obvious question arises: who’s keeping an eye on the digital decision-makers?

According to AI specialists, the answer is nobody. Developers are building systems that make potentially life-changing decisions, but have no external oversight or standardisation. “It’s not simply that AI algorithms can make mistakes, but that the whole ecosystem is a closed book, with little understanding of how decisions that have real-world impacts on people are actually made,” said Sandra Wachter, a data ethics specialist at London’s Alan Turing Institute (ATI). “Judges could also use AI to decide whether people should be given parole and the chances of them re-offending, so really important decisions are being delegated to AI systems that are hard to scrutinise. We don’t know how they work and don’t have any safeguards in place to make sure the systems are accountable and fair, and this is a major issue.”

ABOVE Are AI systems making life-changing decisions without enough oversight?

Wachter believes that UK data laws leave consumers disadvantaged because, although companies might be required to inform people that AI was used in the decision-making process, there’s no way of reviewing that process.

Wachter and her colleagues at the ATI and Oxford Internet Institute have written a report titled “Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation (GDPR)”, which highlights the lack of transparency. “If I applied for a credit card and was declined, I should have the right to know how the algorithm made its decision – so what kind of data was used, what were the criteria, what were the weightings and how was the decision approached?” she said.

“We cannot say there’s a right to explanation or a right to be informed under the proposed GDPR legislation. The problem is we don’t have standards so we don’t have certain techniques that are transparent and fair.”

Removing the human factor

AI is in widespread use, and it’s becoming omnipresent. A report from consultancy firm Accenture predicts that AI may significantly contribute to economic growth in the UK, adding $814 billion to the economy by 2035. “AI is poised to transform business in ways we’ve not seen since the impact of computer technology in the late 20th century,” said Paul Daugherty, CTO at Accenture.

It’s little wonder that businesses are keen to roll out AI quickly, but is accountability being sacrificed in the rush? One obstacle to transparency is that companies building AI systems have no incentive to open their software to inspection, preferring to keep their intellectual property to themselves. The IP is valuable, but experts believe that there’s too much scope for abuse – and think a regulatory body or inspectorate is essential to police the technology.

Yet even if companies could be convinced – or forced – to break the seals on their software, there’s no guarantee an inspectorate would be able to make sense of what they see. “Some systems are difficult to understand, and certainly deep-learning systems are quite opaque in the way they make decisions, so achieving transparency technically as well as preserving commercial confidentiality may well be difficult,” said Professor Alan Winfield from Bristol Robotics Lab.

“If you put something like an aircraft data recorder in systems – whether they’re driverless cars or software AI systems – you could record exactly what happened at each moment in time and with each input to the system and the outputs,” he said.
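In practice, such a recorder could be little more than an append-only log that captures every input and output alongside a timestamp and a model version. The short Python sketch below is purely illustrative – the DecisionRecorder class, its field names and the toy credit decision are our own assumptions, not any existing standard or product.

# Minimal sketch of the "aircraft data recorder" idea: an append-only log
# of every input and output an AI system handles, so a decision can be
# reconstructed after the fact. Names here are illustrative only.

import json
import time
from typing import Any, Callable


class DecisionRecorder:
    """Append-only log of (inputs, output) pairs for later audit."""

    def __init__(self, log_path: str):
        self.log_path = log_path

    def record(self, model_id: str, inputs: dict, output: Any) -> None:
        entry = {
            "timestamp": time.time(),   # when the decision was made
            "model_id": model_id,       # which model/version decided
            "inputs": inputs,           # exactly what the system saw
            "output": output,           # exactly what it decided
        }
        # One JSON object per line; the file is only ever appended to.
        with open(self.log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")


def with_recording(model_id: str, decide: Callable[[dict], Any],
                   recorder: DecisionRecorder) -> Callable[[dict], Any]:
    """Wrap any decision function so every call is logged."""
    def wrapped(inputs: dict) -> Any:
        output = decide(inputs)
        recorder.record(model_id, inputs, output)
        return output
    return wrapped


if __name__ == "__main__":
    # Example: wrapping a toy credit decision so every verdict is logged.
    recorder = DecisionRecorder("decisions.jsonl")
    credit_decision = with_recording(
        "credit-model-v1",
        lambda inputs: "approve" if inputs["score"] > 600 else "decline",
        recorder,
    )
    print(credit_decision({"score": 550, "income": 28000}))

Because the log is only ever appended to, an investigator could later reconstruct exactly what the system saw and what it decided, much as crash investigators replay a flight recorder.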

Simply recording the process may not provide sufficient transparency, however, with concerns that, as machine learning refines the systems, it becomes unclear which factors are affecting decisions. AI could change the weighting given to an element in the decision-making process based on previous results – so the same inputs could have a different output from one day to the next.

Such a situation makes transparency more difficult, but platform developers could still include checks and balances – a monitoring tool that can forensically inspect what’s happening at a given time, say. “If it’s a learning system that’s continuously evolving, then the system may – and this is hypothetical – make a different decision today than it actually made yesterday,” Winfield said. “To overcome that, you need to take a snapshot of the system periodically – these are deep technical problems, but there’s no doubt we need to try and solve them.

“Even with deep learning, it’s essentially a large neural network trained with large datasets. I don’t believe that it’s impossible to build systems that can allow us to get an explanation for why they’ve made a decision.”
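Winfield’s snapshot idea can be sketched just as briefly. In the hypothetical Python example below, a model’s parameters are saved to disk with a version tag before they drift any further, so an auditor can later reload that exact version and replay a decision – the SnapshotStore class, file layout and toy scoring rule are assumptions made for illustration only.

# Sketch of periodic snapshots: save the live parameters under a version
# tag so that yesterday's decision can be replayed against yesterday's
# weights, even after the system has kept learning. Illustrative only.

import pickle
import time
from pathlib import Path


class SnapshotStore:
    """Keeps timestamped copies of a model's parameters for later replay."""

    def __init__(self, directory: str):
        self.directory = Path(directory)
        self.directory.mkdir(parents=True, exist_ok=True)

    def save(self, params: dict) -> str:
        version = f"snapshot-{int(time.time())}"
        with open(self.directory / f"{version}.pkl", "wb") as f:
            pickle.dump(params, f)
        return version

    def load(self, version: str) -> dict:
        with open(self.directory / f"{version}.pkl", "rb") as f:
            return pickle.load(f)


def decide(params: dict, inputs: dict) -> str:
    """Toy weighted-sum scoring rule standing in for a real model."""
    score = sum(params.get(k, 0.0) * v for k, v in inputs.items())
    return "approve" if score > params.get("threshold", 1.0) else "decline"


if __name__ == "__main__":
    store = SnapshotStore("model_snapshots")

    # Yesterday's weights are snapshotted before the system keeps learning.
    yesterday = {"income": 0.00005, "score": 0.002, "threshold": 2.0}
    version = store.save(yesterday)

    # Today the weights have drifted, so the same applicant may be decided differently.
    today = {"income": 0.00002, "score": 0.002, "threshold": 2.0}
    applicant = {"income": 30000, "score": 620}
    print("today:", decide(today, applicant))

    # An auditor reloads the snapshot and replays yesterday's decision exactly.
    print("replayed:", decide(store.load(version), applicant))

Run as written, the toy example approves the applicant under yesterday’s weights but declines them under today’s – exactly the kind of drift that makes a versioned snapshot essential for any meaningful audit.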

Fear of regulation

Academics may be pushing for greater transparency, but the industry isn’t so keen. Not surprisingly, industry players are seeking a soft approach that would allow the sector to grow without the hassle of regulated consumer protection.

“One thing we must not do is put too much red tape around this at the wrong time and stop things developing,” Dr Rob Buckingham, director of OC Robotics, told a Commons committee which, in late 2016, published a report on robotics and artificial intelligence.

“One of the key points is to make sure that we’re doing testing in the UK transparently and bringing the industry here so that we understand what’s going on, and that we start to apply the regulation appropriately when we have more information about what the issues are.”

Trade group TechUK also warned that “overregulation or legislation of robotics and AI at this stage of its development risks stalling or even stifling innovation”.

However, there’s now momentum from non-profit groups for a set of standards or regulations that could work in practice. “The technology needs to be sufficiently transparent to give answers as to how it works when required and the regulatory structures need to be in place so that, if necessary, companies and suppliers are compelled to give those answers,” said Winfield.

Working with the IEEE, Winfield is proposing an auditing programme in which trusted bodies would be able to examine the internal workings of AI systems, and report top-level results that don’t reveal commercial secrets. “In a sense, you can get around the problem of commercial confidentiality by setting up an independent inspectorate, and the deal is that the inspectorate conducts its work within strict boundaries of confidentiality,” he said, adding that there are plenty of examples.

“In existing safety-critical systems like railway software, there’s an inspectorate that’s allowed access to commercially sensitive material because it’s well understood that that is a confidential exchange and only the abstract or high-level details of what went wrong are made public, but not the intellectual property behind the decision.”

Systems could even be issued with a certificate for meeting criteria for transparency before being rolled out. “You could have a body that would issue certifications,” said the ATI’s Wachter. “So you could have a certification system in place before an algorithm is deployed, with a seal that says it’s fair and accountable.”


