The National - News

New law for AI to explain the inexplicable

Refused that bank loan? EU launches bid to find out how computers make decisions, writes Robert Matthews

- Robert Matthews is Visiting Professor of Science at Aston University, Birmingham, UK

When 2001: A Space Odyssey opened in cinemas 50 years ago this month, it left audiences baffled. Its mind-bending special effects, enigmatic ending and main character HAL 9000 – a psychotic computer – were too much to handle. More than 200 people walked out of the first showing, and critics panned it.

Half a century on, 2001 is widely regarded as the greatest science fiction movie – and one of its main themes is about to become a real-life legal issue of global proportions. When the European Union’s General Data Protection Regulation comes into force next month, companies that use computer algorithms to make decisions about, say, giving someone financial credit will have to be able to explain why the decision was made.

European citizens will have the right to insist that companies open the “black box” and show why their algorithm turned them down. Yet when that black box is a neural network, there is no hope of meeting the demand.

Let’s use the example of HAL – in the film, “he” is the artificial intelligence looking after the first mission to Jupiter. On the trip, the crew start to suspect HAL is malfunctioning – suspicions confirmed by mission control on Earth.

They are instructed to disconnect HAL’s higher functions. But before they can do so, HAL takes action, with murderous consequences. Movie-goers get no explanation as to why – all the more perplexing given the computer’s stated infallibility.

A novel by Arthur C Clarke, published shortly after the film, claimed HAL had fallen prey to guilt caused by the clash between “his” role as a crew member and his secret orders not to tell his colleagues the mission’s true goal. But HAL is a so-called neural network computer, a real-life type of AI programmed to act like a network of brain cells, solving problems by learning from examples.

While neural networks have proved very good at tasks such as facial recognition, the technology has a big drawback – it is impossible to be sure how a neural network comes to its decisions. And under the new law, companies that cannot explain just this will face punitive fines.

No analysis of the trained network will reveal what is going on inside the black box because, like a real brain, a neural network gives no clue about the thought processes driving any act.
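To see why, here is a minimal sketch – not drawn from the article – of a toy network that learns a loan decision from a handful of invented examples. The applicant figures, network size and training settings are all assumptions made purely for illustration.

```python
# A toy neural network that "learns from examples" to approve or refuse loans.
# All figures are invented; this is an illustration, not a real credit model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training examples: [income, existing debt], both scaled 0-1,
# labelled 1 (approve) or 0 (refuse).
X = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.3, 0.9]])
y = np.array([[1.0], [1.0], [0.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of three units; the weights start random and are shaped by the data.
W1 = rng.normal(size=(2, 3))
W2 = rng.normal(size=(3, 1))

for _ in range(5000):                                # simple gradient-descent training loop
    h = sigmoid(X @ W1)                              # hidden-layer activations
    out = sigmoid(h @ W2)                            # the network's decisions, between 0 and 1
    grad_out = (out - y) * out * (1 - out)           # error signal at the output
    grad_hidden = (grad_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

applicant = np.array([[0.5, 0.8]])                   # a new, made-up applicant
print(sigmoid(sigmoid(applicant @ W1) @ W2))         # likely close to 0: refused
print(W1)
print(W2)  # the only "explanation" on offer: matrices of numbers, no stated reasons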

The regulation will also outlaw a slew of current practices, such as burying consent statements in the small print. And failure to comply is backed by a fine of up to €20 million (Dh90.26m) or 4 per cent of annual turnover, whichever is the larger.
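For a sense of scale, the cap works out roughly as below; the turnover figures are invented examples, not taken from any company.

```python
# Rough illustration of the fine described above: up to €20 million or
# 4 per cent of annual turnover, whichever is the larger.
def maximum_fine(annual_turnover_eur):
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

print(maximum_fine(100_000_000))    # €100m turnover: the €20m floor applies
print(maximum_fine(2_000_000_000))  # €2bn turnover: 4 per cent, or €80m, applies
```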

Scientists have wrestled with this problem for decades. In the 1970s, so-called expert systems seemed to offer a solution, as they were based on rules of logic whose conclusions could be analysed. But in the 1990s the limited abilities of expert systems, and the sheer effort of creating them, meant they lost out to neural networks, which need only off-the-shelf programs plus training data to teach them.
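The contrast is easiest to see side by side. Below is a minimal sketch of the old expert-system style of decision – the rules and thresholds are invented for illustration, not taken from any real lender – in which every conclusion can be traced back to the rules that produced it, unlike the trained network above.

```python
# Sketch of a 1970s-style "expert system": hand-written rules whose
# conclusions can be analysed. All thresholds here are invented examples.
def expert_system_decision(income, debt_ratio):
    reasons = []
    if income < 30_000:
        reasons.append("Rule 1: income below 30,000")
    if debt_ratio > 0.4:
        reasons.append("Rule 2: debt ratio above 40 per cent")
    decision = "refuse" if reasons else "approve"
    return decision, reasons

decision, reasons = expert_system_decision(income=25_000, debt_ratio=0.5)
print(decision, reasons)  # every refusal comes with the exact rules that fired
```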

Neural networks have attracted companies in fields from finance and medicine to security. But the fact remains that those companies have no idea what their black boxes are doing. One reason emerged when scientists found that the better an AI is at its task, the less explicable its decisions become. The US Defense Advanced Research Projects Agency is funding research to close the gap. The motivation is simple – unable to explain itself, AI will never be fully trusted by its human creators.

For those now using AI to make money, the hope must be that computer scientists find ways of keeping them on the right side of the law. After 50 years, 2001: A Space Odyssey is remembered for its vision of the power of technology. But perhaps its greatest legacy is its warning about knowing just what is going on inside the likes of HAL.

Neural networks are good at tasks but it is impossible to be sure how they make decisions

Metro-Goldwyn-Mayer – HAL, the ship’s computer in 2001: A Space Odyssey, is a neural network device that turns against the crew
