ARTIFICIAL INTELLIGENCE

How will AI overcome human nature?

- Robert Matthews is Visiting Professor of Science at Aston University, Birmingham, UK

For those lucky enough to get in, the UAE AI Summer Camp that begins today and runs through summer may prove to be a transformative experience. Funded by the Ministry of State for Artificial Intelligence, featuring speakers from companies such as Microsoft and IBM, and aimed at pupils, university students and government executives, the camp sold out in 24 hours.

Camp-goers will be able to build systems such as AI chatbots that converse with human beings. As someone who began working on AI systems 25 years ago, I understand the excitement of getting computers to mimic brain-like abilities, however crudely.

But I also know that AI enthusiasts are prone to overlooking the single biggest obstacle to the adoption of the technology – human nature. Time and again the reaction of humans to AI has hobbled its advance.

I speak from experience. In the early 1990s, I created a form of AI that could recognise literary style and distinguish between authors.

Working with a writing-style expert, I programmed a computer to behave like a neural network, now one of the most widely used forms of AI. These are good at mimicking the brain’s ability to detect patterns in a mass of data.

A neural network does this using mathematical recipes – algorithms – to learn how to categorise data. Then, when presented with data it hasn’t seen before, it uses its training to place the new data in one of the categories it has learnt.

We trained a neural network to recognise the writing style of William Shakespeare and some of his contemporaries using samples of their work.

When we then showed it a new text, the neural network proved remarkably good at identifying the right author.
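
To give a flavour of that train-then-classify process, the short Python sketch below uses the off-the-shelf scikit-learn library rather than anything from our original study; the two tiny text samples, the crude word-count features and the network settings are all placeholders chosen purely for illustration.

# A minimal sketch of the idea described above: train a small neural
# network on text samples labelled by author, then ask it to guess the
# author of an unseen passage. Everything here is illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

training_texts = [
    "to be or not to be that is the question",           # placeholder sample
    "was this the face that launched a thousand ships",  # placeholder sample
]
training_authors = ["Shakespeare", "Marlowe"]

# Turn each text into a vector of word counts, a crude stand-in for the
# stylistic features a real stylometry study would use.
vectoriser = CountVectorizer()
features = vectoriser.fit_transform(training_texts)

# A small feed-forward neural network learns to map those features to authors.
model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(features, training_authors)

# Show the network a passage it has not seen and read off its best guess.
unseen = vectoriser.transform(["shall i compare thee to a summers day"])
print(model.predict(unseen))

On such a tiny corpus the answer means little, of course; the real work relied on far larger samples and far richer measures of style.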

Our aim was to use AI to investigate controversies about Shakespeare. We found evidence that some of his early plays bore a writing style strikingly similar to that of his rival, Christopher Marlowe.

Claims that Shakespeare used unpublished scripts by his contemporary were circulating even during his lifetime. Yet when we published our work, we hit problems. People are suspicious of computers with human-like characteristics.

Critics argued that we had not given a clear explanation of why the computer had made the decisions it had. And they had a point. To this day, neural networks remain something of a black box – it’s very hard to tell why they make the decisions they do.

But that’s a criticism that could be made of humans. You may recognise Shakespearean prose when you hear it – but can you say why?

Decades later, our findings are still debated, but chiefly by computer scientists. Literary scholars, in contrast, remain suspicious of attempts to capture their skill using AI.

In that, they’re not unique. Despite all the excitement surrounding driverless cars, polls have repeatedly shown that most consumers don’t trust the AI technology behind them.

Evidence suggests suspicions are increasing – driven perhaps by the number of AI-related accidents.

AI enthusiasts argue that the power of the technology will eventually win over doubters.

But that’s what they said in the 1980s, AI’s previous golden age. Back then, the big buzz-phrase was a form of AI called expert systems.

Put simply, these involve picking the brains of human experts in, say, medical diagnosis, and capturing their thought processes in ways computers can execute.
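
To make that concrete, here is a minimal Python sketch of the if-then style of reasoning such systems encode; the rules and symptoms are invented for illustration and have nothing to do with real medical practice.

# A toy illustration of the expert-system idea: an expert's reasoning is
# written down as explicit if-then rules that a computer can execute.
def diagnose(symptoms):
    """Apply hand-written rules to a set of reported symptoms."""
    if "fever" in symptoms and "cough" in symptoms:
        return "possible respiratory infection"
    if "fever" in symptoms and "rash" in symptoms:
        return "possible viral illness"
    return "no rule matched, refer to a human expert"

print(diagnose({"fever", "cough"}))   # possible respiratory infection
print(diagnose({"headache"}))         # no rule matched, refer to a human expert

Unlike a neural network, every conclusion such a system reaches can be traced back to a specific rule.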

At first, there was excitement about expert systems. Many organisations found themselves caught up in the hype. But they, too, ran into the problem of people being reluctant to cede power to AI.

Fast forward 30 years, and AI is again riding high. But the problems remain the same. AI still generates suspicion.

IBM, one of the sponsors of the Summer Camp, knows this better than most. Its Watson AI system has become a poster-child for the capabilities of AI since 2011, when it beat human champions on the US TV quiz show Jeopardy.

In 2013, IBM teamed up with the Memorial Sloan-Kettering Cancer Centre in New York.

Watson was trained by clinicians to interpret cancer data and medical trial results.

Initially, the company spoke of a “new era” in cancer treatment. But reports emerged of Watson struggling with the complexity of the tasks mastered by humans. The project now faces cutbacks.

Commentators talk of the marketing running ahead of Watson’s abilities – the same issue that led to the AI bubble’s collapse in the 1980s.

There are lessons here for those at the summer camp. First, don’t be seduced by the apparent power of the technology. Second, never underestimate the power of hype to turn a revolution into a bubble.
