Cape Argus

AI breakthrough is a milestone, not a convincing victory


WHEN computer models designed by tech giants Alibaba and Microsoft this month surpassed humans for the first time in a reading-comprehension test, both companies celebrated the success as a historic milestone.

Luo Si, the chief scientist for natural-language processing at Alibaba’s artificial intelligence (AI) research unit, struck a poetic note, saying: “Objective questions such as ‘What causes rain?’ can now be answered with high accuracy by machines.”

Teaching a computer to read has for decades been one of AI’s holiest grails, and the feat seemed to signal a future in which AI could understand words and process meaning with the same fluidity humans take for granted.

But computers aren’t there yet – and aren’t even really that close, said AI experts who reviewed the test results. Instead, the accomplishment highlights not just how far the technology has progressed, but also how far it still has to go.

“It’s a large step” for the companies’ marketing “but a small step for humankind”, said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, an AI research group funded by Microsoft co-founder Paul Allen.

“These systems are brittle in that small changes to paragraphs result in very bad behaviour” and misunderstandings, Etzioni said. And when it comes to, say, drawing conclusions from two sentences or understanding implied ideas, the models lag even further behind.

The test involved Stanford University’s Question Answering Dataset, a collection of more than 100 000 questions that has become one of the AI world’s top battlegrounds for testing how machines read and comprehend.

The models are given short paragraphs taken from more than 500 Wikipedia pages spanning a range of subjects, including Jacksonville, Florida; economic inequality; and the Black Death. Fed a paragraph about Super Bowl 50, for instance, the models are then asked which musicians headlined the halftime show.

The first test, in August 2016, of a model created by researchers at Singapore Management University, lagged behind a measure of human performance – people on crowdsourced platforms, such as Amazon’s Mechanical Turk, who earn money for taking surveys or completing small tasks.

But after dozens of subsequent tests, researchers this month submitted proof that their models had finally, if narrowly, beaten the humans – a score of 82.6 for Microsoft Research Asia’s model compared with the human benchmark of 82.3.

As both Microsoft and the Chinese tech powerhouse Alibaba claimed first-in-AI victories, a flood of glowing media reports followed, positing that AI could not just read better than humans but would also – as Luo Si said in a statement – decrease “the need for human input in an unprecedented way”.

Microsoft said it was using similar models in its Bing search engine, and Alibaba said its technology could be used for “customer service, museum tutorials and online responses to medical enquiries”.

But AI experts say the test is far too limited to compare with real reading. The answers aren’t generated from understanding the text, but from the system finding patterns and matching terms in the same short passage. The test was done only on cleanly formatted Wikipedia articles – not the wide-ranging corpus of books, news articles and billboards that fill most humans’ waking hours.

Adding gibberish to the passages that a human would easily ignore often confused the AI, making it spit out the wrong result. And every passage was guaranteed to include the answer, sparing the models from having to process concepts or reason with other ideas.

Stephen Merity, a research scientist who works on language AI at cloud-computing giant Salesforce, said it was an “amazing achievement”, but added that calling it superhuman was “madness”.

“There’s no built-in ability for the model to determine or signal that it thinks the paragraph is insufficient to answer the question,” he said. “It’ll always spit you back something.”

Even Pranav Rajpurkar, a Stanford AI researcher who helped design the Stanford test, said there remains “actually quite a big jump” before machines can truly read and understand.

“The goal has always been to get to human-level performance and it has been inching closer and closer to it,” Rajpurkar said.

The real miracle of reading comprehension, AI experts said, lies in reading between the lines: connecting concepts, reasoning with ideas and understanding implied messages that aren’t specifically outlined in the text.

In those realms, AI is still very much a work in progress. Computer models tested by the Winograd Schema Challenge, which asks them to comprehend the meaning of vague sentences that a human would nevertheless understand, have shown mixed results. Merity outlined one example in which today’s AI systems might still struggle to reasonably comprehend: asking the difference between a car “filled with gas”, “filled with petrol” and “filled with oranges”.

AI researchers said they’re eager to push on to new challenges of comprehension beyond basic Wikipedia reading: the Allen Institute, for example, is training AI to answer SAT-style maths problems and middle-school-level science questions.

But AI experts said people should be less concerned about losing their jobs to machines that thoughtfully read passages about the rain – or anything else.

“Technically it’s an accomplishment, but it’s not like we have to begin worshipping our robot overlords,” said Ernest Davis, a New York University professor of computer science and longtime AI researcher.

“When you read a passage, it doesn’t come out of the clear blue sky, it draws on a lot of what you know about the world,” Davis said. “We really need to deal much more deeply with the problem of extracting the meaning of a text in a rich sense. That problem is still not solved.”

