AI: Its Nature and Future Margaret A Boden Oxford University Press 2016 Hb, 198pp, illus, ind, £12.99, ISBN 9780198777984
Optimists believe that artificial intelligence (AI) will help overcome almost every issue facing humanity, including war, pestilence, hunger and even, by uploading consciousness, death. Pessimists envisage a future closer to Skynet’s, in which the machines take control. I’m sceptical: AI, at least in our lifetimes, is proving to be neither as good nor as bad as optimists and pessimists predict. Indeed, I have a sneaking suspicion that if AI ever reaches the kitchen we’ll drink, as Arthur Dent found, “a liquid that was almost, but not quite, entirely unlike tea”.
One of the problems facing any discussion of AI is defining what ‘intelligence’ actually is: it’s a notoriously subtle, multifaceted and enigmatic concept. I love pondering a chess problem. Yet I can set my Fritz chess program to beat me easily. My social and emotional intelligence would get me expelled from the Amalgamated Union of Wallflowers, Recluses and Associated Timid People for being too shy. Yet AI is beginning to model emotional intelligence, such as when ‘computer companions’ respond with sympathy or sexually alluring behaviours and speech. Sooner or later they’ll beat me there as well, I suspect. After all, they’re all facets of intelligence, all depend on processing data, all rely on evaluating information. But are they really more ‘intelligent’ in the sense we generally use the term? I hope not.
Boden defines AI as seeking “to make computers do the sorts of things that minds can do”. She points out that intelligence is “a richly structured space of diverse information-processing capacities”. This definition allows Boden to cover the five major types of AI: classical AI; artificial neural networks; evolutionary programming; cellular automata; and dynamical systems. While the forms and uses of AI differ widely, all are essentially systems that process information.
Though often portrayed as the wave of the future, many aspects of AI have a surprisingly long intellectual heritage. In the 1840s, for example, Lady Ada Lovelace predicted elements that now form part of the foundations of AI – such as processing symbols that potentially represent “all subjects in the universe”. Lovelace’s interest in logic inspired her description of several basic programming concepts, including stored programs, hierarchical subroutines and bugs. In the late 1950s, Arthur Samuel developed a program that beat its creator at draughts.
By the 1960s “an intellectual schism” had developed between AI researchers interested in life, who worked in cybernetics, and those interested in mind, who worked on symbolic computing. Researchers interested in networks straddled mind and brain, but because they mainly studied associative learning they sat closer to cybernetics. Boden notes that “there was scant mutual respect between these increasingly separate subgroups”. She eloquently traces the implications and developments that arose from this schism. While AI discussions inevitably focus on the future, Boden recounts the discipline’s fascinating history, which helps place all the hype in context.
We might not have the T-800, Marvin or Deep Thought. But AI drives robots on Mars, animates Hollywood movies, distracts you with mobile phone apps, hopefully gets you to where you’re going with sat-navs, and predicts stock market movements. AI is already so ubiquitous that, arguably, we all need at least a passing acquaintance with its core concepts, ideas and trends. Boden’s book is an excellent, accessible introduction even for the complete AI novice.
Less prosaically, Boden notes that philosophers use AI concepts to help illuminate issues such as free will, consciousness and human creativity. Biologists use AI to model aspects of living organisms and hopefully better understand the elusive nature of life. Indeed, despite AI’s achievements, Boden’s eloquent book also shows just how remarkable the human brain really is. “AI has taught us that human minds are hugely richer, and more subtle, than psychologists previously imagined,” she writes. “Indeed that is the main lesson to be learned from AI” (italics in original).