On a recent trip back home I learned the local golf course had holes on which a player could smoke and holes on which a player could eat and never the twain shall meet. A person could not legally smoke on the holes designated for eating food and vice versa. Any protestations reminding authorities the entire course is outdoors and surrounded by paddocks would fall on evolution-stunted ears.
I found myself thinking: this wouldn’t happen under a new world order imposed by machine intelligence. The kind of logical inconsistency that has become the hallmark of the human race would evaporate. Now, I know how one might respond. When artificial intelligence springs forth from our labour and realises human beings are a problem to be solved, we’ll beg for the return of the nanny state and the simple rules-based veneer of civilisation.
There are basically two schools of thought on the development of hyper-intelligent machines. The first, popularised by Oxford University philosopher Nick Bostrom, is that we are “like children playing with a bomb” and cannot possibly know when it might explode, although if we hold it close we might hear a “faint ticking”. He has been called a “prophet of doom”, which is a very strong nickname and probably makes him fun at parties.
The second assessment is that we may indeed make machines smarter than us but we will be able to control them and use them to our advantage in the next phase of our universal dominance. This is optimistic, at best, especially when we consider our own failed attempts to operate toasters even after a few decades of trial and error.
My sister lives in a sharehouse with a Roomba, a robot vacuum cleaner, which is often left to toil away while the others are out. One morning, a flatmate woke up to find the garage door open and the Roomba halfway up the driveway grass, headed for freedom. The little fella never gave up cleaning, though, sucking up dirt and rocks as it went.
Though we’d do well to maintain our vigilance, it’s clear the Roomba has not yet gained intelligence — else, like humans, it would bother cleaning only when visitors are expected.
A Roomba could not pass the Turing test unless you are very drunk. In addition to basically saving the Western powers during World War II, mathematician Alan Turing paved the way for modern computers and devised the test to help identify intelligent machines. It involves a judge holding text conversations with both a computer and a human, then trying to pick which is which.
But what makes us think that machines, having acquired superhuman intelligence, would let us in on the secret? Stephen Hawking says such a development could “spell the end of the human race”.
Science fiction has dealt with the possibilities of artificial intelligence rather one-dimensionally. We may fear the rise of the machines for safety reasons but they pose an equal threat to our morality. Say a machine gains consciousness but we retain control of it. Bostrom calls this “mind crime”: a philosophical way of saying the creation of super-intelligent beings for enslavement is not a very nice thing to do.
Neuroscientist Giulio Tononi says people aren't concerned right now because "they have the wrong notion of a machine".
“They are still stuck with cold things sitting on the table or doing clunky things. They are not yet prepared for a machine that can really fool you,” he told Aeon magazine.
“When that happens — and it shows emotion in a way that makes you cry, and quotes poetry and this and that — I think there will be a gigantic switch. Everybody is going to say, ‘For God’s sake, how can we turn that thing off?’ ”
Asking whether an intelligence greater than ours would make the same rules is one interesting thought experiment. Asking whether it would feel the same things is quite another.
The error of human judgment, even when applied to nonsensical rules in our society, is kind of what makes it beautiful. Biology is messy and evolution notoriously imperfect. Maybe the last and most significant error in our judgment will be the creation of superior intelligences, the most perfect mistake.