The dark side of Artificial Intelligence is Doomsday scary
God be with the days when AI meant a fella coming to get the cows pregnant. Now it’s about the possible extinction of the human race. Artificial intelligence is everywhere. It’s in banks, in cameras on the street. All over your social media. It can make a medical diagnosis or compose music or play chess. If artificial intelligence were a TV character, it would be a mix of Sheldon from The Big Bang Theory and a handsome doctor from Grey’s Anatomy.
New York City Council is currently going through a process to establish exactly how many algorithms are involved in the governance of the city: because they don’t know how many there are. They do know that algorithms are involved in the allocation of police officers, food stamps and public housing. In other parts of the United States, police departments have commissioned data companies to come up with ways of predicting crime. Yes, like the Tom Cruise film.
Yet this isn’t the big one: all the above are narrow artificial intelligences. They have a specific set of functions and are only as good or as bad as the people who programmed them. (Though they can learn. Google’s Translate tool made up its own language through which it translates the languages us fleshbots speak.)
What scientists, philosophers, politicians and anyone with a healthy interest in doomsday really worry about is artificial general intelligence: a computer that thinks for itself.
This is not a science fiction fantasy. Many AI researchers predict that the spooky-sounding singularity could occur this century, possibly around 2050.
It’s where technology and philosophy crash into each other, because a super-intelligent AI will be able to build an even more intelligent AI, and so on and so on – ending up with an artificial intelligence that has God-like powers. It’ll be able to wipe out all disease and end climate change, bring about world peace and produce a new, even better series of The Sopranos. A group in Silicon Valley has already established a religion in advance of its arrival. Some propose that we should ask the AI what we want, because it will know better. It will be able to resurrect dead people or create Matrix-like realities for humans that we won’t be able to distinguish from the real thing.
It may even have done so already. There’s a thought exercise called Roko’s Basilisk, which is head-meltingly complex, but part of it is that we only think AI has not emerged yet. The AI has in fact created a pre-AI reality to test how humans will react to the possibility of AI. If you’re not keen, then the Basilisk won’t be too pleased with you, as you’re in favour of denying it existence.
This would be a particularly needy super-intelligence, but it’s striking how many of the world’s leading scientific and technological thinkers – Bill Gates, Elon Musk, Tim Berners-Lee and the late Stephen Hawking – all worry that an AI will regard us as annoying bugs.
There’s another theory that the reason why we haven’t found life on other planets is that they already created AI: and the AI wiped them out.
There are some optimists, of course, but it’s striking how the dark predictions describe an entity like the pre-Christian or Old Testament Gods. The AI is driven by cold logic, yet can still act in apparently petty and jealous ways. It’s a reflection of the worst impulses in humanity, but one that has come about as a result of what’s most impressive about us: our creativity. If this happens, we’ll have made a God – and presumably, wiped out belief in the old lo-tech deity. Yet we’ll fear it. Just the way we fear ourselves.