Keeping a check on amoral algorithms
Several years ago I played ice hockey. I was very bad at it. Nonetheless, at some point during an especially fast and furious match — we called it an “industrial” match — somewhere in Western Ontario, the blade of my hockey stick slashed the face of an opponent. The slash drew blood. My opponent, who was probably twice my size, was livid. He dropped his gloves, code-speak for being ready to fight.
I came really close to being beaten to within an inch of my life. Somewhat tragically, sometimes an ice hockey match in rural Canada is incomplete without a fight.
It was an accident, I told the referee. Your stick, he said, is an extension of your arm. You have to be in control of it at all times. I was sent off the ice and left the arena by the back door.
What is the point in all of this, other than misguided icecapades? Well, in a highly competitive, fast-paced ice hockey match, where everyone has some basic form of protection, there are rules and regulations. In a hockey game, the stick is an instrument, a low-tech device, and its use has to be regulated.
The stick on its own can be trusted, but the person wielding it has to be responsible for its use at all times.
And so we get to artificial intelligence (AI), automation and robotics. From the algorithms that track and collect our data whenever we visit a website, to the drones that drop bombs on people in remote towns and villages — and soon probably our Amazon orders — there is enormous potential for abuse, lapses in morality and ethical conduct and, well, accidents.
As in ice hockey, there have to be regulations, and someone has to be held accountable.
With good reason, many of us marvel at the novelties of information and communications technology and the Candy Crush revolution.
The wow factor of seeing special offers on your Facebook timeline just after a Google search for a new laptop fades a little each time, until we accept it as normal.
The point is that Facebook and Google are two separate and independent companies. How does Facebook know that I searched for a new laptop on Google?
Without sounding conspiratorial, someone is watching our activities online. Impersonal, and by extension amoral, algorithms are tracking our tastes and preferences, where and when we travel, and probably our most intimate discussions. You can be rude, obnoxious or offensive to a person, but an algorithm that tracks your movements or preferences has no feelings.
One of my secret and seemingly incongruous intellectual passions is war, warfare, the philosophy of war and its impact on society. And, in the age of 21st-century robotics, I have developed an interest in the ethics of war. As a pacifist, I have tried not to share that too widely.
Which brings me to the use of drones and remote-controlled vehicles. Driverless cars are in the early stages of development, and there are broad and deep discussions to be had about them.
The use of drones to drop bombs or in targeted assassinations is becoming cause for concern among scholars and thinkers on war and strategy. The general drift of discussions around the issue is that it decentres us from war and its consequences and, along the way, disassociates pilots from reality and removes soldiers from battlefields. It’s all in the game.
Most people will agree that as soon as a safety device is designed, the bad guys devise ways around it.
At some point in the not-too-distant future all the glitches will have been removed, and driverless cars, buses and cargo ships — and possibly aircraft — could be driven by AI systems and guided by satellites.
This will no doubt continue to drive the information and communication technology revolution and the “new globalisation”. But we cannot ignore the likelihood that there may be bad guys who direct car bombs or drones into buildings and villages.
Worse still, if we contract out decision-making on, say, going to war to an automated system (driven by amoral algorithms), we remove the human element. This is already happening on a small scale with targeted assassinations in western Asia.
A second concern, seemingly far-fetched but not unimaginable, is when machines make the actual decisions by themselves. We cannot roll back scientific discoveries or technological advances, but we can, and probably should, regulate the application of new technologies.
If I may use the hockey stick analogy, the algorithms and drones are extensions of states and firms. A machine has no feelings, morals and ethics, but its designers, users and owners do — and they have to be held accountable.