The driverless cars that are too good for the rest of us
They get into more crashes than human-driven vehicles because they are not programmed to bend the rules of the road now and again
Minor car accidents are not typically deemed newsworthy enough to warrant international coverage, but that’s what happened last week when a small shuttle bus was involved in a collision with a delivery lorry in Las Vegas. The key difference this time was that the bus had no driver. In a trial of self-driving vehicle technology in the city, the bus was fitted with a series of sensors and processors that allowed it to navigate a small loop of roads, ferrying visitors around without anyone at the wheel. The crash was also particularly noticeable because it occurred just an hour into the first day of the poor vehicle’s trial, a debut even most learner drivers would be embarrassed by.
Except for one thing: the crash was not the driverless car’s fault. The delivery lorry, and its human driver, reversed into the shuttle, having failed to see it. There were no injuries, and in fact, the driverless technology worked as required: the shuttle stopped as it sensed the lorry reversing in its direction. It just couldn’t do anything about the other driver’s carelessness.
The incident is only the latest in a string of driverless car accidents that have one clear thread running through them: it was the other guy’s fault. Earlier this year, a driverless car being tested by Uber in Arizona was flipped on its side when driving through a yellow light, after a human-driven car attempting to cross the junction crashed into it. The handful of incidents that Google’s autonomous vehicles have been involved in have almost all been caused by other cars.
These incidents might appear to make the arguments for driverless vehicles stronger: robots make better drivers than their fleshy counterparts. One could argue that we need more driverless cars on the road, and should, in fact, hasten their development.
However, it is not nearly that simple. The statistics show that driverless vehicles actually get into far more scrapes than human-driven ones, even if they are not technically at fault. A 2015 study from the University of Michigan’s Transportation Research Institute found that self-driving vehicles get into 9.1 crashes every million miles they drive, against 4.1 crashes for cars driven by humans.
This appears to be a contradiction. Driverless cars get into more collisions, but they are almost never the driverless car’s fault. How can they be safer, and yet be involved in more crashes?
The most plausible answer is that driverless cars actually expose how bad human drivers are. While they are programmed never to speed, to give way to others as much as possible and generally to obey every rule of the road – in other words, to be perfect drivers – we are not.
Anyone who has ever seen a driverless car in action can attest to their precision: they take corners in perfect arcs, never cutting them, and would certainly never jump a red light. If a person walks out in front of one, it will stop instantly, with superhuman reflexes.
But this creates problems for the rest of us. We have grown so used to interacting with other human drivers, anticipating their flaws and idiosyncrasies, that perfect robots have us out of sorts. Passengers on the driverless shuttle in Vegas did not remark at the lorry’s carelessness, but that their robot car failed to anticipate it.
In the case of the Uber crash earlier this year, the driver at fault had illegally cut across two lanes of traffic beforehand, but the humans in both lanes had seen this and held back; the driverless car had not.
The tech industry has a phrase for these kinds of problems: “You’re holding it wrong”, coined after Steve Jobs’ now notorious excuse when customers complained that the iPhone 4 dropped calls when gripped a certain way. It has since become a catch-all for blaming humans for technological faults. Driverless cars are typical of the “you’re holding it wrong” problem: the technology might work flawlessly, but the humans don’t.
When roads have no more human drivers on them, we are likely to be much safer – human error is involved in 90pc of accidents – but what about in the period until then?
The arrival of driverless cars won’t be like flicking a switch. There will be a transition period, most likely taking decades, between the first fully autonomous vehicles and the last human drivers on the road.
As well as potentially more accidents during this period, there may also be widespread frustration at cars obeying speed limits, or being too polite – in 2015, a Google driverless car was pulled over by police for driving too slowly. It’s possible that public opinion of driverless vehicles, already lukewarm, could worsen because of their rigid adherence to the rules.
Tech companies are taking steps to combat this. The cars tested by Waymo, the unit spun out of Google last year, now drive more assertively, cutting corners and inching forward at junctions.
It is a good example of how technologists have to understand the imperfect world their technology inhabits, and adapt to it. For driverless cars to become a reality, they must deal with their biggest problem: the flaws of human beings.
‘We’re used to dealing with other drivers’ flaws, so perfect robots have us out of sorts’