Ottawa Citizen

Despite the hype, human eye still better than sensors on autonomous vehicles

- DAVID BOOTH twitter.com/MotorMouthNP David@davebooth.ca Driving.ca

If there’s one thing the past five years of automotive technology have taught us, it’s that we human beings are deficient. Woefully deficient. Despite numerous automotive safety advances — airbags, anti-lock brakes, vehicle stability control systems, to name but a few — we’re still killing ourselves with remarkable regularity: 1,834 Canadians died in automobile crashes in 2014, 32,675 in the United States.

More troubling to our collective self-confidence, however, is that we’re now being told that we’re the primary cause of all these crashes and that a computer — sometimes it feels like any old computer would do — would be far more reliable than our own humble carbon-based selves. And the numbers back that contention up. The American National Highway Traffic Safety Administration (the famed NHTSA) contends that 93 per cent of all fatal automotive collisions are the result of preventable human error, the key word being preventable.

The implication is obvious: We humans are too slow, too unreliable and, most recently, too distracted to be trusted with high-speed machinery. In a word — imply the studies — we’re inferior. Which made a University of Michigan report titled Sensor Fusion: A Comparison of Sensing Capabilities of Human Drivers and Highly Automated Vehicles that came across my desk last week an absolute must read.

An exhaustive comparison was made between the technologies that let autonomous cars “see” and the good old human eye. My expectations were that the juxtaposition would not only expose, but specifically quantify, just how clearly deficient we humans really are compared with machines. Except that it didn’t. In fact — surprise, surprise — the human eye, according to study author Brandon Schoettle, is a pretty amazing sensor. Indeed, in many regards it is vastly superior to the much-ballyhooed Lidar, radar and camera systems being trumpeted as the future of driving.

SEE FURTHER

In normal circumstances we can see further (1,000 metres compared with about 250 m for the best of the automated seeing-eye dogs), have a far wider field of vision, are better able to recognize what we are looking at and can track lanes far better than current automated systems. Only when it comes to our “dark or low-illumination performance” (essentially, our night vision) does the human eye let down the side, able only to see 75 m with the help of automotive high beams.

Despite this one disadvantage, it’s clear that the human eye is a truly amazing “sensor.” Multiple technologies — Lidar, radar and cameras — and, even then, multiple units of each are required to match the distance and range of the human eye in detecting potential dangers.

For instance, even accounting for our slower reaction times (the study that concluded there is a 2.5-second gap between impetus and reaction that Schoettle attributes to humans had to have been conducted in Central Florida), our greater (daytime) vision would allow us to drive around at 405 km/h and still stop in time for a hazard we see 1,000 metres distant. By comparison, a faster-reacting radar system is only good for 215 km/h and a stereo camera-based system barely half of that.
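The arithmetic behind those numbers is simple stopping-distance math: total stopping distance is reaction distance plus braking distance, and the maximum “safe” speed is whatever makes that total just fit inside the sensor’s detection range. The column doesn’t give the braking assumptions, so the sketch below assumes a hypothetical dry-road deceleration of about 8.8 m/s² (roughly 0.9 g) and a guessed 0.5-second reaction time for the radar; with those assumptions it lands close to the figures quoted above.

```python
import math

def max_safe_speed_kmh(detection_range_m, reaction_time_s, decel_ms2=8.8):
    """Highest speed (km/h) at which reaction distance plus braking
    distance still fits within the detection range.

    Solves v*t + v**2 / (2*a) = d for v and takes the positive root.
    decel_ms2 = 8.8 m/s^2 (about 0.9 g) is an assumed dry-road braking
    figure; the report's actual braking assumptions aren't quoted here.
    """
    t, a, d = reaction_time_s, decel_ms2, detection_range_m
    # Quadratic in v: v**2 + 2*a*t*v - 2*a*d = 0
    v = a * (-t + math.sqrt(t**2 + 2 * d / a))  # m/s
    return v * 3.6  # convert m/s to km/h

# Human driver: ~1,000 m daytime sight range, 2.5 s reaction time
print(round(max_safe_speed_kmh(1000, 2.5)))  # 405 km/h, matching the column
# Radar: ~250 m range with a guessed 0.5 s reaction time
print(round(max_safe_speed_kmh(250, 0.5)))   # ~223 km/h, near the quoted 215
```

Exact numbers shift with the braking and reaction-time assumptions, but the relationship the study relies on holds either way: the farther a sensor can see, the faster the vehicle can travel and still stop short of a hazard.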

Now, those are just theoretical calculations; I am assuming Schoettle didn’t actually conduct any experiments on public roads to verify his thesis. But it does point out that we humans see so much further ahead than automated systems that, even with our slower reaction times, we still get more warning of potential hazards than our digital counterparts do.

Schoettle also points out the comparatively narrow range of vision of current sensors. Besides our peripheral vision, a simple turn of the neck widens the human range of vision considerably. And while we humans suffer from gaps in our vision — the “blind spots” caused by things such as C-pillars — autonomous systems also suffer limits in sensor coverage, most notably line-of-sight obstructions. The study illustrated how significantly even two adjacent vehicles can cut down a self-driving car’s field of vision as compared with a human’s. In the cases illustrated by Schoettle, the other vehicle would have been visible to a human driver sooner than to a robotic one.

Nor does this purely technical sensor evaluation account for any of the intangibles, i.e., human intuition. Schoettle cites the example of a vehicle about to make a left turn in front of traffic or about to cross a busy highway (the kind of situation that caught Tesla’s Autopilot out in 2016). An autonomous vehicle would have to wait until the offending vehicle crossed into its path, something the NHTSA says “challenges the system’s ability to perform threat assessment, the target usually recognized very late or not at all prior to impact.” But driving experience would already have the human driver monitoring the potentially offending vehicle and subconsciously planning escape routes, especially if they were a completely paranoid biker, like Yours Truly. Future autonomous systems with artificial intelligence (AI) may someday be able to emulate human intuition, but for now automotive technology can only react to situations, not anticipate them.

Indeed, after wading through 30 pages of Schoettle’s report, two conclusions jump out. One is that, as the author states, the most reliable road to full Level IV or V autonomy (that’s where a human doesn’t even have to be behind the wheel) is for all cars to be connected by dedicated short-range communications systems. By “talking” to each other, cars could “virtually” extend their sensor range to human levels and, combined with their quicker reaction times and greater processing power, truly deliver the greater safety that autonomous driving promises. Without such a connection, sensor range would seem a notable deficiency, even compared with the human eye.

The other conclusion one can draw from the study — though the author does not draw it — is that, as Motor Mouth has long contended, the reason we’re being bombarded with self-driving technology is that we’re simply not concentrating on our driving. As much of Schoettle’s study implies, it’s not that we are incapable of driving safely; it’s that — thanks to things like texting and impairment — we are unwilling.

Indeed, if there is a lesson from Sensor Fusion: A Comparison of Sensing Capabilities of Human Drivers and Highly Automated Vehicles, it isn’t that computers are better at driving than we are, it’s that they’re more reliable.

There’s a compliment buried in there somewhere, alongside the jarring conclusion that, as usual, we are the architects of our own demise.

JUSTIN TALLIS/AFP/GETTY IMAGES: People look toward an autonomous self-driving vehicle, as it is tested in a pedestrian zone in Milton Keynes, north of London, England, in a trial by Transport Systems Catapult.
