The Atlanta Journal-Constitution

Scientists gauge safety of driverless cars in ‘Grand Theft Auto’

Researchers get to ‘crash as many cars as we want.’

- By Jason Laughlin

PHILADELPHIA — A sporty black sedan speeds dangerously close to a cliff on a road winding through an arid landscape. The car recovers and swerves back onto the cracked asphalt, but another sharp turn is coming. It straddles the edge of the cliff, its tires spinning through pale, sunburned sand. Then it falls. Sagebrush and rock outcroppings blur past as it plummets. No driver emerges from the car. No police show up. A virtual sun keeps beating down.

The crash occurred in a modified “Grand Theft Auto” video game, an example of the virtual simulations researchers at the University of Pennsylvania are running to evaluate autonomous vehicles, a technology that in the coming years could transform how Americans get around.

“We can crash as many cars as we want,” said Rahul Mangharam, associate professor in the University of Pennsylvania’s department of electrical and systems engineering.

Mangharam and his team of six are pursuing what they describe as a “driver’s license test” for self-driving cars, a rigorous use of mathematical diagnostics and simulated reality to determine the safety of autonomous vehicles before they ever hit the road.

Complicating that task is the nature of the computer intelligence at the heart of the car’s operation. The computer is capable of learning, but instead of eyes, ears, and a nose, it perceives reality with laser sensors, cameras, and infrared. It does not see or process the world like a human brain. Working with this mystery that scientists call “the black box” is a daunting, even spooky, element of the work at Penn.

“They’re not interpretable,” Mangharam said. “We don’t know why they reached a certain decision; we just know they reached a certain decision.”

Clarity on how safe driverless vehicles can be is a critical step toward maturing a technology many think will someday save thousands of lives. Last year, 37,461 people died in vehicle crashes in the United States, the National Highway Traffic Safety Administration reported. About 94 percent of crashes happen because of human mistakes, NHTSA has found, and self-driving cars hold the promise of preventing many of those deaths.

The technology is not yet ready for prime time, most experts agree, and a premature introduction could result in deaths. Experts fervent in the belief that driverless cars eventually will save lives fear that fatal crashes caused by autonomous system failures could scare the public and delay adoption of the technology for years.

“They need to be very transparent in the development of this technology,” said Leslie Richards, Pennsylvania’s transportation secretary. “To get the public buy-in, people do need to understand.”

In Pennsylvania, much attention related to autonomous cars is focused on Pittsburgh, where last year Uber began operating self-driving cars and Carnegie Mellon University has positioned itself as a leader in the field. Penn and Carnegie Mellon are working together on Mobility21, a five-year, federally funded, $14 million program to explore transportation technology, including self-driving vehicles.

While colleagues at Carnegie Mellon experiment with their own autonomous car, Penn’s scientists work in a lab that looks like a middle schooler’s rec room. Whiteboards are covered in complex equations, but the shelves hold jury-rigged toy cars, and the computer screens display video games. It’s all in service of rating robot drivers: how much variation in the environment and in the car itself the system can withstand without a failure.
The researchers virtually drive cars in different weather and lighting, testing how well the software works with the changes it would face in the real world. “You can never have 100 percent safety,” Mangharam said. “You can design a system that would not be at fault intentionally.”

Mangharam describes autonomous vehicles as continuously executing a three-step process. The first step is perception, the system’s attempt to understand what is in the world around it. It should be able to spot a stop sign and other vehicles on the road. Then, the data gathered is used to make a plan, which starts with the destination, formulates a route, and then decides how to navigate that route. The car decides what speed, braking, and trajectory are needed to, for example, get around a slow-moving car on the highway while trying to reach an off-ramp. The third step is the process of driving, the application of brakes, gas, and steering to get where the vehicle is directed to go. (A simplified sketch of this cycle appears below.)

The Penn scientists run the autonomous driving software, called Computer Aided Design for Safe Autonomous Vehicles, through both mathematical diagnostics and the virtual reality test drives on “Grand Theft Auto” to see where the system fails. The video game is particularly useful because the autonomous driving system can be rigged to perceive it similarly to reality, and because the virtual environment can be perfectly controlled by scientists.

The autonomous driver’s inscrutable nature can be challenging, though. Deep neural networks teach themselves how to identify objects through a process of trial and error as they are fed thousands of images of people, trees, intersections and anything else a car may encounter on the road. The system becomes increasingly accurate the more examples it is fed, but humans cannot know what commonalities and features a machine is fixating on when it perceives a tree, for example, and correctly labels it as such. They are almost certainly not the features a human uses, such as a trunk, leaves, or the texture of bark, to distinguish between a tree and a telephone pole.

Because of the uncertainty about how the robot driver is identifying objects, researchers are concerned that it might come to the right answer, but for the wrong reasons. Mangharam used the example of a tilted stop sign. Under normal circumstances, the computer could recognize a stop sign correctly every time. However, if the sign were askew, that could throw off the features the computer uses to recognize it, and a car could drive right past it.

Scientists need to understand not just what the car does wrong, but also at what stage of the driving process the error happens. “Was the cause of the problem that it cannot perceive the world correctly and made a bad decision, or did it perceive the world correctly and make a bad decision?” Mangharam said.

Pennsylvania has passed legislation governing the testing of autonomous vehicles on the state’s roads, and Richards said she and her counterparts in other states frequently talk about what kind of regulatory framework might be needed as full autonomy comes closer to reality. A driver’s license test, as Mangharam proposes, is one possibility, though she said it would likely require cooperation between states and the federal government to decide on safety standards. The standards would have to consider that, at least initially, she said, robot drivers would probably share the road with many humans behind the wheels of other cars.
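To make the perceive-plan-act cycle concrete, here is a minimal Python sketch. It is only an illustration under broad assumptions: the names SensorFrame, WorldModel, perceive, plan, and act are hypothetical, and the hand-written rules stand in for the deep neural networks and planners an actual autonomous vehicle, including Penn’s test software, would use.

```python
# A minimal, hypothetical sketch of the perceive-plan-act cycle described
# above. Every name here (SensorFrame, WorldModel, perceive, plan, act) is
# an illustrative stand-in, not part of Penn's actual software; real
# perception is a deep neural network, not hand-written rules.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorFrame:
    """Stand-in for raw laser, camera, and infrared data."""
    stop_sign_detected: bool = False
    lead_car_speed_mph: Optional[float] = None

@dataclass
class WorldModel:
    """Step 1 output: what the system believes is around it."""
    stop_sign_ahead: bool
    slow_car_ahead: bool

@dataclass
class Command:
    """Step 3 output: brake, gas, and steering."""
    throttle: float  # 0.0 to 1.0
    brake: float     # 0.0 to 1.0
    steer: float     # negative = left, positive = right

def perceive(frame: SensorFrame) -> WorldModel:
    """Step 1: interpret the sensor data (the 'black box' in a real car)."""
    return WorldModel(
        stop_sign_ahead=frame.stop_sign_detected,
        slow_car_ahead=(frame.lead_car_speed_mph is not None
                        and frame.lead_car_speed_mph < 45),
    )

def plan(world: WorldModel) -> str:
    """Step 2: choose a maneuver based on the perceived world."""
    if world.stop_sign_ahead:
        return "stop"
    if world.slow_car_ahead:
        return "pass_slow_car"
    return "cruise"

def act(maneuver: str) -> Command:
    """Step 3: translate the plan into brake, gas, and steering commands."""
    if maneuver == "stop":
        return Command(throttle=0.0, brake=1.0, steer=0.0)
    if maneuver == "pass_slow_car":
        return Command(throttle=0.5, brake=0.0, steer=-0.1)
    return Command(throttle=0.6, brake=0.0, steer=0.0)

def drive_one_tick(frame: SensorFrame) -> Command:
    """One full cycle of the loop: perceive, then plan, then act."""
    return act(plan(perceive(frame)))

# A tilted stop sign the perception step fails to recognize: the planner
# then "reasonably" chooses to cruise given bad perception, the kind of
# error-stage distinction the researchers want to pin down.
missed_sign = SensorFrame(stop_sign_detected=False)
print(drive_one_tick(missed_sign))  # Command(throttle=0.6, brake=0.0, steer=0.0)
```

In a simulated test of the kind the article describes, researchers could feed a loop like this thousands of varied frames (bright sun, rain, an askew sign) and check whether a failure traces back to the perception step or to the planning step.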
“We all know that any incidents of hazards tied to autonomous or connected vehicles will set everybody back,” said Richards, who is convinced the technology will ultimately save lives. “We really want to proceed as cautiously as possible to maintain this positive momentum.”

A common question about driverless vehicles is how soon the general public will start using them. As much as he believes in autonomous technology, Mangharam is worried by a tendency in our society to leave regulatory oversight in the dust as we embrace a new toy. “I don’t think we should be focusing on a date,” he said, “until we reach some safety threshold.”

A scene from the “Grand Theft Auto” video game. CONTRIBUTED
