Business World

Neuroscience start-up teaches robot drivers to think like humans


ROBOT CARS make for annoying drivers.

Relative to human motorists, the driverless vehicles now undergoing testing on public roads are overly cautious, maddeningly slow, and prone to abrupt halts or bizarre paralysis caused by bikers, joggers, crosswalks or anything else that doesn’t fit within the neat confines of binary robot brains. Self-driving companies are well aware of the problem, but there’s not much they can do at this point. Tweaking the algorithms to produce a smoother ride would compromise safety, undercutting one of the most-often heralded justifications for the technology.

It was just this kind of tuning to minimize excessive braking that led to a fatal crash involving an Uber Technologies Inc. autonomous vehicle in March, according to federal investigators. The company has yet to resume public testing of self-driving cars since shutting down operations in Arizona following the crash.

If driverless cars can’t be safely programmed to mimic risk-taking human drivers, perhaps they can be taught to better understand the way humans act. That’s the goal of Perceptive Automata, a Boston-based startup applying research techniques from neuroscience and psychology to give automated vehicles more human-like intuition on the road: Can software be taught to anticipate human behavior?

“We think about what that other person is doing or has the intent to do,” said Ann Cheng, a senior investment manager at Hyundai Cradle, the South Korean automaker’s venture arm and one of the investors that just helped Perceptive Automata raise $16 million. Toyota Motor Corp. is also backing the two-year-old startup founded by researchers and professors at Harvard University and Massachusetts Institute of Technology.

“We see a lot of AI [Artificial Intelligence] companies working on more classical problems, like object detection [or] object classification,” Cheng said. “Perceptive is trying to go one layer deeper — what we do intuitively already.”

This predictive aspect of self-driving tech “was either misunderstood or completely underestimated” in the early stages of autonomous development, said Jim Adler, the managing director of Toyota AI Ventures.

With Alphabet Inc.’s Waymo planning to roll out an autonomous taxi service to paying customers in the Phoenix area later this year, and General Motors Co.’s driverless unit racing to deploy a ride-hailing business in 2019, the shortcomings of robot cars interacting with humans are coming under increased scrutiny. Some experts have advocated for education campaigns to train pedestrians to be more mindful of autonomous vehicles. Startups and global automakers are busy testing external display screens to telegraph the intent of a robotic car to bystanders.

But no one believes that will be enough to make autonomous cars move seamlessly among human drivers. For that, the car needs to be able to decipher intent by reading body language and understanding social norms. Perceptive Automata is trying to teach machines to predict human behavior by modeling how humans do it.

Sam Anthony, chief technology officer at Perceptive and a former hacker with a PhD in cognition and brain behavior from Harvard, developed a way to take image recognition tests used in psychology and use them to train so-called neural networks, a kind of machine learning based loosely on how the human brain works. His startup has drafted hundreds of people across diverse age ranges, driving experiences and locales to look at thousands of clips or images from street life — pedestrians chatting on a corner, a cyclist looking at his phone — and decide what they’re doing, or about to do. All those responses then get fed into the neural network, or computer brain, until it has a reference library it can call on to recognize what’s happening in real life situations.
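The article does not describe Perceptive's pipeline in detail, but the core idea it sketches — many annotators judging the same clip, with the spread of their answers becoming the training signal — can be illustrated with a minimal, hypothetical example. The intent labels (`will_cross`, `waiting`) and the aggregation step below are assumptions for illustration, not Perceptive's actual method:

```python
from collections import Counter

def soft_label(responses):
    """Aggregate one clip's annotator judgments into a probability
    distribution over intents. Disagreement among annotators is kept
    as graded uncertainty rather than collapsed to a single answer --
    this distribution would serve as the soft training target for a
    neural network."""
    counts = Counter(responses)
    total = sum(counts.values())
    return {intent: n / total for intent, n in counts.items()}

# Hypothetical annotations for a single clip of a pedestrian at a curb:
# three of four annotators judged that the person is about to cross.
clip_responses = ["will_cross", "will_cross", "will_cross", "waiting"]
target = soft_label(clip_responses)
print(target)  # {'will_cross': 0.75, 'waiting': 0.25}
```

Training against such soft targets, rather than hard yes/no labels, is one plausible way a model could learn that some situations look genuinely ambiguous to people — the kind of graded human intuition the company says it is after.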

Perceptive has found it’s important to incorporate regional differences, since jaywalking is commonplace in New York City and virtually non-existent elsewhere. “No one jaywalks in Tokyo, I’ve never seen it,” says Adler of Toyota. “These social mores and norms of how our culture will evolve and how different cultures will evolve with this tech is incredibly fascinating and also incredibly complex.”

Perceptive is working with startups, suppliers and automakers in the US, Europe, and Asia, although it won’t specify which. The company is hoping to have its technology integrated into mass production cars with self-driving features as soon as 2021. Even at the level of partial autonomy, with features such as lane-keeping and hands-off highway driving, deciphering human intent is relevant.

Autonomous vehicles “are going to be slow and clunky and miserable unless they can understand how to deal with humans in a complex environment,” said Mike Ramsey, an analyst at Gartner. Still, he cautioned that Perceptive’s undertaking “is exceptionally difficult.”

Even if Perceptive proves capable of doing what it claims, Ramsey said, it may also surface fresh ethical questions about outsourcing life or death decisions to machines. Because the startup is going beyond object identification to mimicking human intuition, it could be liable for programming the wrong decision if an error occurs.

It’s also not the only company working on this problem. It’s reasonable to assume that major players like Waymo, GM’s Cruise LLC, and Zoox Inc. are trying to solve it internally, said Sasha Ostojic, former head of engineerin­g at Cruise who is now a venture investor at Playground Global in Silicon Valley.

Until anyone makes major headway, however, be prepared to curb your road rage while stuck behind a robot car that drives like a grandma. “The more responsible people in the AV industry optimize for safety rather than comfort,” Ostojic said.
