Autocar

USING AI TO MAKE AUTONOMOUS VEHICLES REACT LIKE HUMANS


IT’S PROBABLY FAIR to say that the concept of self-driving cars is one of the most controversial subjects in the automotive industry. Although the idea is that autonomy equals safety, with an ultimate goal of accident-free roads, the notion of a car being capable of the same split-second decision-making as a human, from both a safety and an ethical point of view, seems far-fetched.

More knowledge of how robot cars can be made to work would help, and there’s an insight into that from a project called D-risk. The project aims to compile the world’s largest library of real-life near misses, known in the world of autonomous vehicle development as ‘edge cases’. For autonomous vehicles to cope with the weirdest of scenarios – the kind a human takes at face value and deals with on the fly – they will need prior experience of at least some of them.

This is where the difference between artificial intelligence (AI) and merely programming a computer to perform a series of tasks is important. AI involves machine learning – something that a straightforward computer, even a very powerful one, isn’t able to do. If a bog-standard computer is programmed with a series of scenarios, it can recognise them when they crop up, but if a scenario changes slightly, the computer might fail to recognise it. An artificially intelligent machine (such as an autonomous vehicle), on the other hand, can take what it has learned and figure out how to cope with a similar but not identical scenario in much the same way as a human driver. The more diverse the information an AI system is fed, the more capable it will be of adapting to slightly different scenarios.
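That distinction can be illustrated with a toy sketch (this is an assumption-laden illustration of the general idea, not the D-risk system or any real vehicle software): a rule-based lookup only recognises scenarios it was explicitly given, while even the simplest learned model – here, nearest-neighbour matching on made-up scenario features – responds sensibly to a similar-but-new case.

```python
# Toy illustration only. Scenarios are invented feature tuples:
# (obstacle_size, obstacle_speed, lane_position).
known_scenarios = {
    (3.0, 0.0, 1.0): "brake",   # stationary animal in the lane
    (0.5, 0.0, 0.0): "swerve",  # small debris at the lane edge
}

def rule_based(scenario):
    # Exact-match programming: anything not pre-listed is unknown.
    return known_scenarios.get(scenario, "unknown")

def learned(scenario):
    # Learning-style generalisation: respond as for the most
    # similar scenario previously seen.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(known_scenarios, key=lambda s: dist(s, scenario))
    return known_scenarios[nearest]

novel = (2.8, 0.1, 1.0)  # slightly different from the known animal case
print(rule_based(novel))  # -> "unknown"
print(learned(novel))     # -> "brake"
```

Real systems use far richer models, but the principle is the same: the response is generalised from experience rather than looked up from a fixed list.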

Run by urban innovation firm DG Cities, together with a number of partners, including Imperial College London, the D-risk project has been running a survey throughout January and February this year. The survey is a call to road users, including cyclists and pedestrians, to submit examples of edge cases they’ve experienced or witnessed to feed the development of autonomous vehicle AI.

The call for submissions went out on social media, and the wackier the better.

Autonomous vehicles must be capable of distinguishing between a picture of a cow on the back of a bus and the real thing.

One driver told of how they swerved to avoid a cow but then nearly hit a table in the fast lane. It’s that combination of events – each not necessarily unusual on its own, such as an animal in the carriageway or something that has fallen off the back of a lorry – that might confuse an untrained machine were they to happen simultaneously.

Once the data is compiled, it will be cleaned up both manually and by computer. Mischievous made-up scenarios created by overactive imaginations won’t harm the data and could even contribute to it, so long as they’re deemed plausible. Once the survey is complete, the data will be used to develop AI that will power a virtual driving test for autonomous vehicles to ensure they’re safe enough to drive solo on public roads.

