Deep driving

Jamaica Gleaner - SPORTS - WRITTEN AND COMPILED BY Kareem LaTouche – TNS

A revolutionary AI technique is about to transform the self-driving car.

WHEN THE Google self-driving car project began about a decade ago, the company made a strategic decision to build its technology on expensive lidar and detailed mapping. Even today, Google’s self-driving technology still relies on those two pillars. While that approach is great up to a point—we have good algorithms for using lidar and camera data to localise a car on the map—it’s still not good enough. Driving on complicated, ever-changing streets involves perception and decision-making skills that are inherently uncertain.

Now an artificial-intelligence technology called deep learning is being used to address the problem. Rather than using the old method of hand-coded algorithms, we can now use systems that program themselves by learning from examples of how a system ought to behave in response to an input. Deep learning is now the best approach to most perception tasks, as well as to many low-level control tasks.
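The learning-from-examples idea can be sketched in a few lines: rather than hand-coding a steering rule, fit one from pairs of inputs and desired outputs. Everything here (the lane-offset input, the human corrections, the data itself) is synthetic and purely illustrative.

```python
import numpy as np

# Hypothetical training set: lateral offset from the lane centre (input)
# paired with a human driver's steering correction (desired output).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(200, 1))
y = -2.0 * x + 0.05 * rng.normal(size=(200, 1))

# "Programming by example": plain gradient descent on squared error
# recovers the rule from the data instead of from hand-written code.
w = np.zeros((1, 1))
for _ in range(500):
    grad = 2 * x.T @ (x @ w - y) / len(x)
    w -= 0.1 * grad

print(round(float(w[0, 0]), 1))  # close to -2.0, learned purely from examples
```

The same recipe, scaled up to millions of parameters and examples, is what replaces the hand-coded perception and control algorithms the article describes.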


A self-driving car needs a perception system to sense things that are moving (cars, people) as well as things that aren’t (lamp posts, curbs). Self-driving vehicles detect dynamic objects using sensors such as cameras, laser scanners and radar. Of these three, cameras are the cheapest, but they’re also used the least because it’s hard to translate images into detected objects. Using deep learning, we’re seeing dramatic improvements in the car’s ability to understand and make use of such images.
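One building block behind that improvement, sketched below, is the convolution: a small filter slid across the image to produce a feature map. The filter here is hand-set to respond to brightness edges on a toy 6x6 "image"; in a real deep network, many thousands of such filters are learned from labelled examples.

```python
import numpy as np

# Toy camera frame: dark on the left half, bright on the right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A tiny filter that fires on left-to-right brightness jumps.
kernel = np.array([[-1.0, 1.0]])

# Slide the filter over the image to build a feature map.
kh, kw = kernel.shape
out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)

print(out[0])  # only the position of the edge lights up
```

Stacks of learned filters like this one are what let a network turn raw pixels into detections of cars, people and lane markings.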

We’re also seeing significant gains from something called ‘multitask deep learning,’ in which a system trained simultaneously to detect lane markings, cars and pedestrians does better than three separate systems trained in isolation—since the single network can share information among the separate tasks.
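Structurally, the sharing works like this minimal sketch, in which the weights are random placeholders: one shared trunk feeds three small task heads, so during training the gradients from all three tasks would shape the same shared representation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Stand-in for fused sensor input (real systems use image feature maps).
features = rng.normal(size=16)

# Shared trunk: every task reads from this one representation.
W_trunk = rng.normal(size=(32, 16)) * 0.1
shared = relu(W_trunk @ features)

# One lightweight head per task; only these are task-specific.
heads = {
    "lanes": rng.normal(size=(4, 32)) * 0.1,
    "cars": rng.normal(size=(4, 32)) * 0.1,
    "pedestrians": rng.normal(size=(4, 32)) * 0.1,
}
outputs = {task: W @ shared for task, W in heads.items()}

for task, out in outputs.items():
    print(task, out.shape)
```

Because all three heads backpropagate into the same trunk during training, evidence that helps one task (say, finding lane geometry) is available to the others for free.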

Instead of relying entirely on a precomputed map, the car can use the map as one of many data streams, combining it with sensor inputs to help it make decisions. (A neural network that knows from map data where crosswalks are, for example, can more accurately detect pedestrians trying to cross than one that relies solely on images.)
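As a sketch, with illustrative placeholder weights: the map’s crosswalk flag is simply concatenated with the camera features before a pedestrian classifier, and a positive weight on that flag (standing in for what training would learn) raises the detection score near known crossings.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One data stream from the camera pipeline, one scalar cue from the map.
image_features = rng.normal(size=8)
w = np.append(rng.normal(size=8) * 0.1, 1.5)  # last weight: map cue

def pedestrian_score(near_crosswalk):
    # Fuse map knowledge with sensor input by simple concatenation.
    fused = np.concatenate([image_features, [near_crosswalk]])
    return sigmoid(w @ fused)

score_with = pedestrian_score(1.0)    # map says a crosswalk is here
score_without = pedestrian_score(0.0)
print(score_with > score_without)     # the map cue raises the score
```

The map stops being the single source of truth and becomes one weighted input among several, which is exactly the shift the paragraph describes.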

Deep learning can also alleviate one of the biggest issues identified by many who have ridden in a self-driving car—a ‘jerky’ feel to the driving style, which sometimes leads to motion sickness. A car trained using examples of human driving can offer a ride that feels more natural.
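The contrast can be sketched with a toy stopping manoeuvre: a hand-coded bang-bang controller against a policy fit to hypothetical smooth human demonstrations, comparing the largest step-to-step change in the control signal. The demonstration data is invented for illustration.

```python
import numpy as np

# Toy manoeuvre: the gap to a stop line closes from 2 m to 0 m.
gap = np.linspace(2.0, 0.0, 50)

# Hand-coded rule: full throttle until 1 m out, then full brake. Jerky.
bang_bang = np.where(gap > 1.0, 1.0, -1.0)

# Hypothetical human demonstrations ease off smoothly; "learning from
# examples of humans driving" is here just a least-squares fit.
human = np.clip(2.0 * gap - 1.0, -1.0, 1.0)
A = np.vstack([gap, np.ones_like(gap)]).T
coef, *_ = np.linalg.lstsq(A, human, rcond=None)
cloned = np.clip(A @ coef, -1.0, 1.0)

# "Jerkiness": the largest step-to-step change in the control signal.
def jerk(u):
    return np.max(np.abs(np.diff(u)))

print(jerk(bang_bang) > jerk(cloned))  # True: the cloned policy is smoother
```

The hand-coded rule slams from full throttle to full brake in a single step, while the policy fit to human examples spreads the same deceleration over many small steps.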

It’s still early. But just as deep learning did with image search and voice recognition, it is likely to forever change the course of self-driving cars.
