The Denver Post - TECH KNOW - By Brian Fung

Self-driving cars. They’re the future of transportation — and they’re getting smarter all the time. Thanks to advances in software and artificial intelligence, these machines are now able to distinguish between cars and cyclists, or between pedestrians and your pet.

Many can now “see” just like you can, picking out objects and obstacles in the road ahead. All that tech could eventually save lives, helping to prevent the 95 percent of car accidents that safety regulators estimate are caused by human error each year.

But none of this would be possible without a piece of hardware many of us take for granted in our own home computers. It’s a technology that traces back to the earliest days of modern personal computing, one that people tend to associate more with “World of Warcraft” than newfangled widgets on wheels.

We’re talking about the graphics processor.

In mainstream PCs, the graphics processor — often found on a graphics card — is what allows computers to draw all those pixels and polygons that make up today’s photorealistic video games. But as these processors have grown ever more powerful, engineers have discovered their utility in all sorts of nongaming applications. Graphics processing units — or GPUs — have transcended their origins to become entire computers in their own right.

“(The GPU) is now powering everything from games to the visual effects you see in Hollywood films,” said Danny Shapiro, the senior director of automotive at Nvidia, which accounts for roughly 75 percent of the $7.8 billion market for GPUs. GPUs, said Shapiro, are central to “professional graphics, for automakers that are designing cars, to doctors and researchers that are searching for cures for cancer and using medical imaging techniques.”

It’s a sign of how big the GPU business has grown that some 200 other companies work with Nvidia’s automotive unit alone. GPUs are even part of the brains behind artificial intelligence, appearing in technologies like the Amazon Echo, which converts natural human speech into data that machines can understand.

(Amazon chief executive Jeffrey P. Bezos also owns The Washington Post.)

“The combination of GPUs and a CPU are now available that can accelerate analytics, deep learning, high-performance computing, and scientific simulations,” Chris Niven, research director for oil and gas issues at the research firm IDC, told ZDNet last month.

To understand why GPUs have become so prevalent in next-generation technologies, we have to talk about how they work.

Traditionally, the brain in most PCs has been the CPU, or central processing unit. These chips are made by companies such as Intel; Apple has also been making its own proprietary chips for the iPad and iPhone. The distinguishing feature of this technology is that it’s designed to run calculations serially, one after another, very quickly. The rise of dual- and quad-core CPUs has expanded their capabilities, allowing more computations to occur simultaneously.

These chips are still ideal for machines that only need to run a few processes at the same time. But when it comes to technology like self-driving cars, where the computers are constantly receiving and digesting information, multitasking becomes that much more important. And that’s where GPUs excel.
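The contrast can be sketched in a few lines of illustrative Python — a toy example, not real automotive code, with a made-up threshold rule standing in for actual object recognition. A serial, CPU-style loop handles one simulated sensor reading at a time; the batched version conceptually maps the same operation across the whole set at once, the way a GPU would assign one core per element.

```python
def classify(reading):
    # Hypothetical rule: a reading above the threshold counts as an obstacle.
    return "obstacle" if reading > 0.5 else "clear"

readings = [0.9, 0.2, 0.7, 0.1]  # simulated sensor values

# Serial, CPU-style: process one reading after another.
serial_results = []
for r in readings:
    serial_results.append(classify(r))

# Batched, GPU-style (conceptually): apply the same operation to the
# whole batch in one step. On a real GPU, each element would be handled
# by its own core at the same moment.
batched_results = list(map(classify, readings))

assert serial_results == batched_results
print(batched_results)  # ['obstacle', 'clear', 'obstacle', 'clear']
```

Both paths compute the same answer; the difference is that the batched form exposes the work as many independent pieces, which is exactly the shape of problem a GPU is built for.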

Computer researchers began to discover the potential behind GPUs as far back as the late 1990s, when the market was awash with dozens of competing chip makers. Their products found their way into desktop PCs and gaming consoles like the Sega Dreamcast and Xbox, enabling consumers to experience groundbreaking titles like “Half-Life,” “Quake” and “Halo.” By simultaneously and efficiently controlling the generation of shapes on a screen, GPUs helped bring first vector graphics, and then individual pixels, to life.

By the early 2000s, GPUs were being pitted directly against CPUs in computing tests, with some results showing enormous promise for graphics processors.

“Researchers at universities realized that, ‘Hey, here is this low-cost processor that we can apply to scientific and mathematical applications and get some acceleration for cheap,’” said Jon Peddie, president of Jon Peddie Research, an industry analysis firm.

One paper in 2002 found that compared to CPUs, “the graphics hardware allows us to establish a high-speed custom data processing pipeline. Once the pipeline is set up, data can be streamed through with devastating efficiency.”

The best GPUs on the market today come with as many as 5,000 cores, said Peddie, not just two or four or eight as with CPUs. While CPUs can process smaller amounts of information very quickly, the advantage of GPUs has to do with scale — processing lots of information at the same time.
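A back-of-the-envelope sketch makes the scale concrete. The 5,000-core figure comes from Peddie above; the Full HD frame size is an assumption for illustration, not a number from the article. Dividing one camera frame’s pixels across CPU-like versus GPU-like core counts shows how sharply the per-core workload shrinks:

```python
# One assumed Full HD camera frame: 1920 x 1080 pixels.
pixels_per_frame = 1920 * 1080  # 2,073,600 pixels

# CPU-like core counts versus a GPU-like count (Peddie's 5,000).
for cores in (4, 8, 5000):
    per_core = pixels_per_frame // cores
    print(f"{cores:>5} cores -> ~{per_core:,} pixels each")
```

With four cores, each core is responsible for over half a million pixels per frame; with 5,000 cores, each handles only a few hundred — the difference between a handful of fast workers and an army of slower ones all moving at once.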

This is why self-driving cars find GPUs so useful. Through optical cameras and laser and radar sensors, cars take in their surroundings with many measurements per second.

“It’s 30 pictures every second,” Shapiro said. “Each picture, a single frame, is made up of pixels. Each of these pixels or dots is a numerical value that says, ‘What is the color of the light there?’ It’s just a bunch of numbers.”
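Shapiro’s point can be made concrete with a toy example: a tiny, made-up 3x3 grayscale “frame” in which each pixel is nothing but a number giving the brightness of the light at that spot. (A real camera frame holds millions of such numbers, arriving 30 times a second.)

```python
# A toy 3x3 grayscale frame: 0 is black, 255 is white.
frame = [
    [  0, 128, 255],
    [ 64, 200,  32],
    [255, 255,   0],
]

# The car's software sees only these numbers. Simple statistics like
# these are the raw material the recognition pipeline works from.
flat = [pixel for row in frame for pixel in row]
print("pixels:", len(flat))
print("brightest:", max(flat), "darkest:", min(flat))
```

Every operation a GPU performs on a frame — brightening, edge detection, feeding a neural network — is ultimately arithmetic on a grid of numbers like this one, applied to millions of pixels in parallel.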

[Photo: An Uber driverless car waits in traffic during a test drive in San Francisco. Eric Risberg, Associated Press file]

[Photo: A driver presents a Cruising Chauffeur, a hands-free, self-driving system designed for motorways, during a media event showcasing new automotive technologies on June 20 in Hanover, Germany. Alexander Koerner, Getty Images]
