NVIDIA AND ME: PASCAL, VR, AND DEEP LEARNING

HWM (Malaysia) - SPECIAL

This year's NVIDIA GTC (GPU Technology Conference) in San Jose, California was marked with a cornucopia of announcements and presentations that revolved around the application of GPU technology in Virtual Reality (VR), Deep Learning, and of course (it just wouldn't be an NVIDIA event without a mention of it), Self-Driving Autonomous Vehicles.

And because this is essentially the year when everyone and everything is going VR, NVIDIA spared no expense in giving all attendees a chance to experience the various VR demos on the show floor, which we'll get to momentarily.

Traditionally, announcements of new products, projects, and future endeavors have been made during GTC (at least since NVIDIA began hosting its own conference at its home base).

While there was a lot to get excited about this year, almost every tech journalist at GTC 2016 was waiting for just one thing from NVIDIA CEO Jen-Hsun Huang: the mention of a single name, Pascal.

The GPU itself has long been in the works, having first been announced back when we attended GTC 2014. Compared to the Maxwell architecture (which NVIDIA has reportedly already discontinued), it goes without saying that Pascal is substantially more powerful and efficient than its predecessor. It utilizes a brand new High Bandwidth Memory format known as HBM2 and, as promised before, it comes with support for the new NVLink interconnect, which lets multiple GPUs in a single system communicate with one another directly, at far higher bandwidth than PCI Express allows.
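For readers who dabble in CUDA, here is a minimal sketch (our own illustration, not anything shown at GTC) of the standard peer-to-peer API that multi-GPU code already uses today; on a Pascal system with NVLink, the very same calls simply travel over the faster link instead of PCI Express.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    // This sketch assumes a machine with at least two GPUs.
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        printf("At least two GPUs are needed for this example.\n");
        return 0;
    }

    // Ask the runtime whether GPU 0 can read and write GPU 1's memory directly.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) {
        printf("Peer access between GPU 0 and GPU 1 is not supported here.\n");
        return 0;
    }

    // Enable direct GPU-to-GPU access; the transport (PCIe or NVLink) is
    // chosen by the hardware and driver, not by the application.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);

    // Allocate a 64MB buffer on each GPU and copy between them without
    // staging the data through system memory.
    const size_t bytes = 64u << 20;
    void *buf0 = nullptr, *buf1 = nullptr;
    cudaSetDevice(0);
    cudaMalloc(&buf0, bytes);
    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);

    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();
    printf("Copied %zu bytes directly from GPU 0 to GPU 1.\n", bytes);

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}
```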

But Pascal’s an­nounce­ment was met with both a mix of laugh­ter, cheer, and sheep­ish dis­ap­proval when Jen-Hsun un­veiled the ark in which NVIDIA’s new GPU ar­chi­tec­ture would be car­ried upon, and that, dear folks, is the Tesla P100. Yes, we know: It wasn’t a new GeForce card.

Built on a brand new 16nm FinFET process and armed with 16GB of HBM2 delivering 720GB/s of memory bandwidth, plus as many as 3,584 CUDA cores for single precision (FP32) and half that number for double precision (FP64), the Tesla P100 is NVIDIA's most powerful GPU yet, and by all counts the most powerful GPU in existence.
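As a quick sanity check on those numbers (the clock speed isn't quoted in this article; NVIDIA's announced boost clock of roughly 1.48GHz is assumed below), the core counts line up with the P100's advertised peak throughput, since each CUDA core can retire one fused multiply-add, i.e. two floating-point operations, per cycle:

$$3{,}584 \times 2 \times 1.48\,\text{GHz} \approx 10.6\ \text{TFLOPS (FP32)}, \qquad 1{,}792 \times 2 \times 1.48\,\text{GHz} \approx 5.3\ \text{TFLOPS (FP64)}$$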

The Tesla P100 will be used inside a Deep Learning supercomputer known as the DGX-1.

The DGX-1 supercomputer, in the flesh.

As always, the first keynote of GTC 2016 was delivered by Jen-Hsun Huang, CEO of NVIDIA.

Tesla P100, the first Pascal-powered GPU.
