RAY OF HOPE

Nvidia’s new RTX technology, explained

PC GAMER (US) - TECH REPORT

For as long as videogames have existed on a 3D plane, we’ve had those pesky CG movies and their superior visuals lording it over our real-time virtual worlds. The ray tracing rendering technique has played a defining role in that discrepancy—it’s one of the best methods around for simulating light behavior but, prior to Nvidia’s RTX graphics card reveal, it simply hadn’t been possible to achieve in real time. Ray tracing’s been around for decades in movies, because movie CG takes its sweet time to render. At industry-leading effects houses such as Industrial Light & Magic, a 30-second scene might take three weeks to bake, while your poor old PC has to do everything—objects, surfaces, shadows, diffusion, reflections—before your monitor asks for the next frame. That’s roughly 40 minutes per frame at film’s 24fps on one side, and less than 17 milliseconds at 60fps on the other. In the simplest terms possible, the more time afforded to a renderer, the more complex mathematics it can run to achieve a more realistic scene.

So when Nvidia announced that its new Turing GPUs were capable of running this impossible clusterfluff of arithmetic in real time, the graphics giant did so knowing it had just bridged the gap between prebaked CGI and gaming graphics. It used the term ‘Holy Grail’ a lot during the RTX 20-series card reveal, and you can understand why. Developers have been promising movie-level graphical fidelity in videogames for years now, most notably CD Projekt RED, whose magnum opus The Witcher 3… um, didn’t exactly fulfill that promise. You look wonderful, Geralt dear, but you’re not exactly making gamers rub their eyes in cartoon-like disbelief that what they’re seeing isn’t reality.

The light fantastic

What ray tracing does differently starts with the virtual camera. In some engines it looks like a tennis ball on a stick, but it’s probably easier to imagine it as… well, a camera. One that can be positioned anywhere in the 3D plane, telling the renderer what’s in view and what isn’t—and by extension, what doesn’t need to be rendered at a given time. The developer positions it, defines its field of view and point of focus, and using those numbers the renderer knows exactly how much hard work it needs to put into realizing a scene, and where it needs to focus its attention.
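
In practice, that camera boils down to a handful of numbers. Here’s a minimal sketch in Python (the function and variable names are made up for illustration), assuming a camera sitting at the origin and looking down the negative z-axis: one ray per pixel, aimed through an image plane sized by the field of view.

    import math

    def primary_rays(cam_pos, fov_deg, width, height):
        # Fire one ray per pixel, from the camera through a virtual
        # image plane one unit in front of it.
        aspect = width / height
        scale = math.tan(math.radians(fov_deg) / 2)
        for y in range(height):
            for x in range(width):
                # Map the pixel center to [-1, 1] on the image plane,
                # stretched by aspect ratio and field of view.
                px = (2 * (x + 0.5) / width - 1) * aspect * scale
                py = (1 - 2 * (y + 0.5) / height) * scale
                # Normalize the direction; the camera looks down -z.
                length = math.sqrt(px * px + py * py + 1)
                yield (x, y, cam_pos, (px / length, py / length, -1 / length))

Every pixel becomes a ray, and every ray then has to be tested against the scene, which is why the camera’s position, resolution, and field of view dictate exactly how much work the renderer signs up for.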

In old-fashioned real-time 3D rendering, much of the scene is prebaked to take the heavy lifting away from the GPU. Shader models and textures are all trying to simulate the appearance of real-life lighting, but they’re not simulating the process. They’re just doing all they can to replicate the end product. It’s an artistic approach to the problem of depicting light behavior, not a scientific one. Until recently, we just haven’t had the technology to model the process in a scientific way in real time.
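
The contrast is easy to show in code. A sketch in the same vein as above, treating color as a single brightness value to keep things short, and with a hypothetical scene.occluded query standing in for whatever intersection test an engine actually exposes: the first function fakes the look of diffuse lighting with a formula, the second simulates the light by asking the scene whether anything blocks it.

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    def shade_baked(normal, light_dir, albedo):
        # The artistic approach: a Lambert diffuse formula tuned to look
        # like a lit surface. No geometry is consulted; any shadows come
        # from prebaked lightmaps or shadow maps.
        return albedo * max(0.0, dot(normal, light_dir))

    def shade_traced(point, normal, light_pos, albedo, scene):
        # The scientific approach: trace a shadow ray from the surface
        # point toward the light and test it against the actual scene.
        v = tuple(l - p for l, p in zip(light_pos, point))
        dist = dot(v, v) ** 0.5
        to_light = tuple(c / dist for c in v)
        if scene.occluded(point, to_light, dist):  # hypothetical query
            return 0.0  # something blocks the light: this point is in shadow
        return albedo * max(0.0, dot(normal, to_light))

The traced version is the expensive one: every shadow ray means an intersection test against the whole scene, and that per-ray arithmetic is precisely the workload Turing’s dedicated RT cores exist to accelerate.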

EA’s upcoming WW2 multiplayer blockbuster is among the first to support the new tech.
