PC GAMER (US)

RAY OF HOPE

Nvidia’s new RTX technology, explained


For as long as videogames have existed on a 3D plane, we’ve had those pesky CG movies and their superior visuals lording it over our real-time virtual worlds. The ray tracing rendering technique has played a defining role in that discrepancy—it’s one of the best methods of simulating light behavior around, but prior to Nvidia’s RTX graphics card reveal, it simply hadn’t been possible to achieve in real time. Ray tracing’s been around for decades in movies, because movie CG takes its sweet time to render. At industry-leading effects houses such as Industrial Light & Magic, a 30-second scene might take three weeks to bake, while your poor old PC has to do everything—objects, surfaces, shadows, diffusion, reflections—before your monitor asks for the next frame. In the simplest terms possible, the more time afforded to a renderer, the more complex mathematics it can run to achieve a more realistic scene.

So when Nvidia announced that its new Turing GPUs were capable of running this impossible clusterfluff of arithmetic in real time, the graphics giant did so knowing it had just bridged the gap between prebaked CGI and gaming graphics. It used the term ‘Holy Grail’ a lot during the RTX 20-series card reveal, and you can understand why. Developers have been promising movie-level graphical fidelity in videogames for years now, most notably CD Projekt RED, whose magnum opus The Witcher 3… um, didn’t exactly fulfill that promise. You look wonderful, Geralt dear, but you’re not exactly making gamers rub their eyes in cartoon-like disbelief that what they’re seeing isn’t reality.

The light fantastic

What ray tracing does differentl­y starts with the virtual camera. In some engines it looks like a tennis ball on a stick, but it’s probably easier to imagine it as… well, a camera. One that can be positioned anywhere in the 3D plane, telling the renderer what’s in view and what isn’t—and by extension, what doesn’t need to be rendered at a given time. The developer positions it, defines its field of view and point of focus, and using those numbers the renderer knows exactly how much hard work it needs to put into realizing a scene, and where it needs to focus its attention.
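
To make that concrete, here’s a rough Python sketch of how those numbers translate into work: the camera’s position and field of view define exactly one ray per pixel, and anything no ray can reach never needs rendering. (The function name, parameters, and layout below are illustrative assumptions, not any real engine’s API.)

    # A rough sketch, assuming a pinhole camera looking down -z; names and
    # parameters are invented for illustration, not any engine's actual API.
    import math

    def generate_primary_rays(cam_pos, fov_degrees, width, height):
        """Yield one (origin, direction) ray per pixel of the image plane."""
        aspect = width / height
        # Half-width of an image plane placed at distance 1 from the camera,
        # derived from the horizontal field of view.
        half_w = math.tan(math.radians(fov_degrees) / 2)
        half_h = half_w / aspect
        for y in range(height):
            for x in range(width):
                # Map pixel centers onto the image plane.
                u = (2 * (x + 0.5) / width - 1) * half_w
                v = (1 - 2 * (y + 0.5) / height) * half_h
                # Normalize to get a unit direction; a real renderer would
                # also apply the camera's orientation matrix here.
                length = math.sqrt(u * u + v * v + 1)
                yield (cam_pos, (u / length, v / length, -1 / length))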

In old-fashioned real-time 3D rendering, much of the scene is prebaked to take the heavy lifting away from the GPU. Shader models and textures are all trying to simulate the appearance of real-life lighting, but they’re not simulating the process. They’re just doing all they can to replicate the end product. It’s an artistic approach to the problem of depicting light behavior, not a scientific one. Until recently, we just haven’t had the technology to model the process in a scientific way in real time.
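
To see the contrast, here’s a toy Python sketch of the scientific approach: one ray tested against one sphere, with the surface’s brightness computed from the actual geometry at the hit point via Lambert’s law, rather than read back from a pre-painted texture. (The scene and function are invented for illustration.)

    # A toy, hand-rolled example: brightness is computed from the geometry
    # each ray actually hits, not baked into a texture in advance.
    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def trace(origin, direction, center, radius, light_dir):
        """Return a brightness for one ray against one sphere, or None on a miss.

        Assumes direction and light_dir are unit vectors; light_dir points
        from the surface toward the light.
        """
        # Ray-sphere intersection: solve |origin + t*direction - center|^2 = radius^2.
        oc = tuple(o - ctr for o, ctr in zip(origin, center))
        b = 2 * dot(oc, direction)
        c = dot(oc, oc) - radius * radius
        disc = b * b - 4 * c
        if disc < 0:
            return None  # The ray misses the sphere entirely.
        t = (-b - math.sqrt(disc)) / 2
        if t < 0:
            return None  # The sphere is behind the camera.
        hit = tuple(o + t * d for o, d in zip(origin, direction))
        normal = tuple((h - ctr) / radius for h, ctr in zip(hit, center))
        # Lambert's law: how squarely the surface faces the light, computed
        # fresh for every single ray.
        return max(0.0, dot(normal, light_dir))

From that hit point, a full ray tracer fires further rays toward lights and off reflective surfaces, which is where the shadows, diffusion, and reflections mentioned earlier come from.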

EA’s upcoming WW2 multiplayer blockbuster is among the first to support the new tech.
