PC GAMER (US)

GOING DEEPER

DLSS is getting smarter—what does that mean for local computational power?

- Phil Iwaniuk

It’s been around for long enough now that we take it for granted. A new game comes out, dripping in graphics, and occasionally when we run it at 4K it dips to an unplayable 58fps. So we go into the video options, turn the ‘DLSS’ option on, and get back on with walking through neon-lit streets in our game.

But DLSS is worth paying closer attention to. This tech is indistinguishable from magic. It’s upscaling our games in real time, at the point that the frame is being rendered by the GPU. It’s able not just to take a 1080p image and produce a very convincing 4K version that doesn’t eat up resources, but also to interpolate between rendered frames for a smoother visual experience and a higher overall framerate. Without it, developers wouldn’t be able to push ray tracing as far, or lean so hard on Unreal Engine 5’s impressive features like Nanite and Lumen. And its pace of evolution is fearsome.

Since its earliest implementations in Battlefield V and Metro Exodus in 2019 as DLSS 1.0, it’s had quite the glow-up. The benefits weren’t that obvious then—it had to be trained per game, loading up countless frames in a supercomputer and learning what each frame should look like at a higher resolution. The results had artifacts in them and the performance boost was modest. There was no control over how much DLSS you were applying, just an on/off toggle.

DLSS unofficially updated to a 1.9 release a few months later in August 2019, and by the time Remedy’s utterly superlative Control came out (go and play Control immediately) it had been adapted to run on the CUDA cores of the GeForce RTX 2000 architecture instead of just the dedicated Tensor cores, which take information from Nvidia’s deep-learning supercomputer and interpret it.

DLSS 2.0 followed in April 2020, and this time the big progress was a form of TAAU (temporal anti-aliasing upsampling) that had been trained generically rather than per-game. In other words, DLSS 2.0 knew how to smooth out any and all jaggies in any image. It didn’t need the supercomputer to study a specific game over trillions of frames first. What it meant for us on ground level was fewer artifacts, noticeably better fidelity and more granular control over how we applied it in our games.
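If you’re curious what that temporal upsampling actually involves, here’s a minimal sketch of the accumulation step at the heart of any TAAU-style upscaler, written as illustrative Python rather than anything resembling Nvidia’s actual code. The function name, the fixed blend weight and the nearest-neighbour sampling are all simplifying assumptions; DLSS 2.0’s trick is replacing that hand-tuned blend with a trained network’s per-pixel judgment.

```python
import numpy as np

def taau_accumulate(history, low_res_frame, motion, jitter, alpha=0.1):
    """One conceptual TAAU step (hypothetical, not Nvidia's code):
    reproject last frame's high-res history along per-pixel motion
    vectors, then blend in the new jittered low-res sample."""
    h, w = history.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

    # Reproject: fetch where each output pixel was last frame.
    src_x = np.clip((xs - motion[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip((ys - motion[..., 1]).astype(int), 0, h - 1)
    reprojected = history[src_y, src_x]

    # Point-sample the new low-res frame at output resolution,
    # offset by the sub-pixel camera jitter it was rendered with.
    lo_h, lo_w = low_res_frame.shape[:2]
    lx = np.clip(((xs + jitter[0]) * lo_w / w).astype(int), 0, lo_w - 1)
    ly = np.clip(((ys + jitter[1]) * lo_h / h).astype(int), 0, lo_h - 1)
    upsampled = low_res_frame[ly, lx]

    # Fixed exponential blend: history carries accumulated detail,
    # the new frame corrects it. DLSS learns this weighting per pixel.
    return (1 - alpha) * reprojected + alpha * upsampled
```

Run over many jittered frames, that history buffer accumulates more unique samples per pixel than any single low-res frame contains, which is where the ‘free’ detail comes from.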

LEFT OUT IN THE COLD

In 2022 we saw version 3.0, and a year later we were treated to 3.5. With the not insignificant caveat that older RTX cards were left out in the cold and only the insanely costly RTX 4000 boards received the new goodies, this is where DLSS started to get properly exciting. Also a bit spooky. It wasn’t just upscaling frames any more, or doing anti-aliasing grunt work, but generating extra frames, seemingly for free, without anything like the same hit to fidelity that previous versions created. DLSS 3.0 introduced optical flow interpolation, a technique that takes two consecutive rendered frames from the pipeline, interpolates the differences between them, then generates a new frame in the middle. That’s how it’s able to send your FPS skyrocketing like a dodgy cryptocurrency.
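To make the idea concrete, here’s a rough sketch of interpolating an in-between frame from two rendered frames and a motion (optical flow) field, in illustrative Python. The gather-style warp and blend below are assumptions for the sketch; the real thing runs on the RTX 4000 series’ optical flow hardware and a trained network, not a dozen lines of NumPy.

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, flow, t=0.5):
    """Generate an in-between frame from two rendered frames.
    flow: (H, W, 2) per-pixel motion vectors from frame_a to frame_b.
    t: temporal position of the new frame (0.5 = halfway).
    A crude gather-style warp stands in for DLSS 3's learned model."""
    h, w = frame_a.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

    # Pull pixels from frame_a, rewound a fraction t along the flow.
    ax = np.clip((xs - t * flow[..., 0]).astype(int), 0, w - 1)
    ay = np.clip((ys - t * flow[..., 1]).astype(int), 0, h - 1)
    from_a = frame_a[ay, ax]

    # Pull pixels from frame_b, advanced the remaining (1 - t).
    bx = np.clip((xs + (1 - t) * flow[..., 0]).astype(int), 0, w - 1)
    by = np.clip((ys + (1 - t) * flow[..., 1]).astype(int), 0, h - 1)
    from_b = frame_b[by, bx]

    # Blend both estimates, favouring whichever frame is closer in time.
    return (1 - t) * from_a + t * from_b
```

The catch, and the reason the learned version matters, is occlusion: pixels visible in only one of the two frames, which a naive warp like this smears into artifacts.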

Version 3.5’s advances target ray tracing, using five times more training data than DLSS 3.0 to achieve a denoising effect on ray-traced images via ray reconstruction. It’s a significant cog in the engine of algorithms that makes path tracing possible in Cyberpunk 2077: Phantom Liberty. And when you look at the forgettable effect DLSS had on the way Battlefield V looked and place that next to Phantom Liberty five years later, you see what a big deal it has been. For Nvidia, for the game industry, and for everyone with an Nvidia GPU.
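For a feel of the problem ray reconstruction is solving, here’s a toy denoiser in illustrative Python: a simple normal-guided spatial filter standing in for the trained network. Everything here, from the function name to the Gaussian weighting, is an assumption for the sketch, not how DLSS 3.5 actually works.

```python
import numpy as np

def denoise_rt(noisy, normals, radius=2, sigma_n=0.3):
    """Stand-in spatial denoiser for sparse ray-traced lighting.
    Averages each pixel with neighbours whose surface normals agree,
    smoothing lighting noise without bleeding across geometry edges.
    DLSS 3.5 replaces hand-tuned filters like this with a network."""
    h, w, _ = noisy.shape
    acc = np.zeros_like(noisy, dtype=np.float64)
    weight = np.zeros((h, w, 1))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(noisy, (dy, dx), axis=(0, 1))
            n_shift = np.roll(normals, (dy, dx), axis=(0, 1))
            # Weight each neighbour by how closely its normal matches.
            diff = np.sum((normals - n_shift) ** 2, axis=-1, keepdims=True)
            wgt = np.exp(-diff / (2 * sigma_n ** 2))
            acc += wgt * shifted
            weight += wgt
    return acc / weight
```

Path tracing fires so few rays per pixel that the raw image is a speckled mess; the denoiser’s job, learned or not, is to turn those sparse samples into a stable picture.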

Because during a five-year period characterized by semiconductor shortages, poor die yields and a constant struggle to achieve more computational power through die shrinks, it’s been this largely offline, deep learning-based process where we’ve seen the great leaps forward.

It’s a bit trite to say that AI’s the future at this point. AI was the future in 2021. Today, we’re already bored of hearing about all the ways it’s changing every molecule of our familiar lives. With that said—the fact that DLSS has achieved what it has paints a curious picture of the future for PC graphics. What’s the point of buying these expensive slabs of silicon if they’re increasingly going to be used simply to interpret the work a supercomputer did somewhere in a data center?

And AI already has the answer to this: it’s designing the next generations of graphics cards. Nvidia’s chief scientist and senior vice president of research Bill Dally explained in 2022 that the company is using AI to optimize new RTX cards, including streamlining processes and improving energy efficiency. So AI’s building better cards that will use AI to render stuff for you.

Ultimately, software can’t replace hardware. However smart the deep learning supercomputers are, we’ll need something local that deciphers the huge amount of information they provide. That’s why DLSS isn’t a system-wide technique.

So AI won’t replace graphics cards. But it’s already changed how they’re designed, and what their purpose actually is. No longer an engine room of local rendering, modern GPUs are part of a pipeline that relies increasingly on offline rendering work to deliver eerily detailed frames to our monitors at speeds that simply haven’t been possible with local rendering alone. Now if only it could draw hands properly…

AI’S BUILDING BETTER CARDS THAT WILL USE AI TO RENDER STUFF FOR YOU

The future of graphics processing probably lies offline, in data centers.
TOP LEFT: Nvidia’s migrated anti-aliasing to AI too, as of DLSS 2.0.
TOP: The supercomputers where DLSS is trained. Secret agents hiding nearby, presumably.
