GOING DEEPER
DLSS is getting smarter—what does that mean for local computational power?
It’s been around long enough now that we take it for granted. A new game comes out, dripping in graphics, and when we run it at 4K it occasionally dips to an unplayable 58fps. So we go into the video options, turn the ‘DLSS’ option on, and get back to walking through neon-lit streets in our game.
But DLSS is worth paying closer attention to. This tech is all but indistinguishable from magic. It upscales our games in real time, at the point the frame is being rendered by the GPU. It can not only take a 1080p image and produce a very convincing 4K version that doesn’t eat up resources, but also interpolate between rendered frames for a smoother visual experience and a higher overall framerate. Without it, developers wouldn’t be able to push ray tracing as far, or lean as hard on Unreal Engine 5’s impressive features like Nanite and Lumen. And its pace of evolution is fearsome.
Since its earliest implementations in Battlefield V and Metro Exodus in 2019 as DLSS 1.0, it’s had quite the glow-up. The benefits weren’t that obvious then: it had to be trained per game, with a supercomputer chewing through countless frames and learning what each one should look like at a higher resolution. The results had artifacts in them, the performance boost was modest, and there was no control over how much DLSS you were applying, just an on/off toggle.
DLSS was unofficially updated to a 1.9 release a few months later in August 2019, debuting in Remedy’s utterly superlative Control (go and play Control immediately). This interim version had been adapted to run on the general-purpose CUDA cores of the GeForce RTX 2000 architecture, rather than the dedicated Tensor cores that normally execute the network trained on Nvidia’s deep-learning supercomputer.
DLSS 2.0 followed in April 2020, and this time the big progress was a form of temporal anti-aliasing upsampling (TAAU) that had been trained generically rather than per-game. In other words, DLSS 2.0 knew how to smooth out any and all jaggies in any image. It didn’t need the supercomputer to study a specific game over trillions of frames first. What it meant for us at ground level was fewer artifacts, noticeably better fidelity, and more granular control over how we applied it in our games.
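Stripped of the neural network, the temporal-accumulation idea underneath TAAU can be sketched in a few lines of Python. This is a toy illustration only, not Nvidia’s implementation; the function and parameter names here are invented, and real TAAU adds sub-pixel jitter, motion-vector reprojection and history rejection on top:

```python
import numpy as np

def temporal_upscale(history, low_res, scale=2, alpha=0.1):
    """Toy temporal upscaler: blend each new low-res frame into a
    persistent high-res history buffer, so detail accumulates over
    many frames rather than being rendered in a single pass."""
    # Naive nearest-neighbour upscale of the incoming low-res frame
    upscaled = np.repeat(np.repeat(low_res, scale, axis=0),
                         scale, axis=1)
    # Exponential blend: the history buffer keeps most of what it has
    # learned from past frames, nudged toward the newest sample
    return (1 - alpha) * history + alpha * upscaled
```

Run over a stream of slightly offset frames, the history buffer converges toward a more detailed image than any single low-res frame contains; DLSS 2.0’s contribution was to replace the hand-tuned blending heuristics with a generically trained network.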
LEFT OUT IN THE COLD
In 2022 we saw version 3.0, and a year later we were treated to 3.5. With the not insignificant caveat that older RTX cards were left out in the cold (frame generation only runs on the insanely costly RTX 4000 boards), this is where DLSS started to get properly exciting. Also a bit spooky. It wasn’t just upscaling frames any more, or doing anti-aliasing grunt work, but generating entire extra frames, seemingly for free, without anything like the fidelity hit of previous versions. DLSS 3.0 introduced optical flow interpolation, a technique that takes two consecutive rendered frames from the pipeline, estimates how pixels move between them, and synthesizes an intermediate frame to slot in the middle. That’s how it’s able to send your FPS skyrocketing like a dodgy cryptocurrency.
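The interpolation step can be illustrated with a deliberately crude sketch: warp the earlier frame along per-pixel motion vectors, then blend toward the later frame. Assume `flow` is a per-pixel (dx, dy) motion field; the function name is ours, and DLSS’s actual pipeline uses a dedicated optical flow accelerator plus a neural network rather than this naive warp-and-blend:

```python
import numpy as np

def synthesize_midframe(frame_a, frame_b, flow, t=0.5):
    """Generate an in-between frame: pull pixels from frame_a a
    fraction t of the way along their motion vectors, then blend
    the warped result with frame_b."""
    h, w = frame_a.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Backward warp: where did this output pixel come from in frame_a?
    src_x = np.clip(xs - t * flow[..., 0], 0, w - 1).astype(int)
    src_y = np.clip(ys - t * flow[..., 1], 0, h - 1).astype(int)
    warped = frame_a[src_y, src_x]
    # Linear blend between the warped prediction and the later frame
    return ((1 - t) * warped + t * frame_b).astype(frame_a.dtype)
```

Even this toy version shows why the generated frame is nearly free: it is synthesized from data already sitting in the pipeline, with no geometry, shading or ray tracing done for it at all.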
Version 3.5’s advances target ray tracing: ray reconstruction, trained on five times more data than DLSS 3.0, uses an AI model to denoise ray-traced images in place of hand-tuned denoisers. It’s a significant cog in the engine of algorithms that makes path tracing possible in Cyberpunk 2077: Phantom Liberty. And when you look at the forgettable effect DLSS had on the way Battlefield V looked, and place that next to Phantom Liberty five years later, you see what a big deal it has been. For Nvidia, for the game industry, and for everyone with an Nvidia GPU.
Because during a five-year period characterized by semiconductor shortages, poor die yields and a constant struggle to squeeze more computational power out of die shrinks, it’s this deep learning process, trained offline on supercomputers and executed locally on our GPUs, that has delivered the great leaps forward.
It’s a bit trite to say that AI’s the future at this point. AI was the future in 2021. Today, we’re already bored of hearing about all the ways it’s changing every molecule of our familiar lives. With that said—the fact that DLSS has achieved what it has paints a curious picture of the future for PC graphics. What’s the point of buying these expensive slabs of silicon if they’re increasingly going to be used simply to interpret the work a supercomputer did somewhere in a data center?
And AI already has the answer to this: it’s designing the next generations of graphics cards. Nvidia’s chief scientist and senior vice president of research Bill Dally explained in 2022 that the company is using AI to optimize new RTX cards, including streamlining processes and improving energy efficiency. So AI’s building better cards that will use AI to render stuff for you.
Ultimately, software can’t replace hardware. However smart the deep-learning supercomputers get, we still need local silicon to run the trained network on every frame, in real time. That’s why DLSS is tied to specific GPUs rather than being a free, system-wide software trick.
So AI won’t replace graphics cards. But it’s already changed how they’re designed, and what their purpose actually is. No longer simply an engine room of local rendering, the modern GPU is part of a pipeline that relies increasingly on work done offline in a data center, delivering eerily detailed frames to our monitors at speeds that simply wouldn’t be possible through local rendering alone. Now if only it could draw hands properly…
AI’S BUILDING BETTER CARDS THAT WILL USE AI TO RENDER STUFF FOR YOU