System News
Will we finally get the ray tracing future we’ve desired for so long? Mark Williams ponders the RTX question.
Rasterization has underpinned 3D game rendering since seemingly time immemorial. Ray tracing has been around for almost as long, but never in real time, because it is exceptionally compute intensive. So much so that, in the past, only the likes of Pixar, armed with render farms of supercomputer class, could produce movie-quality scenes using the technique.
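To get a feel for why, here's a back-of-envelope sketch in Python. The sphere test is the standard quadratic ray-sphere intersection; the scene (a single hard-coded sphere) and the 1080p/60fps figures are illustrative assumptions, not anything from a real renderer, which would add shadow, reflection and bounce rays on top of the primary ray count shown here.

```python
# Illustrative only: naive ray tracing fires at least one ray per pixel
# per frame, and each ray must be tested against scene geometry.

def hit_sphere(origin, direction, centre, radius):
    """Standard quadratic ray-sphere test: True if the ray intersects."""
    oc = tuple(o - c for o, c in zip(origin, centre))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4.0 * a * c >= 0.0  # discriminant test

# One primary ray per pixel at 1080p, 60 frames per second:
width, height, fps = 1920, 1080, 60
print(width * height * fps)  # → 124416000 primary rays per second
```

Over a hundred million intersection tests per second before a single shadow or reflection ray is spawned, which is why dedicated hardware acceleration matters.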
It wasn’t until 2008, when Intel’s Project Larrabee was demonstrated running Quake 4 with ray-traced lighting in real time, that consumers had a taste of ray tracing possibly reaching their humble home PCs. Unfortunately, Larrabee never made it to mainstream consumers, and the hopes and dreams of ray tracing seemingly disappeared with it for another decade.
Fast forward to today, and Nvidia, with its new 2000-series graphics cards, is so confident of a ray-tracing future that it has renamed its venerable GTX brand to RTX, the R standing for ray tracing, of course. A sizeable chunk of silicon in Nvidia’s new Turing architecture is dedicated solely to ray-tracing (RT) work. With RT cores embedded in every SM of the architecture, Turing really is designed from the ground up to accelerate this function in hardware.
Nvidia’s ray-tracing technique at this stage appears to be a proprietary implementation, with no announcement of an open API, framework or standard for the ray-tracing tech used in the RTX line-up. It’ll be interesting to see whether AMD can follow suit with a compatible offering, or whether 3D engines will need a separate code path for its hardware. If AMD has to forge its own path, we could see a new front open in the GPU wars. And who knows what Intel will bring with its discrete card offering due in 2020.
Then there’s the big performance question. As with any new graphics technology that demands new hardware (think unified shaders or PhysX), the first implementation is usually just enough for game developers to experiment with, but not typically fast enough to run the results competently. It’s usually the second hardware iteration where the new technology really takes off and becomes more than just a novelty.
With any luck, ray tracing this time around will be more than just a novelty, running quickly enough to encourage early-adopter uptake and kickstart the revolution GPU makers have been salivating over for so very long.
Setting the ray-tracing talk aside for a moment, Nvidia has been very cagey and careful to date in how it presents the performance of the new RTX 2000 series relative to the GTX 1000 series. Nvidia did not release any apples-to-apples game benchmarks at the Turing launch, just vague bar graphs with no scale or numbers on them. Claims of anywhere from 13% to 50% faster were circulated prior to the media embargo.
But our independent performance tests and Chris’ full review are here in this issue for you to soak up the numbers, along with his insightful opinion. Nvidia has at least improved the bread-and-butter rasterization performance enough to make the 2000 series worthy in its own right for gamers and enthusiasts over the 1000 series. All the ray-tracing and AI compute hardware that’s also built in can then be the cream on top, fostering early adoption of all this new tech and bringing games into the next generation.