THE GRAPHICS OF TOMORROW
AMD looks to the mid-range; Nvidia still holds the high ground
2018 MAY HAVE SEEN the advent of ray-tracing for Nvidia, but Turing actually left a lot to be desired. The full-fat architecture provided a solid 30 percent performance increase, but because it cost over 70 percent more—thanks to the introduction of dedicated ray-tracing and DLSS hardware—that boost felt more than a little lackluster. New architecture designs and implementations always come with an increased cost to manufacture, but this was hard to swallow, especially with no titles available on which to test those new features.
But all is not lost. Competition drives down prices, motivates innovation, and brings a better solution for all of us—and it’s all down to AMD. Navi is the name of the game, and we should expect it some time in the second half of 2019. This 7nm GPU will be manufactured by TSMC, the same company pumping out AMD’s Zen 2 chips. We don’t know a huge amount about Navi just yet, beyond circulating rumors that it’s being positioned as a mid-range GPU, as opposed to anything capable of taking on the high end. This is interesting, because if it packs a big enough punch at the right price in the mid-range, it may very well pull down the pricing of Nvidia’s offerings at the top, as Nvidia reshuffles its entire arsenal to compensate. And as we know from experience, it’s the mid-range cards that hold the most clout when it comes to profit and market share.
That said, even if AMD doesn’t make a huge amount of bank in the world of the PC enthusiast, its Radeon brand is assuredly safe, thanks to its immense dominance in the world of console computing—and, boy, is that a big market to control. Does that mean this is the end of AMD at the high end? Probably. Can we expect Nvidia prices to keep climbing? Well, unless another manufacturer comes out with a complete curveball in terms of how we render in-game graphics, Nvidia’s dominance in this market looks assured.
NVIDIA’S MID-RANGE STAKE
That said, we’re still expecting a few releases from Nvidia some time this year: a Turing (or rebadged Pascal) variant of the RTX 2060, and we’re seemingly still missing a Titan of some description, too, although we’d be surprised if that cost any less than $1,900 for one extra GB of GDDR6, a few more CUDA cores, and some developer features. The 2060 is of particular interest, and much will come down to the nomenclature Nvidia decides to use. If it’s a GTX card, and Nvidia forsakes the RTX hardware (which would make the most sense, given that hardware’s lackluster performance, even at 1080p, on the 2070), then it could be quite the impressive value card.
Spec-wise, we’d expect around 1,920 CUDA cores, and potentially either 8GB or 6GB of GDDR5X, as opposed to GDDR6. That should perform quite nicely at around 60fps at 1440p, although this is, of course, all speculative.
We’ve already seen some preliminary figures leaked online, most notably in Final Fantasy XV’s benchmark, where a supposed RTX 2060 scored 2,589 points at 4K; in comparison, the GTX 1060 scored 1,985 points. As expected, that’s a performance increase of around 30 percent.
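For anyone who wants to sanity-check that figure, the arithmetic on the leaked scores is a simple relative-increase calculation (scores as quoted above; treat them as unverified leaks):

```python
# Leaked Final Fantasy XV 4K benchmark scores quoted in the text.
rtx_2060_score = 2589
gtx_1060_score = 1985

# Relative performance increase of the supposed RTX 2060 over the GTX 1060.
increase = (rtx_2060_score - gtx_1060_score) / gtx_1060_score
print(f"{increase:.1%}")  # prints "30.4%"
```

That works out to roughly the 30 percent generational uplift the article mentions.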
Although this hasn’t been confirmed officially, the card also retained its RTX nomenclature. If we did see RTX hardware make the cut, it’d likely be 24 RT cores and 192 Tensor cores—a little low for true ray-tracing or DLSS support. DLSS is arguably going to be more important at 1080p gaming than ray-tracing, but even so, there’s still no word on price.
Could we see an RTX 2060 without the heavyweight RTX hardware? A full-fat Turing die.