Maximum PC

INTERFACE CONNECTION ADVANCEMENTS

Where will the world of interconnectivity take us?


RUMORS ABOUND that PCIe 4.0 is coming to the masses. Perhaps not in the early part of the new year, but you can bet it’ll be here by the end of 2019. Announced in 2016, and ratified by PCI-SIG (a super-conglomerate of over 900 companies, including Intel, AMD, Dell, HP, and IBM) in the summer of 2017, PCIe 4.0 should already have been in production and on mainstream boards sometime in 2018. However, with a lack of support on the processor side of things, and seemingly no pressing need for the connection standard, implementation was delayed and progress on introducing the spec stagnated. We just didn’t need it. Until now.

If Threadripper and Ryzen 3 follow the same or a similar specification as that of EPYC 2, both processor lineups should see support for PCIe 4.0 included as standard, with Intel likely to follow suit with its 10nm parts. And because it’s backward compatible, not including it on the next generation of motherboards would simply be ridiculous.

What’s so great about PCIe 4.0? For starters, it doubles the per-pin raw bit rate, increasing from 8GT/s to 16GT/s. It also brings a reduction in overall system latency, along with improved flexibility and lane-width configurations for developers interested in tapping it for lower-power applications and devices.

The big one for us, however, is that increased data rate. 16GT/s per lane should enable PCIe x4 devices to operate at speeds of up to about 6.4GB/s, versus the 3.2GB/s cap we’re at now, giving us a lot more headroom for flash storage devices, and hopefully improving CPU-GPU communication and performance in multi-card setups, especially at higher resolutions. The latter is less of a problem for Nvidia, with its NVLink, of course, but for AMD, it’s a pretty big deal, especially if the company gets back into the high-end GPU game with Navi.

PCI-SIG has also teased us with PCIe 5.0. Although complete specifications are sparse right now, we do know that it will be another doubling of the bandwidth limit, meaning 32GT/s per lane, or a 12.8GB/s theoretical maximum for transfer speeds on flash devices, and a staggering 128GB/s of bandwidth for something like an x16 GPU slot.
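For the curious, here’s a minimal back-of-the-envelope sketch (in Python, purely as an illustration) of where figures like these come from. PCIe 3.0, 4.0, and 5.0 all use 128b/130b encoding, so the theoretical one-way ceiling per lane is just the raw transfer rate minus that small encoding overhead; the practical flash-device figures quoted above sit a little below these ceilings once protocol overhead and controller limits bite.

```python
# Back-of-the-envelope PCIe throughput sketch (illustrative only).
# PCIe 3.0/4.0/5.0 all use 128b/130b encoding, so usable bytes per
# second per lane is roughly raw_rate * (128/130) / 8.

ENCODING = 128 / 130  # 128b/130b line-encoding efficiency

def pcie_gb_per_s(raw_gt_per_s: float, lanes: int) -> float:
    """Theoretical one-way throughput in GB/s, before protocol overhead."""
    return raw_gt_per_s * ENCODING / 8 * lanes

for gen, rate in (("PCIe 3.0", 8.0), ("PCIe 4.0", 16.0), ("PCIe 5.0", 32.0)):
    print(f"{gen}: x4 ~ {pcie_gb_per_s(rate, 4):.1f} GB/s, "
          f"x16 ~ {pcie_gb_per_s(rate, 16):.1f} GB/s")
```

Double the x16 result for total bidirectional bandwidth, and a PCIe 5.0 x16 slot lands at roughly the 128GB/s quoted above.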

The big question is how these standards will be introduced, if at all. Opinion in the office is split. The first camp, rooting for PCIe 4.0, reckons that thanks to EPYC 2’s integration of PCIe 4.0, it wouldn’t make a ton of sense to develop and assemble two separate processor designs when you could produce one and disable some of the cores. And because PCIe 4.0 is backward compatible with previous generations, adding it to consumer platforms should be a fairly easy task.

The other camp, the skippers, reckons that the bandwidth provided by PCIe 4.0 simply isn’t enough, and because it takes both time and money to validate a spec like this (bear in mind that PCIe 5.0 has already been announced, and is due for launch in 2020), why bother, when we’ll just need to do the same again in 2021? It’s a tricky one, and only time will tell.

NVLINK INNOVATIONS

AMD might well be done with high-bandwidth multi-GPU interconnects, and given that the company isn’t currently inhabiting the high end of the graphics world, it’s easy enough to understand why it would consider them unnecessary. Nvidia, on the other hand… Well, that’s a whole different ball game.

Although the RTX series has come with many caveats when it comes to both price and support for as-yet unreleased and unproven game development features, one thing that is true is the ridiculous amount of data those cards can pump out at any one time. The RTX 2080 Ti, in particular, can push over 150fps at max settings in some titles at 1080p.

When running multiples of these cards, particularly in more extreme workstation environments, having a way of transferring data across the cards simultaneously is pivotal when trying to maximize performance. And that’s exactly where NVLink originated: the server environment. In short, NVLink is a proprietary standard developed by Nvidia, which debuted with the launch of its Pascal architecture, although the variant we know today was first seen with the Tesla V100 GPUs found inside the DGX-1 server. Each NVLink connection supports a transfer rate of up to 50GB/s (25GB/s each way simultaneously), well over 10 times what a single PCIe 3.0 lane can transfer, allowing for fast and seamless data throughput between the cards. The V100 cards supported up to six NVLink connections, for a maximum transfer rate of 300GB/s.

When it comes to mainstream cards, the RTX 2080 Ti supports two NVLink connections, for a total of 100GB/s, while the RTX 2080 supports only one, for 50GB/s. Unlike SLI, NVLink doesn’t act as a display interface, merging video outputs and routing them back out through the primary GPU. Instead, thanks to its super-low latency and high bandwidth, it allows multiple GPUs to pool their resources as a shared entity and communicate across the bridge, reducing traffic through the PCIe bus below and making the cards more efficient.
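To put those link counts into perspective, here’s a quick sketch (again in Python, for illustration only) of the aggregate bandwidth per card, using nothing but the per-link figure quoted above.

```python
# Aggregate NVLink bandwidth per card, from the per-link figure above.
LINK_GB_PER_S = 50  # GB/s per NVLink connection (25GB/s each way)

cards = {
    "Tesla V100 (DGX-1)": 6,  # NVLink connections per GPU
    "RTX 2080 Ti": 2,
    "RTX 2080": 1,
}

for name, links in cards.items():
    print(f"{name}: {links} link(s) = {links * LINK_GB_PER_S} GB/s aggregate")
```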

It’s difficult to say just how much of an impact this will have on the gaming scene right now, but Hothardware.com suggests that in Shadow of War, at 1440p, there’s as much as a doubling of performance when using NVLink versus traditional SLI, as far as scaling is concerned.

PCIe 3.0 may finally be on its way out. NVLink surpasses the limits of the PCIe bus.
