PC Pro

TECHNOLOGY OF THE MONTH

Remember when we used to dream of harnessing the processing power of graphics cards? Darien Graham-Smith explains why those days are back


In 2008, GPU computing was supposed to be the next big thing. Before that time, we had always been held back by the limitations of our CPUs, while the vast power of our graphics hardware sat idle. Now the era of the general-purpose GPU had arrived, and hardware that had previously been dedicated solely to animating Lara Croft and her adversaries could now be used to speed up desktop applications and OS processes across the board.

That was the idea, anyway, and there was some solid sense behind it. While a typical CPU of the time might have had two or four cores, popular graphics cards such as the Nvidia GeForce 9800 GTX were shipping with 128 silicon cores all running in parallel. This represented an awful lot of untapped horsepower.

But not all cores are created equal. The processing units on a graphics card are designed to handle a limited set of mathematical operations – specifically the ones used in rendering 3D scenes. That simplicity is what makes such large numbers of cores comparatively affordable.

It also means that you can’t just run any old program on a graphics card. The code would need to be assembled in a completely different way in order to run on GPU cores. That’s a complex undertaking – or it was until the big names in graphics stepped in to make GPU computing accessible to anybody.

It started with the arrival of CUDA in 2007, a framework created by Nvidia that allowed programmers to write code in familiar languages such as C++ and Python, and compile it to run on an Nvidia GPU. A year later, Microsoft unveiled DirectX 11, an update to the Windows graphics subsystem that included a new DirectCompute component, again letting programmers send workloads to the GPU without having to worry about the complexities of refactoring their code to suit the architecture.
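To give a flavour of what that looks like, here’s a minimal sketch of a CUDA program – the kernel and data are invented for illustration rather than taken from Nvidia’s own samples. A small “kernel” function marked __global__ runs on the GPU, while ordinary C++ host code copies the data across and launches it over thousands of threads:

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: each GPU thread scales one element of the array,
// so the same simple operation runs on thousands of cores at once.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;                      // a million floats
    float *host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // 256 threads per block; enough blocks to cover every element
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first element: %f\n", host[0]);     // prints 2.000000

    cudaFree(dev);
    delete[] host;
    return 0;
}

Compiled with Nvidia’s nvcc compiler, each thread handles a single element, and the hardware spreads those threads across however many cores the card happens to have.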

Apple, meanwhile, handed over its OpenCL project to the multi-party Khronos Group, providing a free-to-use, cross-platform GPU computing framework that quickly came to support a huge range of graphics hardware on Mac, Linux, Windows and Android systems.

So what happened?

GPU computing seemed like a fantastic idea – a way to get a huge performance boost from hardware that was already installed in many people’s computers. Marketing gurus told us that, when choosing our next computer, we would see the GPU as equally important to the CPU. But, in practice, GPU computing didn’t revolutionise our day-to-day computing experiences, and most of us soon forgot about the whole thing.

The reason is that, even with the benefit of CUDA and DirectCompute, GPUs are quite limited. CPUs make use of caching, pipelining, branch prediction and other sophisticated technological tricks to process complex code at incredible speeds. GPU cores, by contrast, are basically what’s known as stream processors, designed for straightforward serial processing. If you tried to run something like a multi-tabbed web browser on a GPU core, it would almost certainly be appallingly slow.

Conversely, the big strength of GPU cores is that there are so many of them. This allows for massively parallel computation: if you have a very large set of numbers to process, a GPU can be orders of magnitude faster than a CPU. Unfortunately, this simply isn’t something that every program can benefit from: when CUDA was initially launched, it specifically targeted Big Data analysis and video processing applications. We were invited to assume and hope that the technology would soon spread to other aspects of computing, but Nvidia wisely didn’t make any promises on that front.
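The shape of the code shows why. A job such as totalling an enormous array of numbers splits naturally across thousands of GPU threads; the sketch below is again purely illustrative, using a so-called grid-stride loop so that a single launch can cover a data set of any size:

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative grid-stride kernel: each thread walks the array in large
// strides, totalling its own slice, then folds the result into a shared sum.
__global__ void total(const float *data, float *result, int n)
{
    float local = 0.0f;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x)
        local += data[i];
    atomicAdd(result, local);   // combine the per-thread partial sums
}

int main()
{
    const int n = 1 << 24;                          // roughly 16 million numbers
    float *data, *result;
    cudaMallocManaged(&data, n * sizeof(float));    // unified memory, visible to CPU and GPU
    cudaMallocManaged(&result, sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;
    *result = 0.0f;

    total<<<256, 256>>>(data, result, n);           // 65,536 threads share the work
    cudaDeviceSynchronize();

    printf("total: %.0f\n", *result);               // expect 16777216

    cudaFree(data);
    cudaFree(result);
    return 0;
}

Every thread does the same regular, repetitive arithmetic on its own slice of the data – exactly the kind of work GPUs excel at, and precisely what a branching, unpredictable program such as a web browser doesn’t offer.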

Who needs GPU computing?

Today, contrary to the predictions of the early days of GPU computing, some of our recommended laptops and productivity desktops don’t have a dedicated GPU at all. That’s because, outside of games and specialist data-processing tasks, the most common role for GPU computing is video encoding and decoding – and for the past ten years, both AMD and Intel processors have included specialist silicon dedicated to these particular tasks as part of their integrated graphics units. We’ve been getting GPU computing, as it were, for free.

As you’d expect, these functions are comparatively lightweight: encoding performance isn’t up to Pixar standards. Still, decoding on all
