GPUS AREN’T JUST FOR GAMES
The advent of modern GPUs has radically altered the supercomputer landscape. Video cards used to focus primarily on spitting out pixels to your display, but once programmability entered the picture, it was only a matter of time before all that computational power was put to other uses.
CPUs remain important for general-purpose workloads, running the operating system, browsing the web, and even powering much of the logic behind the graphics in our games. Trying to make a computer where everything ran on the GPU would knock performance in certain tasks back to the proverbial stone age. But if you want to do lots of similar calculations in parallel, that’s precisely what a GPU does for graphics.
The 3D game worlds of our day can include millions of polygons. Turning all that geometry into a meaningful two-dimensional image on your monitor requires a lot of matrix math.
Each point on a polygon consists of X, Y, and Z coordinates, and changing the viewing angle and position of objects within the virtual world is done via linear algebra using matrix multiplication and addition.
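To make that concrete, here's a minimal sketch (in Python, with hypothetical helper names) of the kind of per-vertex math described above: multiplying a vertex's X, Y, Z coordinates by a rotation matrix, exactly the sort of operation a GPU performs for huge numbers of vertices in parallel.

```python
import math

def mat_vec(m, v):
    # Multiply a 3x3 matrix by a 3-component vector.
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def rotation_z(theta):
    # Rotation about the Z axis by theta radians.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

vertex = [1.0, 0.0, 0.0]  # a point on a polygon
rotated = mat_vec(rotation_z(math.pi / 2), vertex)
print([round(x, 6) for x in rotated])  # → [0.0, 1.0, 0.0]
```

Each vertex transform is independent of every other, which is why the workload parallelizes so well: a GPU can hand each of its thousands of cores a different vertex and run the same multiply-and-add sequence on all of them at once.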
As such, GPUs have gone from a few simple processing cores back in the late 1990s to our modern chips with thousands of cores, each capable of churning through a massive heap of math.
Not surprisingly, the high-performance computing world took notice. GPUs first started showing up in supercomputers around 2007, and it didn’t take long for them to gain traction. By 2010, the world’s fastest supercomputer, China’s Tianhe-1A, packed in 7,168 Nvidia Tesla M2050 GPUs alongside 14,336 Intel Xeon CPUs. Today, the ratio has shifted decidedly more in favor of GPUs, with the Summit supercomputer having three times as many GPUs as CPUs.