Maximum PC

The Current State of Integrated Graphics

THE WORLD OF INTEGRATED GRAPHICS has a long and storied history, most of which can be summed up as: they’re too slow and lacking in functionality. For years, the most “popular” graphics chips have been Intel’s, not because they were fast but because they come bundled with nearly every Intel CPU.

The good news is that Intel started putting more effort into improving graphics when it created HD Graphics in 2010. The latest iteration is its 600 series, in the new Kaby Lake CPUs, which is largely the same as the 500 series. A huge chunk of the processor is devoted to graphics: roughly one-third of the die. But even so, performance isn’t stellar.

I ran tests on 15 modern games, and even at low quality settings and 1280x720, fewer than half were playable. There is some good news, though: all 15 games rendered without any noticeable errors, and less demanding games such as Overwatch, League of Legends, Counter-Strike: Global Offensive, Dota 2, and many indie titles are definitely playable.

Perhaps more interesting is what Intel has been doing over the past seven generations of Core processors. Intel more than quadrupled its mainstream integrated graphics performance going from first-gen to third-gen Core, but since then, most of the desktop parts (Broadwell being the exception) haven’t seen much improvement. In raw computational power, the fastest GT2 variants of Intel’s HD Graphics have sat in the 400-450 GFLOPS range since Haswell’s HD 4600 in mid-2013.

AMD is only slightly better, with its top APUs, like the A10-7890K, sitting at 887 GFLOPS, while lower-tier parts like the A8-7670K (581 GFLOPS) and A6-7470K (410 GFLOPS) are well off the pace. More critically, all of these parts, including desktop Skylake and Kaby Lake at 440 GFLOPS, deliver less than half the performance of the GTX 1050 (1,733 GFLOPS) and RX 460 (2,150 GFLOPS) that represent entry-level dedicated GPUs. That’s a big part of why most gamers end up running a dedicated graphics card.

Other factors include sharing system memory bandwidth with the CPU. Even with two or three times as many GPU cores, without a matching increase in memory bandwidth, a lot of potential performance is lost. An HBM2 cache is a potential solution, and Intel has used eDRAM caches with its Iris products, but few desktop users are interested in improving integrated graphics when a quick upgrade to a dedicated graphics card can more than double performance.

Consoles are a different matter, where economies of scale come into play, and by targeting a closed platform, manufacturers can do more with less. The PS4’s custom AMD processor uses about two-thirds of the die space on graphics, with the relatively slow Jaguar CPU cores relegated to one-eighth of the die. The PS4 also uses GDDR5 on a 256-bit interface clocked at 5,500 MT/s, giving it over four times the bandwidth of a dual-channel DDR4-2667 configuration, and the PS4 Pro should double GPU performance.

I’m eager to see what AMD does with the upcoming Zen APUs. Will it include a high-speed memory cache, along with substantially more GPU cores? It could, and I’ve seen rumors of an APU with potentially 1,024 Vega GPU cores, twice the number found in AMD’s current top APUs. Combined with stacked memory (a single 2-4GB HBM2 stack, perhaps), AMD could end up surpassing the performance of its RX 460. Certainly, AMD has the expertise to pull this off, but I’m not convinced it will be enough to capture the interest of gamers, with price being a major factor. AMD might end up cannibalizing sales of its $100-200 graphics cards in order to sell more $100-200 APUs.

Jarred Walton has been a PC and gaming enthusiast for over 30 years.

[Image: AMD’s Kaveri APU uses half the die space on graphics functionality.]
