STACKED MEMORY CHIPS

HWM (Malaysia) - LEARN

HBM involves stacking memory chips vertically like a skyscraper. In the case of the Radeon R9 Fury X and Fury, four such memory chip towers are arranged in close proximity around the GPU die. Each tower consists of four 256MB dies stacked on top of a logic die, which amounts to 1GB per tower and a total of 4GB per card.
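To make that capacity arithmetic explicit, here is a minimal Python sketch using only the figures quoted above (four towers, each with four 256MB DRAM dies on a logic die); the variable names are illustrative rather than taken from any AMD specification.

```python
# Capacity arithmetic for the Fury X / Fury HBM layout described above.
# Figures come from the article; names are illustrative only.

DIE_CAPACITY_MB = 256    # capacity of one DRAM die in a stack
DIES_PER_STACK = 4       # four DRAM dies sit on top of one logic die
STACKS_PER_CARD = 4      # four HBM towers surround the GPU die

stack_capacity_mb = DIE_CAPACITY_MB * DIES_PER_STACK        # 1024 MB = 1GB per tower
card_capacity_gb = stack_capacity_mb * STACKS_PER_CARD // 1024  # 4GB per card

print(f"Per-tower capacity: {stack_capacity_mb} MB")
print(f"Total card capacity: {card_capacity_gb} GB")
```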

Both the HBM towers and the GPU sit atop an ultra-fast silicon-based interconnect called an interposer, which connects the memory to the GPU. They are all linked to each other and to the interposer via microscopic vertical connections called Through-Silicon Vias (TSVs) and structures called microbumps.

Finally, the interposer itself is positioned on top of the package substrate. This on-package integration of memory and GPUs actually isn't new – one of the solutions to increase the speed or bandwidth of a particular component has always been to integrate it onto the CPU or GPU die.

However, this has always proved costly, and attempts to integrate DRAM onto CPU dies have run into significant space constraints. With HBM, bandwidth is increased by bringing the memory closer to the die, space constraints are overcome by stacking the memory chips, and on-package (as opposed to on-die) integration keeps costs down.
