STACKED MEMORY CHIPS
HBM involves stacking memory chips vertically, like the floors of a skyscraper. In the case of the Radeon R9 Fury X and Fury, four such memory towers are arranged in close proximity around the GPU die. Each tower consists of four 256MB DRAM dies stacked on top of a logic die, which amounts to 1GB per tower and a total of 4GB per card.
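The capacity arithmetic above can be checked in a few lines; this is just the figures from the text (256MB dies, four dies per tower, four towers) expressed as a sketch:

```python
# Capacity arithmetic for the HBM configuration described above.
die_capacity_mb = 256   # per DRAM die
dies_per_tower = 4      # stacked atop a logic die
towers = 4              # arranged around the GPU die

tower_capacity_gb = die_capacity_mb * dies_per_tower / 1024
total_capacity_gb = tower_capacity_gb * towers
print(f"{tower_capacity_gb:.0f}GB per tower, {total_capacity_gb:.0f}GB per card")
# → 1GB per tower, 4GB per card
```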
Both the HBM towers and the GPU sit atop an ultra-fast silicon-based interconnect called an interposer, which connects the memory to the GPU. The stacked dies are linked to each other and to the interposer via microscopic vertical conductive channels called Through-Silicon Vias (TSVs) and tiny solder connections called microbumps.
Finally, the interposer itself is positioned on top of the package substrate. This on-package integration of memory and GPU actually isn't new – integrating a component onto the CPU or GPU die has long been one way to increase its speed or bandwidth.
However, this has always proved costly, and attempts to integrate DRAM onto CPU dies have run into significant space constraints. HBM increases bandwidth by moving the memory closer to the die, overcomes space constraints by stacking the memory chips, and keeps costs down through on-package (as opposed to on-die) integration.
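A rough sketch of why this proximity pays off in bandwidth terms. The figures below (a 1024-bit interface per stack, a 500MHz clock, and double-data-rate transfers) come from the first-generation HBM specification, not from the text above, so treat them as illustrative assumptions:

```python
# Bandwidth sketch using first-generation HBM figures (assumed, per the
# HBM1 spec: 1024-bit bus per stack, 500MHz clock, double data rate).
bus_width_bits = 1024
clock_mhz = 500
ddr_factor = 2   # data transferred on both clock edges
stacks = 4

per_stack_gbs = bus_width_bits * clock_mhz * ddr_factor / 8 / 1000  # GB/s
total_gbs = per_stack_gbs * stacks
print(f"{per_stack_gbs:.0f}GB/s per stack, {total_gbs:.0f}GB/s total")
# → 128GB/s per stack, 512GB/s total
```

The wide-but-slow approach is the key design choice: rather than pushing a narrow bus to extreme clock speeds as GDDR5 does, HBM exploits the short interposer traces to run a very wide bus at a modest clock.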