First Look: AMD’s High-Bandwidth Memory
A “revolutionary” solution to GDDR5 inefficiency?
GDDR5 WILL SOON stall GPU performance growth because, according to the red team, the memory technology is entering an inefficient region of the power-to-performance curve.
Historically, AMD would try to solve these power-to-performance issues by shrinking chips and integrating functions, but the company says that on-chip integration isn’t ideal for DRAM, as DRAM is neither size- nor cost-effective to build on a logic-optimized process.
You could theoretically scale GDDR5 to higher clock speeds for more bandwidth, but doing so would consume disproportionately more power.
AMD is attempting to solve these issues by introducing its interposer, which brings DRAM closer to the logic die. According to AMD, this closer proximity enables a much wider bus, which in turn improves bandwidth per watt. AMD says that bandwidth per watt is much more important than the sheer amount of RAM a graphics card has. And in case you were wondering, each stack here amounts to 1GB. So, if AMD’s hypothetical next-gen GPU were to have 4GB of high-bandwidth memory, there would be four stacks.
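To see why a wider bus matters, peak memory bandwidth is roughly bus width (in bytes) times the per-pin data rate. The sketch below illustrates this with assumed figures that are not from the article: a 1024-bit-wide HBM stack running at a modest 1 Gbps per pin, versus a typical 256-bit GDDR5 bus at 7 Gbps per pin.

```python
def peak_bandwidth_gb_s(bus_width_bits, pin_rate_gbps):
    """Peak bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * pin_rate_gbps

# Illustrative figures (assumptions, not from the article):
# one 1024-bit HBM stack at 1 Gbps per pin vs. a 256-bit GDDR5 bus at 7 Gbps.
hbm_per_stack = peak_bandwidth_gb_s(1024, 1.0)   # 128.0 GB/s per stack
hbm_four_stacks = 4 * hbm_per_stack              # 512.0 GB/s for a 4GB card
gddr5_bus = peak_bandwidth_gb_s(256, 7.0)        # 224.0 GB/s

print(hbm_per_stack, hbm_four_stacks, gddr5_bus)
```

The point of the comparison: even at a far lower per-pin clock, the sheer bus width of stacked memory can deliver more total bandwidth, and those slower, shorter signal paths are where the power savings come from.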
The benefit of pairing the interposer with this high-bandwidth memory approach is a much smaller surface-area footprint, along with far more bandwidth than GDDR5 at less than 50 percent of the power consumption.
While this will be applicable to discrete graphics cards, AMD believes it will also be able to leverage the technology to cover multiple verticals, including APUs, consumer applications, enterprise solutions, and more.
The company is calling HBM a “revolution in chip design” that will ultimately allow for up to three times the performance per watt of GDDR5, while occupying 94 percent less PCB surface area.