Samsung’s latest memory technology has hit staggering speeds of 9.8Gb/s per pin – equivalent to more than 1.2TB/s of bandwidth per stack – meaning it’s more than 50% faster than its predecessor.
Shinebolt, Samsung’s take on the HBM3E memory standard, is the latest in a series of high-performance memory chips the company has developed for the age of cloud computing and increased demand for resources.
Shinebolt succeeds Icebolt, Samsung’s HBM3 memory, which comes in capacities of up to 32GB and reaches speeds of up to 6.4Gb/s per pin. These chips were designed specifically to be paired with the best GPUs for AI processing and large language models (LLMs), with the firm ramping up production this year as the emerging industry gathers momentum.
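Those headline figures are straightforward to sanity-check: each HBM stack exposes a 1,024-bit interface (a standard HBM detail assumed here, not stated in the article), so the per-pin rate converts directly into aggregate bandwidth. A quick Python sketch:

```python
# Sanity check of the headline figures. Per-pin rates come from the
# article; the 1,024-bit stack interface is standard for HBM.
INTERFACE_BITS = 1024  # data pins per HBM stack

shinebolt_gbps = 9.8  # HBM3E (Shinebolt), Gb/s per pin
icebolt_gbps = 6.4    # HBM3 (Icebolt), Gb/s per pin

# Aggregate bandwidth per stack: pins * per-pin rate, bits -> bytes.
shinebolt_gbs = INTERFACE_BITS * shinebolt_gbps / 8  # ~1254 GB/s
icebolt_gbs = INTERFACE_BITS * icebolt_gbps / 8      # ~819 GB/s

print(f"Shinebolt: {shinebolt_gbs / 1000:.2f} TB/s per stack")  # 1.25
print(f"Icebolt:   {icebolt_gbs / 1000:.2f} TB/s per stack")    # 0.82
print(f"Uplift:    {shinebolt_gbps / icebolt_gbps - 1:.0%}")    # 53%
```

At 9.8Gb/s per pin, a stack tops 1.25TB/s, and the jump from 6.4Gb/s works out to roughly 53% – comfortably more than 50%.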
Powering the next generation of AI chips
HBM3E will inevitably find its way into components developed by the likes of Nvidia – with every suggestion it could feature in the GH200 Grace Hopper Superchip, in light of a recent deal struck between the two companies.
High-bandwidth memory (HBM) is much faster and more energy-efficient than conventional RAM because it uses 3D stacking technology, layering memory chips directly on top of one another and connecting them through an interface far wider than that of a standard memory module.
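That width is where the speed advantage comes from, since peak bandwidth is simply interface width multiplied by per-pin rate. As a rough comparison (the DDR5-6400 figures below are illustrative assumptions, not from the article), a single HBM3E stack delivers around 24 times the peak bandwidth of one DDR5 channel:

```python
# Why stacking wins: peak bandwidth = interface width * per-pin rate,
# and a 3D-stacked HBM package exposes a vastly wider interface than
# a conventional memory module. DDR5-6400 figures are assumptions
# for illustration, not from the article.
def peak_bandwidth_gbs(width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given interface width and pin rate."""
    return width_bits * pin_rate_gbps / 8

hbm3e_stack = peak_bandwidth_gbs(1024, 9.8)  # one HBM3E stack
ddr5_channel = peak_bandwidth_gbs(64, 6.4)   # one DDR5-6400 channel

print(f"HBM3E stack:  {hbm3e_stack:6.1f} GB/s")   # ~1254.4
print(f"DDR5 channel: {ddr5_channel:6.1f} GB/s")  # ~51.2
print(f"Ratio:        {hbm3e_stack / ddr5_channel:.0f}x")  # ~24x
```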
Samsung’s HBM3E stacks more layers than previous iterations through the use of non-conductive film (NCF) technology, which eliminates the gaps between the layers in the chips. This maximizes thermal conductivity, and the memory can ultimately hit much higher speeds and efficiency as a result.
The memory will power the next generation of AI applications, Samsung claims, speeding up AI training and inference in data centers and improving total cost of ownership (TCO).
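The inference claim, at least, follows from simple arithmetic: at small batch sizes, generating each token means streaming the model’s full set of weights out of memory, so decode speed is capped by bandwidth. A rough sketch of that ceiling (the model size, stack count, and per-stack figures below are illustrative assumptions, not from the article):

```python
# Why faster HBM speeds up LLM inference: at small batch sizes each
# generated token streams every model weight from memory, so decode
# speed is roughly bandwidth / model size.
def tokens_per_second(bandwidth_gbs: float, model_gb: float) -> float:
    """Bandwidth-bound upper limit on tokens generated per second."""
    return bandwidth_gbs / model_gb

MODEL_GB = 140  # hypothetical: a 70B-parameter model at 16 bits per weight
STACKS = 6      # hypothetical accelerator with six HBM stacks

for name, stack_gbs in [("HBM3 (Icebolt)", 819.2), ("HBM3E (Shinebolt)", 1254.4)]:
    total_bw = stack_gbs * STACKS
    print(f"{name}: ~{tokens_per_second(total_bw, MODEL_GB):.0f} tokens/s ceiling")
```

Under those assumptions, the move from HBM3 to HBM3E lifts the theoretical decode ceiling from around 35 to around 54 tokens per second – the same roughly 53% uplift as the raw pin speed.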
Even more exciting is the prospect that it’ll be included in Nvidia’s next-gen AI chip, the H200. The two companies struck an agreement in September under which Samsung would supply the chipmaker with HBM3 memory, according to the Korea Economic Daily, covering roughly 30% of Nvidia’s memory needs by 2024.
Should this partnership continue, there’s every possibility HBM3E components will become part of this deal once they enter mass production.