
SK hynix HBM4: 2.8 TB/s, 40% Less Power, 16-Hi Stack for Next-Gen AI

2025-12-10 | CORE

  HBM4 is SK hynix's sixth-generation High Bandwidth Memory and currently the most advanced AI memory solution in the industry, with mass production scheduled for the fourth quarter of 2025.

SK Hynix HBM4

  High Bandwidth Memory (HBM) is a high-performance, high-density memory technology that vertically integrates multiple DRAM dies through stacking, significantly improving data throughput compared with conventional DRAM architectures. The HBM family comprises six generations to date: HBM, HBM2, HBM2E, HBM3, HBM3E, and the latest iteration, HBM4.

SK hynix HBM

  Key Specifications

  Bandwidth: Exceeds 2.8 TB/s per stack (roughly double HBM3E) via a 2,048-bit I/O interface

  Speed: Over 10 Gbps per pin, surpassing the JEDEC baseline of 8 Gbps per pin

  Stack Height: 16-die stack (16-Hi) utilizing 1bnm DRAM technology

  Power Efficiency: Approximately 40% improvement over HBM3E

  Capacity: Up to 48 GB per stack (based on 16 × 24 Gb dies)
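The headline figures above can be cross-checked from the interface width, per-pin rate, and die density. A minimal sketch in Python, assuming a per-pin rate of about 11 Gbps (since the 2,048-bit interface at exactly 10 Gbps yields 2.56 TB/s, a rate of roughly 11 Gbps is needed to exceed 2.8 TB/s):

```python
IO_WIDTH_BITS = 2048   # HBM4 interface width per stack
PIN_RATE_GBPS = 11     # assumed per-pin data rate (Gb/s); JEDEC baseline is 8

def stack_bandwidth_tbs(io_bits: int, gbps_per_pin: float) -> float:
    """Aggregate per-stack bandwidth in TB/s (decimal: 1 TB/s = 1000 GB/s)."""
    return io_bits * gbps_per_pin / 8 / 1000

def stack_capacity_gb(dies: int, die_density_gbit: int) -> int:
    """Stack capacity in GB from die count and per-die density in Gbit."""
    return dies * die_density_gbit // 8

print(f"{stack_bandwidth_tbs(IO_WIDTH_BITS, PIN_RATE_GBPS):.2f} TB/s")  # 2.82 TB/s
print(f"{stack_capacity_gb(16, 24)} GB per stack")                      # 48 GB per stack
```

At the stated 10 Gbps floor the same arithmetic gives 2.56 TB/s, and 16 dies of 24 Gbit each give exactly the quoted 48 GB.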

SK hynix HBM4

  Enabling Technologies

  Advanced MR-MUF (Mass Reflow Molded Underfill): A proven packaging technology enabling reliable 16-Hi die stacking with enhanced warpage control and superior thermal dissipation.

  Logic-based Base Die: Fabricated using a 14 nm logic process at a leading foundry, integrating PHY, power and clock distribution networks, and test circuitry, resulting in more than 40% reduction in dynamic power consumption.

  1bnm DRAM (fifth-generation 10 nm-class process): Delivers the highest bit density and mature manufacturing yield.

  System Benefits for AI Workloads

  Enables up to 69% higher AI service throughput compared to accelerators equipped with HBM3E.

  Overcomes the memory bottleneck in training and real-time inference of trillion-parameter-scale models.

  Reduces data-center energy cost per petaFLOP by approximately 40%.
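To put the trillion-parameter claim in perspective, a back-of-envelope calculation shows how many 48 GB HBM4 stacks the weights alone would occupy. This is an illustrative sketch, assuming 1 byte per parameter (FP8) and ignoring KV cache, activations, and optimizer state:

```python
import math

PARAMS = 1_000_000_000_000   # 1 trillion parameters
BYTES_PER_PARAM = 1          # assumed FP8 precision
STACK_GB = 48                # HBM4 16-Hi stack capacity per the specs above

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
stacks = math.ceil(weights_gb / STACK_GB)
print(f"{weights_gb:.0f} GB of weights -> {stacks} stacks")  # 1000 GB -> 21 stacks
```

In other words, even weights-only storage for such a model spans a multi-accelerator deployment, which is why per-stack capacity and bandwidth both gate training and real-time inference at this scale.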

SK hynix HBM4 cuts power 40%

  Reliability and Production Readiness

  Compliant with the JEDEC HBM4 standard (JESD270-4); engineering samples have been validated and qualified at major GPU and ASIC partner sites.

  Mass production lines are already qualified at SK hynix’s Cheongju M15X fabrication facility, with customer shipments aligned with the anticipated ramp-up of AI accelerators in 2026.

   Target Applications

  Designed for next-generation AI accelerators, including GPUs, TPUs, custom ASICs, and high-performance computing CPUs that demand the highest bandwidth per watt and minimal board footprint.

To learn more about or purchase SK hynix products, contact CORE customer service or submit an RFQ.
