The next-generation HBM4 stacks exceed both the JEDEC specification and Nvidia's requirements, with per-pin speeds of up to 13 Gbps and bandwidth of 3.3 TB/s per stack, while delivering better energy efficiency, improved thermals, and capacities of up to 36 GB per 12-layer stack. It is also the most expensive memory Nvidia has ever ordered for its AI data center cards.
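The 3.3 TB/s figure follows directly from the per-pin speed and the stack's interface width. A quick sanity check, assuming the 2048-bit per-stack interface defined in the JEDEC HBM4 standard (the 13 Gbps per-pin rate is from the figures above):

```python
# Back-of-the-envelope check of the quoted HBM4 bandwidth figure.
PIN_SPEED_GBPS = 13    # data rate per pin, Gbit/s (quoted above)
INTERFACE_BITS = 2048  # bits per stack, assumed from the JEDEC HBM4 spec

# Total bandwidth per stack: pins * rate, converted from Gbit/s to GB/s.
bandwidth_gb_s = PIN_SPEED_GBPS * INTERFACE_BITS / 8

print(f"{bandwidth_gb_s / 1000:.2f} TB/s per stack")  # → 3.33 TB/s per stack
```

The result, 3,328 GB/s, matches the roughly 3.3 TB/s per stack cited above.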