Why would AI datacenters care about DDR5? Most AI servers use HBM3, not DDR5. If both are produced in the same fabs, though, it would make sense that fab capacity shifting toward HBM3 comes at the expense of DDR5.
High Bandwidth Memory (HBM) sits at the center of the AI boom. Accelerators from leading GPU vendors rely on stacked HBM3 and HBM3E modules, and meeting that demand requires re-tuning fabs and expanding advanced packaging lines. HBM and commodity DRAM are built on shared DRAM wafer capacity, so they compete for the same production lines. Reporting in Korea has highlighted how top suppliers are allocating more of that capacity to HBM, which effectively reduces output of general-purpose DRAM like DDR5 and DDR4.