For those who don't know:
Quote: DDR5-5600, Dual-Channel
2 (channels) * 64 bit * 5600 MT/s / 1000 / 8 = 89.6 GB/s theoretical, and in practice:
Quote: 84943 MB/s
AMD "Strix Halo" APU is also (up to) 128 GB RAM, but is quad-channel (entry level workstation memory bandwidth territory) (and has a better iGPU) (of course, also more expensive):
4 * 64-bit * 8000 / 1000 / 8 = 256 GB/s.
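The same back-of-the-envelope formula in a few lines of Python (just a sketch reproducing the numbers above; real-world throughput lands a bit below the theoretical peak, as the 84943 MB/s measurement shows):

    # Theoretical peak bandwidth: channels * bus width (bits) * transfers/s, converted to GB/s.
    def peak_bandwidth_gbs(channels: int, bus_width_bits: int, megatransfers_per_s: int) -> float:
        return channels * bus_width_bits * megatransfers_per_s / 8 / 1000  # bits -> bytes, MB/s -> GB/s

    print(peak_bandwidth_gbs(2, 64, 5600))  # 89.6  (dual-channel DDR5-5600)
    print(peak_bandwidth_gbs(4, 64, 8000))  # 256.0 (quad-channel LPDDR5X-8000, "Strix Halo")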
The higher the memory bandwidth, the faster AI / LLMs will run on your PC.
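As a rough rule of thumb (an assumption, not an exact model): token generation is usually memory-bandwidth-bound, so tokens per second is roughly the bandwidth divided by the bytes of weights that have to be read for every generated token:

    # Rough assumption: generation speed ~= memory bandwidth / bytes of active weights read per token.
    def rough_tokens_per_s(bandwidth_gb_s: float, active_weights_gb: float) -> float:
        return bandwidth_gb_s / active_weights_gb

    # Hypothetical MoE model with ~5 GB of active weights per token (illustrative number only):
    print(rough_tokens_per_s(89.6, 5.0))   # ~18 tokens/s on dual-channel DDR5-5600
    print(rough_tokens_per_s(256.0, 5.0))  # ~51 tokens/s on the quad-channel "Strix Halo" setup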
Ok, but why/who would need 128 GB RAM all of a sudden?
The latest hype is AI / running LLMs. Download e.g. LM Studio (a wrapper for llama.cpp) and try it out.
The difference from ChatGPT is that it is private (and, to be fair, not as good as the best that proprietary AI has to offer, but it is catching up quickly (and if the open-weight LLMs are always ~1 year behind, does it matter? The gap is actually closing, as per artificialanalysis.ai/?intelligence-tab=openWeights)).
You can also download llama.cpp directly; the LLMs themselves can be downloaded from huggingface.co.
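Both LM Studio and llama.cpp's llama-server can serve a loaded model over a local OpenAI-compatible HTTP API, so you can also script against it, for example (a minimal sketch; port 1234 is LM Studio's default, llama-server uses 8080, and the model name is just a placeholder, so adjust to your setup):

    import requests  # pip install requests

    # Ask the locally running, OpenAI-compatible server (LM Studio / llama-server) a question.
    resp = requests.post(
        "http://localhost:1234/v1/chat/completions",  # llama-server default: http://localhost:8080/...
        json={
            "model": "local-model",  # placeholder; the server answers with whatever model is loaded
            "messages": [{"role": "user", "content": "Explain memory bandwidth in one sentence."}],
        },
        timeout=600,
    )
    print(resp.json()["choices"][0]["message"]["content"])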
Currently, SOTA models that fit into 128 GB RAM (or that fit using a decent quant rather than the full FP16; a rough size estimate follows after the list) are e.g.:
huggingface.co/unsloth/GLM-4.5-Air-GGUF
huggingface.co/unsloth/gpt-oss-120b-GGUF (from the makers of ChatGPT)
huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF
huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF
Better, but won't fit:
huggingface.co/unsloth/Qwen3-235B-A22B-Instruct-2507-GGUF (though the 2-bit and 3-bit quants might)
huggingface.co/unsloth/GLM-4.5-GGUF
huggingface.co/unsloth/DeepSeek-V3.1-GGUF
and many more, plus thinking variants like huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507.
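The promised size estimate (a back-of-the-envelope sketch, assuming roughly bits-per-weight / 8 bytes per parameter and ignoring the extra RAM needed for context / KV cache):

    # Rough GGUF size: parameters * bits per weight / 8 (ignores KV cache and runtime overhead).
    def approx_size_gb(params_billion: float, bits_per_weight: float) -> float:
        return params_billion * bits_per_weight / 8

    print(approx_size_gb(120, 4.5))  # ~67 GB  -> a 120B model fits comfortably into 128 GB at ~4-5 bit
    print(approx_size_gb(235, 4.5))  # ~132 GB -> Qwen3-235B-A22B does not fit at ~4-5 bit
    print(approx_size_gb(235, 3.0))  # ~88 GB  -> but a 3-bit quant might, as noted above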
News and info about AI / LLMs: reddit.com/r/LocalLLaMA/top/
NOTEBOOKCHECK, maybe write an article explaining to your readers the ability to self-host AI LLMs, or why 128 GB is suddenly a thing, because normally 32 GB of RAM is enough for most people.