OK, but who would need 64 GB or 128 GB of RAM all of a sudden, and why? The latest hype is AI, i.e. running LLMs locally, which keeps everything private. Download e.g. LM Studio (a wrapper around llama.cpp) and try it out, or download llama.cpp directly (its web UI was recently redesigned); the LLMs themselves can be downloaded from huggingface.co. Open-weight LLMs are getting better and better, see "Artificial Analysis Intelligence Index by Open Weights vs Proprietary": artificialanalysis.ai/?intelligence-tab=openWeights.
Current SOTA models (the rule of thumb: if the file size fits into your GPU's VRAM and/or your system RAM, you can run the model; just try not to go below a Q4_K_M quant, because output quality degrades heavily below that, especially for LLMs under, say, 200B parameters):
- huggingface.co/openai/gpt-oss-20b (from the makers of ChatGPT) (downloads last month: 6,749,481 and you've never heard of it?) (fits fully on a 16 GB GPU according to OpenAI's own description and therefore runs very fast)
- huggingface.co/unsloth/gpt-oss-20b-GGUF (same model, but the GGUF format lets it run not only on the GPU; if you don't have enough VRAM, it can also be partially or fully offloaded to RAM, see the sketch after this list) ("guide : running gpt-oss with llama.cpp": github.com/ggml-org/llama.cpp/discussions/15396)
- huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF
- huggingface.co/unsloth/GLM-4.5-Air-GGUF (106 B)
- huggingface.co/unsloth/gpt-oss-120b-GGUF (also from the makers of ChatGPT (huggingface.co/openai/gpt-oss-120b))
- huggingface.co/unsloth/Qwen3-235B-A22B-Instruct-2507-GGUF
- huggingface.co/unsloth/GLM-4.5-GGUF (355 B)
and many more, including thinking/reasoning variants (some LLMs, like gpt-oss, have easily configurable reasoning effort: low, medium, high), like huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507, specialized ones like Qwen3-Coder-30B-A3B-Instruct-GGUF, and big ones like DeepSeek-V3.1-Terminus-GGUF (671 B).
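To give a feel for the sizes: a roughly 20B-parameter model at a Q4_K_M quant (around 4.5-5 bits per weight) comes out at roughly 12-13 GB on disk, which is why gpt-oss-20b fits into 16 GB of VRAM. And as a rough illustration of how such a GGUF is actually run, here is a minimal sketch using llama-cpp-python (the Python bindings for llama.cpp); the file name is just a placeholder for whichever quant you downloaded from one of the repos above, and the parameter values are assumptions you would tune to your own hardware:

    # Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
    # and a quantized GGUF file has already been downloaded from Hugging Face.
    from llama_cpp import Llama

    llm = Llama(
        model_path="gpt-oss-20b-Q4_K_M.gguf",  # placeholder name for the downloaded quant
        n_gpu_layers=-1,  # offload all layers to the GPU; lower this if you run out of VRAM
        n_ctx=8192,       # context window; larger values need more RAM/VRAM
    )

    reply = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Explain in two sentences what a GGUF quant is."}],
        max_tokens=200,
    )
    print(reply["choices"][0]["message"]["content"])

LM Studio and llama.cpp's llama-server do essentially the same thing behind a GUI or an OpenAI-compatible local API, so you don't have to touch Python at all if you don't want to.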
News and info about AI/LLMs can be found e.g. at reddit.com/r/LocalLLaMA/top and reddit.com/r/LocalLLM/top.
NOTEBOOKCHECK, maybe write an article explaining to your readers how self-hosting AI LLMs works, or why 64 GB, 128 GB or even more RAM (AMD, when is the 512-bit Strix Halo successor coming?) is suddenly a thing now, because normally 32 GB of RAM is currently enough for most people.