Quote from: I love misusing name field for topic header on Today at 11:15:56
1700 bucks for 32 GB of soldered RAM and no additional memory since there's no dedicated GPU/no VRAM either. No GPU is ok since it's only 1kg.
Too little RAM for good LLMs:
With AI LLM self-hosting being a thing now, 32 GB of RAM is kinda low (minus ~8 GB for the OS, etc.), and soldered RAM is a bummer. At least 48 GB would have been nice. 48 GB and 64 GB RAM options need to be available for the 2026 releases.
huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF: A decent LLM at a decent Q8_0 (32.5 GB) quant simply won't fit in 32 GB of RAM.
huggingface.co/unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF (yes, support in llama.cpp was merged not even 24h ago: github.com/ggml-org/llama.cpp/pull/16095): 64 GB of RAM could fit the minimum recommended Q4_K_M (48.4 GB) quant.
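The arithmetic behind the two examples above can be sketched as a quick back-of-the-envelope check in Python. The ~8 GB OS headroom is an assumption carried over from the post, and the check deliberately ignores KV-cache and runtime buffers (which need extra room on top of the model file), so it's an optimistic lower bound, not a real sizing tool:

```python
# Rough feasibility check: does a GGUF quant fit in a laptop's RAM?
# Quant sizes are the figures cited in the post; the 8 GB OS overhead
# is an assumed value, not a measurement.

OS_OVERHEAD_GB = 8.0  # assumed headroom for the OS, browser, etc.

def fits(total_ram_gb: float, quant_size_gb: float,
         os_overhead_gb: float = OS_OVERHEAD_GB) -> bool:
    """True if the model file plus OS overhead fits in total RAM.

    Ignores KV-cache and inference buffers, so a True here is
    necessary but not sufficient for comfortable inference.
    """
    return quant_size_gb + os_overhead_gb <= total_ram_gb

models = {
    "Qwen3-30B-A3B-Instruct-2507 Q8_0": 32.5,
    "Qwen3-Next-80B-A3B-Instruct Q4_K_M": 48.4,
}

for ram in (32, 48, 64):
    for name, size in models.items():
        verdict = "fits" if fits(ram, size) else "does not fit"
        print(f"{ram} GB RAM, {name} ({size} GB): {verdict}")
```

Even by this optimistic count, the 32.5 GB Q8_0 quant needs 48 GB of RAM, and the 48.4 GB Q4_K_M quant needs 64 GB.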
32 GB RAM for self-hosting of good LLMs is like 8 GB VRAM for gaming: Obsolete: youtube.com/watch?v=ric7yb1VaoA ("Gaming Laptops are in Trouble - VRAM Testing w/ @Hardwareunboxed")
So apparently you not only have a fetish for putting yourself in the name field as the "topic" you're talking about, but also a fetish for running LLMs on devices that are made for people who have no desire to do that.
As someone else already said in another thread where you complained about the same thing:
Quote from: Worgarthe on November 26, 2025, 18:31:29
There are other things in life too, more productive and fulfilling ones also. You don't buy a microwave if you need a conventional oven (and vice versa).