
Topic summary

Posted by Logoffon
 - Today at 17:34:46
Quote from: I love misusing name field for topic header on Today at 11:15:56
1700 bucks for 32 GB of soldered RAM and no additional memory, since there's no dedicated GPU and thus no VRAM either. No GPU is OK, since it's only 1 kg.

Too little RAM for good LLMs:
With AI LLM self-hosting being a thing now, 32 GB of RAM is kinda low (minus ~8 GB for the OS, etc.), and soldered RAM is a bummer. At least 48 GB would have been nice; 48 GB and 64 GB options need to be available for the 2026 releases.

huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF: a decent LLM at a decent Q8_0 (32.5 GB) quant simply won't fit.
huggingface.co/unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF (yes, support in llama.cpp was merged not even 24 h ago: github.com/ggml-org/llama.cpp/pull/16095): 64 GB of RAM could fit the minimum recommended Q4_K_M (48.4 GB) quant.

32 GB of RAM for self-hosting good LLMs is like 8 GB of VRAM for gaming: obsolete. youtube.com/watch?v=ric7yb1VaoA ("Gaming Laptops are in Trouble - VRAM Testing w/ @Hardwareunboxed")
So you apparently not only have a fetish for naming yourself after the "topic" you're talking about, but also a fetish for running LLMs on devices made for people who have no desire to do that.

As someone else already said in another thread where you complained about the same thing:
Quote from: Worgarthe on November 26, 2025, 18:31:29
There are other things in life too, more productive and fulfilling ones. You don't buy a microwave if you need a conventional oven (and vice versa).
Posted by little ram for good LLMs
 - Today at 11:15:56
1700 bucks for 32 GB of soldered RAM and no additional memory, since there's no dedicated GPU and thus no VRAM either. No GPU is OK, since it's only 1 kg.

Too little RAM for good LLMs:
With AI LLM self-hosting being a thing now, 32 GB of RAM is kinda low (minus ~8 GB for the OS, etc.), and soldered RAM is a bummer. At least 48 GB would have been nice; 48 GB and 64 GB options need to be available for the 2026 releases.

huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF: a decent LLM at a decent Q8_0 (32.5 GB) quant simply won't fit.
huggingface.co/unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF (yes, support in llama.cpp was merged not even 24 h ago: github.com/ggml-org/llama.cpp/pull/16095): 64 GB of RAM could fit the minimum recommended Q4_K_M (48.4 GB) quant.

32 GB of RAM for self-hosting good LLMs is like 8 GB of VRAM for gaming: obsolete. youtube.com/watch?v=ric7yb1VaoA ("Gaming Laptops are in Trouble - VRAM Testing w/ @Hardwareunboxed")
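The arithmetic behind this can be sketched as a quick feasibility check. The quant file sizes are from the Hugging Face pages cited above; the 8 GB OS overhead is the figure from this post, and the 2 GB of KV-cache headroom is an assumption for illustration, not a measured value:

```python
def fits(installed_gb: float, quant_gb: float,
         os_overhead_gb: float = 8.0, kv_cache_gb: float = 2.0) -> bool:
    """Rough check: does a GGUF quant fit fully in RAM?

    A model only 'fits' if its file size plus some headroom for the
    KV cache stays below installed RAM minus what the OS/apps use.
    """
    return quant_gb + kv_cache_gb <= installed_gb - os_overhead_gb

# Qwen3-30B-A3B at Q8_0 is ~32.5 GB -> hopeless on a 32 GB machine
print(fits(32, 32.5))   # False
# Qwen3-Next-80B-A3B at Q4_K_M is ~48.4 GB -> fits into 64 GB
print(fits(64, 48.4))   # True
```

Even on the 64 GB config the Q4_K_M case leaves only a few GB spare, which is why the post treats 64 GB as the floor rather than a comfortable option.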
Posted by test
 - Yesterday at 22:54:05
I just ran a Cinebench R15 loop on this laptop, and it starts at 2430 points like in the review, but drops to 1650 points. I wonder why the review has it sustaining 1900 points.
Posted by SmoothOperator
 - August 30, 2025, 03:59:41
Yeah, with 64 GB of RAM I would seriously consider this laptop, but they made it, as the sellers put it, for "most people", for whom 32 GB "should be **more** than enough".
I've never met those "most people"; they're like green aliens: everybody knows about "them", but no one has ever seen one.
Posted by Antonio123
 - August 29, 2025, 12:40:06
And does it have a slot for a 2nd SSD?
Posted by Antonio123
 - August 29, 2025, 10:30:01
Thank you for the fast and detailed review!
Do you think any of the problems you found will be fixed in subsequent driver updates?
Posted by Redaktion
 - August 28, 2025, 16:35:18
Honor updates its slim and light MagicBook Art 14 with Intel's Arrow Lake processors as well as a new OLED screen that is supposed to reach up to 1,600 nits. The keyboard was improved as well, and the magnetic webcam is still very practical.

https://www.notebookcheck.net/1-kg-Ultrabook-with-Arrow-Lake-and-excellent-input-devices-Honor-MagicBook-Art-14-2025-Review.1098985.0.html