2200 USD, plus taxes on top? For only 32 GB of RAM (unfortunately soldered, too), a slow iGPU, and no dGPU either.
It does not deserve the "AI" in its product name: only 32 GB of RAM (nothing special) and a slow iGPU. Running AI locally requires these things:
- Memory size to fit a decently capable LLM.
- Prompt processing: The larger the input, the faster the GPU you'd need, especially for agentic workflows.
Here, the iGPU scores 3614 points in 2560x1440 Time Spy Graphics. That is ok for a few sentences of input, if you want an instant-ish reply.
- Token generation: The speed of the output generation depends on memory speed (aka memory bandwidth).
The memory scores 99,980 MB/s, which is not too bad for a 128-bit bus (99% of all PCs/laptops are 128-bit), but nothing special either. A Strix Halo system has a 256-bit bus and is therefore twice as fast.
- (The number of CPU threads doesn't matter much for running AI (aka inferencing): 4 threads pretty much saturate a dual-channel PC.)
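The "token generation is bandwidth-bound" point can be sketched as a back-of-the-envelope: per generated token, every active weight has to be read from RAM once. A minimal sketch, assuming a MoE model with ~3B active parameters at roughly 4.5 bits/weight after Q4_K_M quantization (illustrative figures, not benchmarks of this machine):

```python
# Rough tokens/s estimate for a bandwidth-bound LLM: each token must
# stream all active weights from memory once.
def tokens_per_second(bandwidth_gb_s, active_params_b, bits_per_weight):
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

# ~100 GB/s as measured here (128-bit) vs roughly 2x on a 256-bit Strix Halo:
print(round(tokens_per_second(100, 3, 4.5)))  # 128-bit system: ~59 tok/s
print(round(tokens_per_second(200, 3, 4.5)))  # 256-bit system: ~119 tok/s
```

Doubling the bus width doubles the ceiling, which is why the 256-bit Strix Halo comparison matters.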
32 GB of RAM may not be enough for the new SOTA LLMs, again especially in agentic workflows, e.g. Qwen3.6-35B-A3B-UD-Q4_K_M:
Quote from reddit.com/r/LocalLLaMA/comments/1sq94qx/is_anyone_getting_real_coding_work_done_with..: "I've come to the conclusion that (1) 32768 is the biggest context I can get away with in an adequately smart model, and (2) it just ain't enough."
And it's not like Windows requires less RAM than macOS.
64 GB of RAM would have to be the bare minimum at this price (and with "AI" in the product name), but even 64 GB can't fit the very popular Gpt-Oss-120B.
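Quick footprint math, assuming Gpt-Oss-120B's roughly 117B total parameters at about 4.25 bits/weight (its native MXFP4 quantization); since it's a MoE, all weights must be resident even though only a few billion are active per token:

```python
# Approximate on-disk/in-RAM size of a model's weights.
def model_weight_gb(params_b, bits_per_weight):
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

print(round(model_weight_gb(117, 4.25)))  # ~62 GB for the weights alone
```

Add KV cache for a long context plus the OS and apps, and 64 GB clearly isn't enough.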
And why is the RAM running at 8533 MT/s and not 9600 MT/s? Don't get me wrong, 8533 MT/s is better than the usual 5600 MT/s and comes closer to AI's requirements (faster memory -> faster token generation, and since this has no dGPU, also faster prompt processing).
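The MT/s figure converts directly into theoretical peak bandwidth (transfer rate times bus width in bytes), which shows what 9600 MT/s would have bought:

```python
# Theoretical peak bandwidth = transfers per second * bus width in bytes.
def peak_bandwidth_gb_s(mt_s, bus_bits):
    return mt_s * 1e6 * (bus_bits / 8) / 1e9

print(peak_bandwidth_gb_s(8533, 128))  # ~136.5 GB/s theoretical here
print(peak_bandwidth_gb_s(9600, 128))  # ~153.6 GB/s at 9600 MT/s
print(peak_bandwidth_gb_s(5600, 128))  # ~89.6 GB/s for the usual laptop
```

The ~100 GB/s benchmark score above sits, as expected, somewhat below the ~136.5 GB/s theoretical peak.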
This doesn't replace a carryable laptop, but: for 2200 you could build a much more capable desktop PC. The RAM would be slower, because a desktop PC's RAM usually runs at 5600 to 6200 MT/s, but you'd get 96 to 128 GB of it (64 GB per stick is possible, so 2x64 GB works), plus a dedicated GPU for much faster prompt processing and also faster token generation, because parts of the LLM get offloaded to the GPU's much faster VRAM. And the desktop PC will be repairable, upgradable, and run quieter.
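Why partial offload to VRAM helps, under the same bandwidth-bound assumption as above: per token, each part of the model is streamed from its own memory, so the slow system-RAM share dominates less as more weights live in VRAM. The numbers below are illustrative, not benchmarks (~18 GB of weights, a GPU with ~1000 GB/s VRAM, desktop RAM at ~90 GB/s):

```python
# Effective token rate with weights split between GPU VRAM and system RAM.
def split_tokens_per_second(total_gb, gpu_gb, bw_gpu_gb_s, bw_cpu_gb_s):
    # Seconds per token: each portion is read from its own memory pool.
    t = gpu_gb / bw_gpu_gb_s + (total_gb - gpu_gb) / bw_cpu_gb_s
    return 1 / t

print(round(split_tokens_per_second(18, 0, 1000, 90)))   # all in RAM: ~5 tok/s
print(round(split_tokens_per_second(18, 16, 1000, 90)))  # 16 GB in VRAM: ~26 tok/s
```

Even with 2 GB left in system RAM, the GPU-heavy split is several times faster, and prompt processing gains even more from the dGPU's compute.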