HP EliteBoard G1a AI review: A PC hidden in plain sight

Started by Redaktion, Today at 00:20:01


Redaktion

The EliteBoard can potentially clean up your cable spaghetti and be a huge space saver in the office — we just wish it were a little bigger with a few more ports.

https://www.notebookcheck.net/HP-EliteBoard-G1a-AI-review-A-PC-hidden-in-plain-sight.1287730.0.html

May not be adequate for AI

Nothing in this device deserves the "AI" in its product name:
  • The "AI" comes from the "AMD Ryzen AI 5 PRO 340" CPU, but its AI performance is really just its iGPU performance: this Ryzen has no special inference hardware beyond a small NPU, so any AI throughput is bounded by how fast the RDNA 3.5 iGPU is [1], and that iGPU is slow.
  • The RAM speed is nothing special at 5600 MT/s (measured: 62,210 MB/s), not even 8000 MT/s, let alone the 9600 MT/s of current systems.
  • 32 GB of RAM may not be enough for the new SOTA LLMs, especially in agentic workflows, e.g. Qwen3.6-35B-A3B-UD-Q4_K_M:
    Quote from: reddit.com/r/LocalLLaMA/comments/1sq94qx/is_anyone_getting_real_coding_work_done_with.. "I've come to the conclusion that (1) 32768 is the biggest context I can get away with in an adequately smart model, and (2) it just ain't enough."
  • Look at the use cases the slow, 50 INT8 TOPS NPU in this Ryzen can actually handle: things like blurring your webcam background, and that's about it.
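The 32 GB concern can be sanity-checked with a rough estimate. All numbers below are illustrative assumptions, not measurements: ~4.5 effective bits per weight for a Q4_K_M-style quant, and ~0.4 MB of KV cache per token (this varies a lot by model):

```python
# Rough RAM estimate for running a local LLM: quantized weights + KV cache.
# The 4.5 bits/weight and 0.4 MB/token figures are assumptions for illustration.

def llm_memory_gb(params_b, bits_per_weight, ctx_len, kv_bytes_per_token):
    """Approximate memory needed in GB."""
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    kv_gb = ctx_len * kv_bytes_per_token / 1e9
    return weights_gb + kv_gb

# A ~35B-parameter model at ~4.5 bits/weight with a 32k context:
need = llm_memory_gb(35, 4.5, 32768, 0.4e6)
print(f"~{need:.1f} GB needed, before the OS and apps take their share of 32 GB")
```

Even before the OS and applications claim their share, a model of this class with a 32k context lands right around the machine's total 32 GB.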

For 1800 bucks you could get a whole laptop instead, e.g. an Apple MacBook Air (M5) with 32 GB RAM. Its RAM also runs at a much faster 9600 MT/s (= 153.6 GB/s), which could give you roughly 9600/5600 ≈ 70% faster AI token generation.
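The ~70% figure follows directly from the theoretical peak bandwidth of each memory configuration (assuming a 128-bit bus on both, i.e. dual-channel 2x64-bit, and bandwidth-bound token generation):

```python
# Theoretical peak memory bandwidth: transfers/s x bus width in bytes.
def peak_bandwidth_gb_s(mt_s, bus_bits=128):
    return mt_s * (bus_bits // 8) / 1000  # MT/s * 16 bytes -> GB/s

hp_board = peak_bandwidth_gb_s(5600)   # DDR5-5600, dual channel
m5_class = peak_bandwidth_gb_s(9600)   # 9600 MT/s system

print(hp_board, m5_class)              # 89.6 GB/s vs 153.6 GB/s
print(f"{m5_class / hp_board:.2f}x")   # ~1.71x, i.e. ~70% faster decode
```

Since decode (token generation) is usually memory-bandwidth bound, the tokens/s ratio tracks the bandwidth ratio fairly closely.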

The following suggestion doesn't match this device in size, but: for 1800 you could build a desktop PC that is much more capable. You'd get more RAM at the same 5600 MT/s (or slightly faster, say 6200 MT/s), plus a dedicated GPU for much faster prompt processing and faster token generation, because parts of the LLM can be offloaded to the GPU's much faster VRAM. On top of that, the desktop PC is repairable and upgradable, and the keyboard can be swapped too.
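The offloading benefit can be sketched with a simple model: per-token decode time is roughly the bytes read on each device divided by that device's bandwidth. All figures below are assumptions for illustration (an 18 GB quantized model, ~62 GB/s system RAM as measured here, ~448 GB/s for a midrange GDDR6 card):

```python
# Why partial GPU offload speeds up decode, assuming bandwidth-bound decode
# and that the CPU and GPU portions are read sequentially per token.

def decode_tps(model_gb, gpu_frac, gpu_bw_gb_s, cpu_bw_gb_s):
    """Approximate tokens/second with a fraction of the model in VRAM."""
    t_per_token = (model_gb * gpu_frac) / gpu_bw_gb_s \
                + (model_gb * (1 - gpu_frac)) / cpu_bw_gb_s
    return 1.0 / t_per_token

model = 18.0  # GB of quantized weights (assumption)
print(f"all in system RAM: {decode_tps(model, 0.0, 448, 62):.1f} tok/s")
print(f"half in VRAM:      {decode_tps(model, 0.5, 448, 62):.1f} tok/s")
```

Even offloading half the model to VRAM nearly doubles decode speed in this model, because the GPU portion is read ~7x faster.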

[1]
Quote: The Ryzen AI 7 350:

Quote from: amd.com/en/products/processors/laptop/ryzen/ai-300-series/amd-ryzen-ai-7-350.html
    Overall TOPS
        Up to 66 TOPS (I think it's 8-bit / INT8)
    NPU TOPS
        Up to 50 TOPS (same)

Quote from: nvidia.com/en-us/geforce/laptops/compare
    GeForce RTX 5050 Laptop GPU: 440 AI TOPS (4-bit; scammy NGREEDIA, so it's half that, 220, in 8-bit)
    GeForce RTX 4050 Laptop GPU: 194 AI TOPS (8-bit)
194/66 ≈ 3, so the Ryzen is about 3 times slower in 8-bit TOPS.

3dmark.com/search:
RTX 4050 (notebook): Average score: 8288
Ryzen AI 350's Radeon 860M iGPU: Average score: 2885

8288/2885 ≈ 3, which is the same factor of ~3.
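The two ratios quoted above can be checked directly; they land within a few percent of each other:

```python
# The two ~3x ratios from the post: 8-bit TOPS and 3DMark average scores.
tops_ratio = 194 / 66        # RTX 4050 Laptop GPU vs Ryzen AI 350 overall TOPS
timespy_ratio = 8288 / 2885  # 3DMark average scores, 4050 vs Radeon 860M iGPU

print(f"TOPS ratio:     {tops_ratio:.2f}x")
print(f"Time Spy ratio: {timespy_ratio:.2f}x")
```

Both come out at roughly 2.9x, which is why the AI TOPS figure tracks the iGPU benchmark so closely.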

-> So the Ryzen AI 350's "NPU" has to be understood mainly as its iGPU, really. An iGPU is still an ASIC, the most power-efficient option. Maybe "NPU" is just marketing, instead of saying it the way NVIDIA does ("AI TOPS", with no mention of an NPU).

Which tells us it has all along been about what I said in my previous comment ("it's all about memory size, memory bandwidth, and the GPU performance that usually results from them") ;)

AI requires these things:
  • Memory size: enough to fit a decently capable LLM.
  • Prompt processing: the larger the input, the faster the GPU you need, especially for agentic workflows.
    Here, the iGPU scores 1559 points in 2560x1440 Time Spy Graphics. Compare this to e.g. an RTX 5070 (3dmark.com/search): "Average score: 20330".
  • Token generation: the speed of output generation depends on memory speed (aka memory bandwidth).
    The measured memory speed is 62,210 MB/s, which is in line with any dual-channel (2x64-bit) 5600 MT/s system; and since 99% of all PCs/laptops are dual-channel 128-bit, this is nothing special.
  • (The number of CPU threads doesn't matter much for running AI (aka inferencing); about 4 threads already saturate a dual-channel PC.)
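For reference, the measured 62,210 MB/s sits where you'd expect relative to the theoretical peak of dual-channel DDR5-5600; real-world copy benchmarks typically reach around 70% of peak:

```python
# Measured vs theoretical bandwidth for dual-channel DDR5-5600.
theoretical = 5600 * 16 / 1000  # MT/s x 16 bytes (2x64-bit bus) = 89.6 GB/s
measured = 62.21                # GB/s, as reported in the review

print(f"theoretical peak: {theoretical} GB/s")
print(f"measured:         {measured} GB/s "
      f"({measured / theoretical:.0%} of peak)")
```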

May not be adequate for AI

The [1] quote is from notebookchat.com/index.php?topic=295286.msg735872#msg735872.
