
This is the most powerful 16-inch AMD Lenovo laptop: ThinkPad P16s Gen 4 review with Ryzen AI 9 HX

Started by Redaktion, Yesterday at 23:26:39


Redaktion

It is the triumphant return of AMD CPUs to the ThinkPad P series: With Strix Point, Lenovo brings the Lenovo ThinkPad P16s AMD back from the dead. The new P16s Gen 4 AMD features the AMD Ryzen AI 9 HX PRO 370, one of the most powerful Strix Point CPUs for laptops.

https://www.notebookcheck.net/This-is-the-most-powerful-16-inch-AMD-Lenovo-laptop-ThinkPad-P16s-Gen-4-review-with-Ryzen-AI-9-HX.1135080.0.html

Worgarthe

Quote: The Lenovo ThinkPad P16s Gen 4 AMD is the most powerful Lenovo laptop with a 16-inch screen
...after P1 G8 and P16 Gen 3.

moe69230

I think the author meant the most powerful P16s models.

I would have bought this laptop if it had a better screen than 1200p; for a 16-inch display, that resolution is quite low.
We'll have to wait for Gen 5 to see what Lenovo does with the powerful AMD version of this model.

Maybe in a few years we'll see people trying to replace the panel inside the full-screen assembly (if that is even possible).

AI / LLMs ?

Quote: Different from the older P16s Gen 2 AMD, the RAM is not soldered. Instead, the Lenovo ThinkPad P16s Gen 4 has two SO-DIMM slots for up to 96 GB of DDR5-5600 memory.
This is good, because 96 GB would allow fitting and running LLMs like Gpt-Oss-120B quite easily (natively ~4-5 bit per weight, i.e. roughly a Q4 quant), and actually even considerably larger models quantized down to 4-5 bit per weight (below 4 bit per weight, quality may degrade sharply). For example: a 180B-parameter LLM at full FP16 is 360 GB, so Q8 is ~180 GB and Q4 is ~90 GB, which may just fit. (Or use a smaller LLM with a higher bit-per-weight quant, like Q8.)
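As a quick sanity check of that arithmetic, here is a minimal Python sketch (assumptions: decimal gigabytes, weights only, no KV cache or runtime overhead, and the 180B model is just the hypothetical example from this post):

# Approximate size of just the weights of an LLM at a given quantization.
# parameters (in billions) * bits per weight / 8 = gigabytes (decimal)
def weight_size_gb(params_billion, bits_per_weight):
    return params_billion * bits_per_weight / 8

for label, bits in [("FP16", 16), ("Q8", 8), ("Q5", 5), ("Q4", 4)]:
    print(f"180B at {label}: {weight_size_gb(180, bits):.0f} GB")
# 180B at FP16: 360 GB
# 180B at Q8:   180 GB
# 180B at Q5:   112 GB (112.5, rounded)
# 180B at Q4:    90 GB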
With no dGPU, I'd like this laptop to stand out a bit more, though: for example by supporting up to 128 GB RAM and using the CAMM2 / LPCAMM2 RAM standard, which would let the RAM run at higher MT/s while still being upgradable, and give it higher memory bandwidth. An LLM's token-generation (tg) speed depends roughly linearly on the MT/s value (5600 MT/s to 8500 MT/s = ~50% faster token generation). RAM can also go bad, and if it were soldered, you couldn't replace or repair it.
But: having no dGPU means slower prompt processing (pp), and the larger the input, the more that matters; roughly by the same ratio that a dGPU beats this iGPU in raw throughput (think of the FPS gap), its prompt processing would be faster. With a dGPU, partially offloading an LLM to the GPU's VRAM (or fully, if the LLM is small enough) would also speed up token generation (tg).
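To make the bandwidth side of this concrete, a rough Python sketch (assumed numbers: 128-bit dual-channel bus, DDR5-5600 vs. LPDDR5X-8533 as the faster option; real-world throughput is lower, so treat these as upper bounds):

# Peak DRAM bandwidth: (bus width in bits / 8) bytes per transfer * MT/s -> GB/s
def bandwidth_gb_s(bus_bits, mt_s):
    return bus_bits / 8 * mt_s / 1000

bw_now    = bandwidth_gb_s(128, 5600)   # ~89.6 GB/s (DDR5-5600, dual channel)
bw_faster = bandwidth_gb_s(128, 8533)   # ~136.5 GB/s (e.g. LPDDR5X-8533)
print(bw_faster / bw_now)               # ~1.52 -> roughly the ~50% tg gain above

# tg is roughly bandwidth-bound: each generated token streams the (active)
# weights from RAM about once, so tokens/s <= bandwidth / model size.
print(bw_now / 90)                      # ~1 token/s upper bound for a 90 GB Q4 model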

Quote: Without a dedicated graphics chip, it simply can not compete.
Quote: bad GPU performance (for a workstation)
Despite what I said above, it depends on the user's use case. Not every workstation has to have every feature (100% AdobeRGB coverage, ECC RAM, a dedicated workstation GPU with ECC VRAM, etc.).

Since it's a workstation, maybe the question of whether ECC RAM is supported could be answered. (Technically, adding ECC support would be cheap, so its absence is artificial market segmentation at best.)

Quote: OLED not available with Ryzen 9
This is weird. If this laptop's IPS panel had Display P3 coverage, then I'd say OLED would only be needed for gaming (due to its fast pixel response times); but this machine has no dGPU, so skipping OLED is fine, OLED arguably consumes more power, and it would need to be 120 Hz to really make sense for gaming, which in turn would require an even faster dGPU to actually reach 120 FPS.
Static content, which probably occurs more often on a workstation, may still be an issue on OLED (burn-in), though I don't know for sure, since OLED, as you say, is available on the Ryzen 5 and Ryzen 7 configurations.

davidm

Skimmed the article for "halo" and "395": nothing. It's a bit of a joke. This can take up to 96 GB RAM, yet the 395+ has 128 GB of much faster RAM that generally makes it up to twice as fast for many operations and effectively gives it a mid-range GPU with massive, fast RAM. What is Lenovo thinking, and why is nbc not pointing this out? Disappointing.

slws

"the device does get rather loud, and it runs pretty hot. Additionally, the GPU performance is weak for a workstation"

Thank you for a great summary and for saving my time. Stopped reading here.

AI / LLMs ?

davidm, this is a non-AI-specific business workstation laptop: if it had the Strix Halo chip (a 256-bit-bus chip), the price would be (much?) higher. But I agree, this is maybe a missed opportunity, provided the price wouldn't end up much higher. Then again, this is not a "ThinkPad P16s Gen 4 AI" edition; if it were, and it still lacked the Strix Halo APU, then I'd complain too. (You are correct that Strix Halo would be over 2x the speed for LLMs: this laptop has a classic 128-bit memory bus, like over 90% of laptops out there, running at 5600 MT/s; if this were Strix Halo, it would be about 2.85 times faster (2*8000/5600), see the quick check below. But to reach 8000 MT/s, the RAM would have to be soldered unless CAMM2 / LPCAMM2 were used. CAMM2 / LPCAMM2 isn't exactly new, so hopefully soon.)
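Quick check of that 2.85x figure in Python (assumed peak numbers: this laptop's 128-bit bus at 5600 MT/s vs. Strix Halo's 256-bit bus at 8000 MT/s; real sustained bandwidth is lower on both):

# Peak memory bandwidth = (bus width in bits / 8) * MT/s / 1000  -> GB/s
p16s = 128 / 8 * 5600 / 1000   # ~89.6 GB/s (dual-channel DDR5-5600)
halo = 256 / 8 * 8000 / 1000   # ~256 GB/s  (Strix Halo, LPDDR5X-8000)
print(halo / p16s)             # ~2.86, i.e. the 2*8000/5600 ratio above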
If people complain so much that the GPU is weak, then maybe Strix Halo is a missed opportunity, but again, it depends on the price. Also, for anything other than LLM inference (where an AMD GPU works), e.g. for finetuning, an NVIDIA GPU would be needed or at least highly recommended (I think AMD has no real CUDA replacement so far).
