
Asus releases new 14-inch gaming laptop with 48 GB VRAM and 165 Hz display

Started by Redaktion, January 22, 2026, 13:42:21


Redaktion

Asus has started selling its first 14-inch gaming laptop backed by AMD's Strix Halo architecture. Available in Europe, China, and Japan, the new gaming laptop can assign up to 48 GB of VRAM to its Radeon 8060S iGPU thanks to its built-in 64 GB of RAM.

https://www.notebookcheck.net/Asus-releases-new-14-inch-gaming-laptop-with-48-GB-VRAM-and-165-Hz-display.1209900.0.html


Buggi


48 and 64 GB RAM nice

48 GB of RAM fits and runs LLMs like GLM-4.7-Flash at a solid q8 quant (32 GB), or Gpt-Oss-20B (14 GB), though that one would also fit within 32 GB of RAM.
Unfortunately, 64 GB of RAM won't fit Gpt-Oss-120B even at its smallest quant (62.6 GB), but 96 GB of RAM would fit it easily with a lot of context, or even full context (go to huggingface.co/spaces/oobabooga/accurate-gguf-vram-calculator, paste huggingface.co/unsloth/gpt-oss-120b-GGUF/blob/main/gpt-oss-120b-F16.gguf into the calculator and set the context to full).
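A rough fit check like the one the calculator does can be sketched in Python. This is a minimal sketch, not the calculator's actual logic: the model sizes are the quant sizes quoted above, and the flat 4 GB context budget is a made-up placeholder for the KV-cache, which in reality grows with context length.

```python
# Rough "does this model fit?" check, using the quant sizes quoted above.
# Real calculators (like the oobabooga GGUF VRAM calculator) also compute
# the KV-cache size, which grows with context length; here we just assume
# a flat context budget as an illustration.

def fits(model_gb: float, ram_gb: float, context_budget_gb: float = 4.0) -> bool:
    """True if model weights plus an assumed context budget fit in RAM."""
    return model_gb + context_budget_gb <= ram_gb

models = {
    "GLM-4.7-Flash q8": 32.0,
    "Gpt-Oss-20B": 14.0,
    "Gpt-Oss-120B (smallest quant)": 62.6,
}

for name, size in models.items():
    for ram in (48, 64, 96):
        print(f"{name} in {ram} GB: {'fits' if fits(size, ram) else 'does not fit'}")
```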

VPD

I mean, yeah. You can get the RTX 5050 version of this laptop for £1049. I don't think it's worth paying almost double for Strix Halo.

drive-by poster

So if gaming is your use case, I suspect you have to be an AMD fanboy to justify it.

If AI is your use case, it is a "shut up and take my money" offering. I just bought a Strix Halo 128 GB desktop, and I think I could live with 64 GB for a lot of what I am doing.

I would buy this in a heartbeat if I were unwilling to have a desktop.

Haunter

Quote from: 48 and 64 GB RAM nice on January 22, 2026, 19:39:11
48 GB of RAM fits and runs LLMs like GLM-4.7-Flash at a solid q8 quant (32 GB), or Gpt-Oss-20B (14 GB), though that one would also fit within 32 GB of RAM.
Unfortunately, 64 GB of RAM won't fit Gpt-Oss-120B even at its smallest quant (62.6 GB), but 96 GB of RAM would fit it easily with a lot of context, or even full context (go to huggingface.co/spaces/oobabooga/accurate-gguf-vram-calculator, paste huggingface.co/unsloth/gpt-oss-120b-GGUF/blob/main/gpt-oss-120b-F16.gguf into the calculator and set the context to full).

Well, you will run LLMs on the NPU, not the iGPU lmao. I bought a laptop with a Ryzen AI 350 and normal SO-DIMMs, upgraded to 64 GB, and I can now use the NPU with 32 GB. It's faster than the iGPU, but you must use models optimized for it. Right now I'm playing with converting Qwen3 Coder to ONNX, int8 or int4 (the NPU has int8 HW optimization).
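For illustration, the core idea behind int8 model conversion (what the NPU's int8 path accelerates) can be sketched in plain Python. This is not the actual ONNX export flow, just a symmetric per-tensor int8 round-trip with made-up example weights:

```python
# Not the real ONNX conversion pipeline, just an illustration of the
# symmetric int8 quantization that NPU int8 paths rely on: map floats to
# [-127, 127] with a per-tensor scale, then dequantize to see the error.

def quantize_int8(values):
    """Return (int8 values, scale) for a symmetric per-tensor quantization."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.8, -1.1, 0.05, 2.4, -0.33]  # made-up example weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, scale, max_err)
```

The rounding error per weight is bounded by half the scale, which is why int8 works well per-tensor only when the weight range isn't dominated by outliers.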

48 and 64 GB RAM nice

Quote: Well, you will run LLMs on the NPU, not the iGPU lmao.
It's the other way around. Big, popular, SOTA LLMs (which you can run using llama.cpp's WebUI, downloading the models straight from huggingface.co) are not run on the NPU at all (it's not even supported on Linux, for example), but on the GPU (or iGPU) and memory. An NPU indeed requires specialized LLMs (like the ones built into Windows 11), which makes an NPU almost useless so far (not saying it can't become a popular, power-efficient accelerator).

An NPU is never mentioned if you read the comments; it's all about memory size, memory bandwidth, and the GPU performance that usually results from them. Those are the key hardware requirements (and of course software support, like CUDA, mainly for training).

Sorry, but maybe you want to cope with having bought a more expensive and, in this sense, unnecessary Ryzen AI platform. Unless you can prove that an NPU is worth it using benchmarks (pp, tg, power efficiency, and LLM performance).

Faster at what: prompt processing or token generation? Do a benchmark from about 1k to 64k context (if it fits) of NPU vs. iGPU. Maybe an NPU can be added on top of the iGPU; still, I need to see where an NPU is actually useful, other than blurring the webcam background or faking where the eyes are looking, more power-efficiently (power efficiency matters a lot, but these use cases are rather niche so far).

48 and 64 GB RAM nice

The Ryzen AI 350:
Quote from: www.amd.com/en/products/processors/laptop/ryzen/ai-300-series/amd-ryzen-ai-7-350.html
Overall TOPS: Up to 66 TOPS (I think it's 8-bit / INT8)
NPU TOPS: Up to 50 TOPS (same)

Quote from: nvidia.com/en-us/geforce/laptops/compare
GeForce RTX 5050 Laptop GPU: 440 AI TOPS (4-bit, scammy NGREEDIA, so it's half that, 220, in 8-bit)

GeForce RTX 4050 Laptop GPU: 194 AI TOPS (8-bit)

194/66 = ~3, so the Ryzen is about 3 times slower.

3dmark.com/search:
4050 (notebook): average score 8288
Ryzen AI 350's 860M iGPU: average score 2885

8288/2885 = ~3, which is the same ~3 times.

-> Looks like the Ryzen AI 350's NPU has to be mainly understood as its iGPU, really. An iGPU is still an ASIC, the most power-efficient way. Maybe an NPU is just marketing, instead of saying it the way NVIDIA does ("AI TOPS", no mention of an NPU).

Which tells us it has been about what I said in my previous comment all along ("it's all about memory size, memory bandwidth and the usually, out of it, resulting GPU performance") ;)
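The two ratios quoted above, for reference (the numbers are exactly the ones in this post; nothing here is measured, it's just the quoted figures divided):

```python
# RTX 4050 laptop vs. Ryzen AI 350, once in INT8 TOPS and once in 3DMark
# average score. Both quotients come out near the same ~3x factor.
tops_4050, tops_ryzen_total = 194, 66
score_4050, score_860m = 8288, 2885

tops_ratio = tops_4050 / tops_ryzen_total      # ~2.94
score_ratio = score_4050 / score_860m          # ~2.87
print(round(tops_ratio, 2), round(score_ratio, 2))
```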

AnAttemptWasMade

@48 and 64 GB RAM nice:

Wouldn't the fact that Panther Lake is more efficient make it better suited for running LLMs locally? Sure, it's slower individually, but if someone bought a bunch of NUCs and linked them together, you would likely be able to run about twice as many Panther Lake systems as Strix Halo ones given a fixed power supply budget (e.g. 1200 W or 2400 W).

48 and 64 GB RAM nice

AnAttemptWasMade, if Panther Lake is, say, all in all, 30% more power efficient, then sure, it'd be 30%. But that's 30% less out of not a lot in the first place; you decide if it's worth it. What you want is a lot of RAM to fit the good/smart LLMs, and you also don't want the worst GPU performance: Strix Halo (256-bit) is basically 2x vs. Panther Lake (128-bit), and I personally couldn't care less if PL is 30% more power efficient if I had to choose between Strix Halo with 128 GB of RAM and Panther Lake with 64 GB of RAM.

Indeed, the next thing may be linking them together, but I don't know how hassle-free that is. Generally speaking, I like the idea, but not for small things like NUCs; in this case I would prefer one system. If linking 4 Panther Lake NUCs (4 * 64 GB RAM) is easy, the performance scales decently, and using it is transparent/hassle-free and equally cheap vs. 1 Strix Halo mini-PC, then one could consider it. But it's not going to be equal in price vs. just 1 Strix Halo system, so... you decide. The lowest common denominator here is size: if you _must_ have NUC-sized system(s), then sure, but if not, I'd build a desktop PC (a mainstream AM5 B850 mobo supports 4 * 64 GB RAM, plus a 4090 for much faster prompt processing). (And when can one really fit 4 NUCs or 2 Strix Halo mini-PCs into the room, but not a mid-sized ATX desktop PC?)

So, I personally would avoid linking Panther Lake NUCs, Strix Halo mini-PCs, or any such systems. If anything, I'd ask myself how to link fully loaded 4*64 GB RAM B850 desktop PCs together, whether that makes sense in the first place and is hassle-free in usage, or whether I should instead go bigger and get a server mobo. I think I would again avoid linking and go with a (used) server mobo. Granted, at some point one has to link servers together (because one server mobo is as big as it gets in a single system).

If you really want to link, check whether it's worth it first. A desktop PC is not going to be less power efficient, and in prompt processing all systems will be about equally power efficient. Panther Lake might be a full node more power efficient (~30%), but ask yourself whether the total Wh saved are going to be worth all the hassle and the higher price, as Panther Lake is a new thing, and new things often come with an unreasonable price to performance.
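To make the power-budget argument concrete, here's a toy calculation. All per-system draw numbers are made-up assumptions, not measurements; the point is only that a 30% efficiency edge buys roughly 1.4x as many systems under a fixed budget, not 2x:

```python
# Hypothetical illustration of the "fixed power budget" argument.
# Assumed per-system draw under load (made-up numbers):
budget_w = 1200
strix_w = 120                  # assumed Strix Halo mini-PC draw
panther_w = strix_w * 0.7      # "30% more power efficient" assumption

strix_count = budget_w // strix_w            # systems that fit the budget
panther_count = int(budget_w // panther_w)
print(strix_count, panther_count)            # 10 vs 14, a ~1.4x advantage
```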

HiAkuuSheeki

They'd better release the f'ing 64 GB version in the US. They screwed us last year by not releasing the 32 GB A14.
