Now with 48 GB VRAM: Asus updates 14-inch gaming laptop with new version

Started by Redaktion, January 14, 2026, 17:37:34


Redaktion

Asus has silently updated one of its 14-inch gaming laptops with a new version. Initially announced only last week, the AMD Strix Halo-powered laptop will now be capable of providing up to 48 GB VRAM to its Radeon 8060S iGPU, thanks to an optional 64 GB RAM variant.

https://www.notebookcheck.net/Now-with-48-GB-VRAM-Asus-updates-14-inch-gaming-laptop-with-new-version.1204358.0.html

Buggi


Alex Alderson


48, 64 GB RAM good

Quote: Specifically, Asus has opted for the Ryzen AI Max+ 392, which AMD presented during CES 2026 with 12 cores, 24 threads and a Radeon 8060S iGPU.
The Strix Halo APU is supposedly quite expensive, and I wonder if it could be sold cheaper by offering "only" 8 cores/16 threads. After all, even 6 cores are still enough for gaming, 8 offer barely any improvement outside of a few specific games, and running LLMs is not bottlenecked by cores (beyond 4 threads there is barely any improvement, and using too many threads even lowers the t/s number).
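The thread-saturation point can be illustrated with back-of-the-envelope arithmetic: token generation has a memory-bound floor (the active weights must be streamed from RAM for every token) that does not shrink with more threads, while the compute side does. A minimal sketch, where all figures are illustrative assumptions rather than benchmarks:

```python
# Toy model of decode speed: per-token time is the slower of the
# memory-bound floor (streaming the weights) and the compute time.
# All figures are illustrative assumptions, not benchmarks.

def tokens_per_second(threads,
                      active_bytes=8e9,       # assumed bytes streamed per token
                      bandwidth=2.56e11,      # assumed ~256 GB/s memory bandwidth
                      flops_per_token=1.6e10, # assumed compute per token
                      flops_per_thread=2e11): # assumed per-thread throughput
    memory_time = active_bytes / bandwidth    # fixed: does not shrink with threads
    compute_time = flops_per_token / (flops_per_thread * threads)
    return 1.0 / max(memory_time, compute_time)

for t in (1, 2, 4, 8, 16):
    print(f"{t:2d} threads: {tokens_per_second(t):.1f} t/s")
# Past ~4 threads the memory floor dominates, so t/s stops improving.
```

With these assumed numbers the model saturates at 4 threads; real saturation points vary with the hardware and quantization, but the shape of the curve is the same.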

Quote: Initially, Asus stated that it would be offering the TUF Gaming A14 FA401EA with up to 32 GB of RAM. Silently, the company has updated the laptop with a second memory configuration, which can already be seen on Geekbench. As the image below shows, Asus will now offer up to 64 GB of RAM and a 2 TB SSD.
Smart decision to offer 48 and 64 GB RAM options: LLM self-hosting is a thing now, as can be seen, for example, in how the reddit r/LocalLLaMA sub has gained a lot of subscribers since March 2023 (593K as of this writing).

48, 64 GB RAM good

Here is another example: AMD is advertising local LLM/AI again (the first time may have been with the Strix Halo introduction):
Quote from: techpowerup.com/review/amd-ai-bundle, Jan 21st, 2026: AMD is finally making local AI easy. With the new AI Bundle, you can run image generation and LLMs directly on your PC, no cloud, no subscriptions, no data leaving your system. We could even run a massive 120B parameter model on a small laptop, something impossible on any consumer GPU.
That is correct: one can run gpt-oss-120b with 64 GB RAM plus a dedicated GPU.
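The rough arithmetic behind that claim (parameter count and average bit width are approximations): gpt-oss-120b ships its MoE weights in MXFP4, so the whole model lands around 60-65 GB, which can be split between 64 GB of system RAM and the VRAM of a dedicated GPU.

```python
# Back-of-the-envelope memory budget for gpt-oss-120b.
# Parameter count and average bit width are approximations.
total_params = 117e9        # ~117B total parameters
bits_per_weight = 4.25      # MoE weights shipped in MXFP4 (~4.25 bits avg)

model_gb = total_params * bits_per_weight / 8 / 1e9
print(f"model weights: ~{model_gb:.0f} GB")

system_ram_gb = 64
vram_gb = 16                # hypothetical dedicated GPU
# Keeping expert weights in system RAM and offloading the rest to VRAM,
# the weights fit inside the combined budget (KV cache and OS overhead
# are why the 64 GB of RAM alone is tight).
print(model_gb < system_ram_gb + vram_gb)
```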

Ayymd

cdn.wccftech.com/wp-content/uploads/2026/01/Intel-shares-client-Mercury.jpg

It's kind of tragic, AMD's market share in notebooks. Arm is already equaling it, and it's probably mostly just Apple that did it, within the last 5 years (actually in 2, if you look at 2020 to 2022). If Nvidia actually launches the N1(X) in Q2 this year, I honestly don't see any point in Strix Halo anymore.

If you care about running LLMs locally, Nvidia has the better-supported software ecosystem. If you care about raw bandwidth/bus width and a superior memory controller, the M4 Max already beats Strix Halo there too.

Well..

Saw it.
Quote: If you care about running LLMs locally, Nvidia has the better-supported software ecosystem. If you care about raw bandwidth/bus width and a superior memory controller, the M4 Max already beats Strix Halo there too.
For inference there's no difference, at least when it comes to text-to-text LLM inference. For training, CUDA will be much more convenient and better supported (not that AMD wouldn't work, but there may be cases where only CUDA makes things possible).

Well, yes, Strix Halo is a 256-bit-wide APU (256 GB/s) and the M4 Max is 512-bit (410 GB/s and 546 GB/s), but the latter also only goes up to 128 GB RAM, so in this important sense it's not better. And, of course, an M4 Max device is at least twice the price (it even rhymes).
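Those bandwidth figures translate directly into a ceiling on decode speed, since decode is roughly memory-bound: tokens/s can't exceed bandwidth divided by the bytes read per token. A sketch, assuming a hypothetical ~8 GB active working set (not a measured figure):

```python
# Memory-bound ceiling on decode speed:
# tokens/s <= bandwidth / bytes streamed per token.
# active_gb is a hypothetical working-set size, not a measured figure.
def decode_ceiling(bandwidth_gb_s, active_gb=8.0):
    return bandwidth_gb_s / active_gb

for name, bw in [("Strix Halo (256-bit)", 256),
                 ("M4 Max (512-bit, top bin)", 546)]:
    print(f"{name}: <= {decode_ceiling(bw):.0f} tokens/s")
```

So the M4 Max's ceiling is a bit over 2x higher at the same model size; whether that matters depends on whether the model you want even fits in its 128 GB.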
