NotebookCHECK - Notebook Forum

English => Reviews => Topic started by: Redaktion on September 02, 2025, 12:02:31

Title: One of the best mini PCs of 2025 - AMD Ryzen AI Max+ 395 and Radeon 8060S reviewed with top performance in the GMKtec EVO-X2
Post by: Redaktion on September 02, 2025, 12:02:31
The GMKtec EVO-X2 combines high-end performance in the smallest of spaces: equipped with the AMD Ryzen AI Max+ 395, a whopping 64 GB of RAM and the Radeon RX 8060S, the mini PC appeals to gamers and creative professionals alike. In the test, we check how the mini PC performs in benchmarks, current games, and under maximum load - and whether it will be one of the top mini PCs in 2025.

https://www.notebookcheck.net/One-of-the-best-mini-PCs-of-2025-AMD-Ryzen-AI-Max-395-and-Radeon-8060S-reviewed-with-top-performance-in-the-GMKtec-EVO-X2.1099958.0.html
Title: Re: One of the best mini PCs of 2025 - AMD Ryzen AI Max+ 395 and Radeon 8060S reviewed with top perf
Post by: bjmarler on September 03, 2025, 04:45:08
Why is there a picture of a Beelink mini pc on the link to this article?
Title: Re: One of the best mini PCs of 2025 - AMD Ryzen AI Max+ 395 and Radeon 8060S reviewed with top perf
Post by: Think about this: on September 03, 2025, 14:53:36
LLMs
The "Strix Halo" APU is a 256-bit chip with a theoretical memory bandwidth of 256 GB/s (256-bit * 8000 MT/s / 1000 / 8), and an expected ~210 GB/s in practice — comparable to the memory bandwidth of an entry-level quad-channel (4 * 64-bit) workstation. A normal desktop PC is dual-channel at best. AMD specifically advertises "Strix Halo" for LLM inferencing. You can do the same with a normal ATX-sized desktop PC with dual-channel RAM; the differences are:
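The bandwidth arithmetic in the post can be sketched like this (a back-of-envelope calculation; the function name and the 128-bit dual-channel comparison figure are my own illustration, not from the post):

```python
# Theoretical peak memory bandwidth, using the formula from the post:
# bus width (bits) * transfer rate (MT/s) / 1000 (-> GT/s) / 8 (bits -> bytes).

def peak_bandwidth_gbs(bus_width_bits: int, mts: int) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits * mts / 1000 / 8

strix_halo = peak_bandwidth_gbs(256, 8000)    # 256-bit LPDDR5X-8000
dual_channel = peak_bandwidth_gbs(128, 8000)  # typical desktop: 2 x 64-bit channels

print(strix_halo, dual_channel)  # -> 256.0 128.0
```

Real-world numbers land below the theoretical peak (hence the ~210 GB/s expectation), but the 2x gap over a dual-channel desktop holds either way.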
       
Using the relatively expensive Strix Halo APU/chip but giving it only 64 GB of RAM is a waste of silicon, because that is simply not enough for many LLMs (btw: the memory bandwidth stays the same; they just use less dense RAM chips). Give it at least 96 GB of RAM.

Questions to ask yourself:
   
PS: For what it's worth: a whole notebook with fast 64 GB dual-channel LPDDR5X-7500 RAM, but with a slower iGPU than Strix Point (= slower prompt processing (pp) than on Strix Point), can be had for 1319 bucks: "Lenovo ThinkPad P16s G2 (AMD), Villi Black, Ryzen 7 PRO 7840U, 64GB RAM, 1TB SSD, .." (at least in this region; 1511 bucks with a better display). The 8840U APU is just a refresh of the 7840U, Strix Point's RAM speed is only slightly faster at 8000 MT/s, and Zen 5 is also only slightly faster than Zen 4.

The higher the memory bandwidth, the faster AI / LLMs will run on your PC.
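The reason is that token generation (decode) is memory-bound: every generated token has to stream the model's active weights from RAM once, so bandwidth divided by active-weight size gives a rough upper bound on tokens per second. A sketch with illustrative numbers (the function and the ~3B-active-parameter figure for the Qwen3-30B-A3B MoE model are my assumptions, not measured results):

```python
# Rough upper bound on decode speed for a memory-bound LLM:
# each token requires reading the active weights once, so
# tokens/s <= bandwidth / active_weight_bytes.

def max_tokens_per_s(bandwidth_gbs: float,
                     active_params_billion: float,
                     bytes_per_param: float) -> float:
    """Bandwidth-limited ceiling on generation speed, in tokens/s."""
    active_weight_gb = active_params_billion * bytes_per_param
    return bandwidth_gbs / active_weight_gb

# Qwen3-30B-A3B (MoE, ~3B active params) at q8 (~1 byte/param)
# on ~210 GB/s practical Strix Halo bandwidth:
print(max_tokens_per_s(210, 3.0, 1.0))  # -> 70.0 tokens/s upper bound
```

A dual-channel desktop at half the bandwidth would top out at roughly half that; real throughput is lower than the ceiling because of compute overhead and KV-cache reads.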

Ok, but why/who would need 64 GB or 128 GB RAM all of a sudden?
The latest hype is AI/running LLMs. Download e.g. LM Studio (wrapper for llama.cpp) and try it out.
The difference to ChatGPT is that it is private (and, to be fair, not as good as the best proprietary AI has to offer, but it's catching up quickly — and if the open-weight LLMs are always ~1 year behind, does it matter? The gap is actually closing, per the artificialanalysis.ai/?intelligence-tab=openWeights report).

You can also download llama.cpp directly and the LLMs can be downloaded from huggingface.co.

Current SOTA models that fit into 64 GB RAM at q8 quant:
huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF
huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF
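Whether a model "fits" follows from simple arithmetic: weights take roughly (parameter count * bits per weight / 8) bytes. A rule-of-thumb sketch (weights only — it ignores KV cache, context, and GGUF overhead; the ~30.5B and 235B parameter counts are taken from the model names):

```python
# Approximate in-RAM size of a quantized model's weights:
# params (billions) * bits per weight / 8 bits-per-byte = size in GB.

def quant_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Weights-only memory footprint of a quantized model, in GB."""
    return params_billion * bits_per_weight / 8

print(quant_size_gb(30.5, 8))  # Qwen3-30B-A3B at q8 -> 30.5 GB, fits in 64 GB
print(quant_size_gb(235, 8))   # Qwen3-235B at q8    -> 235.0 GB, does not fit
print(quant_size_gb(235, 3))   # Qwen3-235B at q3    -> ~88 GB, might fit in 128 GB
```

This is why the 235B model below only becomes a candidate for a 128 GB machine at 2-bit or 3-bit quants, and why quality-preserving quants (q8, q4) are the sweet spot for the smaller models.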

Current SOTA models that fit into 128 GB RAM (or fit using a decent quant, not the full FP16) are e.g.:
huggingface.co/unsloth/GLM-4.5-Air-GGUF
huggingface.co/unsloth/gpt-oss-120b-GGUF (from the makers of ChatGPT)

Better, but won't fit:
huggingface.co/unsloth/Qwen3-235B-A22B-Instruct-2507-GGUF (though the 2-bit and 3-bit quants might)
huggingface.co/unsloth/GLM-4.5-GGUF
huggingface.co/unsloth/DeepSeek-V3.1-GGUF

and many more, including "thinking" variants, like huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507.

News and info about AI / LLMs: reddit.com/r/LocalLLaMA/top/

NOTEBOOKCHECK, maybe write an article explaining to your readers the ability to self-host AI LLMs, or why 64 GB or even 128 GB of RAM are suddenly a thing — because normally, 32 GB of RAM is enough for most people.