NotebookCHECK - Notebook Forum

English => Reviews => Topic started by: Redaktion on July 26, 2025, 11:08:58

Title: Bosgame M4 Neo test - the affordable alternative to expensive mini PCs
Post by: Redaktion on July 26, 2025, 11:08:58
Small, powerful and flexible - with the AMD Ryzen 7 7840HS, Radeon 780M iGPU and OCuLink connection, the Bosgame M4 Neo packs a lot of performance into a mini format. But how does the compact PC perform in everyday use? We have tested the inexpensive powerhouse in detail.

https://www.notebookcheck.net/Bosgame-M4-Neo-test-the-affordable-alternative-to-expensive-mini-PCs.1068993.0.html
Title: Re: Bosgame M4 Neo test - the affordable alternative to expensive mini PCs
Post by: MD on September 28, 2025, 03:43:19
Is there ever a benefit to changing the blower fan that sits above the CPU? The high-frequency sound is definitely annoying, but I'm not sure if there is a better fan I could replace it with that doesn't make that noise.

Also, I'm unsure if changing the thermal paste would make the temps - and therefore the fan noise - drop.
Title: Re: Bosgame M4 Neo test - the affordable alternative to expensive mini PCs
Post by: It's dual-channel on September 28, 2025, 08:42:59
Quote: "Quad-Channel"
It's dual-channel (2*64-bit), as are all consumer PCs (if we define "consumer" = a 128-bit memory bus). We all wish it were quad-channel, but as of this writing quad-channel RAM is reserved for entry-level workstations, Apple M Pro chips and the new Strix Halo APU, which unfortunately uses the older RDNA 3.5 iGPU architecture instead of RDNA 4: booo.

If it were quad-channel, the measured "61143 MB/s" would roughly double; you can even calculate it:
128 bit * 5600 MT/s / 8 / 1000 = 89.6 GB/s (theoretical) * 0.7 = 62.72 GB/s (practical)
And now the same calculation with 4*64-bit, aka quad-channel:
256 bit * 5600 MT/s / 8 / 1000 = 179.2 GB/s (theoretical) * 0.7 = 125.44 GB/s (practical).
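The arithmetic above can be sketched in a few lines of Python. The ~0.7 efficiency factor is the poster's rule of thumb, not a measured value:

```python
# Memory bandwidth estimate: bus width (bits) * transfer rate (MT/s) / 8 bits-per-byte,
# scaled to GB/s, then multiplied by an assumed ~70% real-world efficiency.
def ddr_bandwidth_gbs(bus_width_bits: int, mts: int, efficiency: float = 0.7):
    theoretical = bus_width_bits * mts / 8 / 1000  # GB/s
    return theoretical, theoretical * efficiency

dual = ddr_bandwidth_gbs(128, 5600)   # 2 x 64-bit channels, DDR5-5600
quad = ddr_bandwidth_gbs(256, 5600)   # hypothetical 4 x 64-bit channels

print(f"dual-channel: {dual[0]:.1f} GB/s theoretical, {dual[1]:.2f} GB/s practical")
print(f"quad-channel: {quad[0]:.1f} GB/s theoretical, {quad[1]:.2f} GB/s practical")
```

The dual-channel result (62.72 GB/s) lines up with the ~61 GB/s the review measured.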

The higher the GB/s value, the faster your local AI / LLMs will run.
LLMs:
huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF
huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF
huggingface.co/openai/gpt-oss-20b (from the makers of ChatGPT) (6,749,481 downloads last month and you've never heard of it?) (they also have gpt-oss-120b)
huggingface.co/unsloth/gpt-oss-20b-GGUF (same model, but the GGUF format allows it to run not only on the GPU, but, if you don't have enough VRAM, to partially or fully offload to system RAM)
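Why bandwidth matters for the models above: LLM token generation is usually memory-bound, so a back-of-the-envelope estimate is tokens/s ≈ bandwidth / bytes read per token. A minimal sketch, assuming the bytes-per-token roughly equal the active parameter bytes (the per-parameter byte count for ~4-bit GGUF quantization is an assumed figure, not from the source):

```python
# Rough, memory-bound estimate of LLM decode speed.
# Assumption: each generated token reads roughly all active parameters once.
def est_tokens_per_sec(bandwidth_gbs: float, active_params_billions: float,
                       bytes_per_param: float = 0.55):  # ~4.5-bit quant, assumed
    gb_read_per_token = active_params_billions * bytes_per_param
    return bandwidth_gbs / gb_read_per_token

# MoE model like Qwen3-30B-A3B: ~3B parameters active per token.
dual_speed = est_tokens_per_sec(62.72, 3.0)    # this mini PC's dual-channel RAM
quad_speed = est_tokens_per_sec(125.44, 3.0)   # hypothetical quad-channel

print(f"dual-channel:  ~{dual_speed:.0f} tokens/s")
print(f"quad-channel:  ~{quad_speed:.0f} tokens/s")
```

This ignores compute limits and cache effects, but it shows why doubling memory bandwidth roughly doubles generation speed.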