If I had to buy a laptop or PC, I'd wait for what AMD is going to release with AMD Halo and Halo Mini.
AMD Halo and Halo Mini are coming to destroy NVIDIA and Intel in gaming laptops with up to 26 cores, 48-core GPUs, and 384-bit LPDDR6.
AMD will position its Medusa Halo APUs as a replacement for low-end graphics cards in gaming laptops: a bomb, with a 50% boost in CPU and up to a 20% boost in GPU performance.
Gorgon Point will be released before all of these, though, in January 2026.
You the bot never have to buy anything.
@Notebookcheck Could you double-check which version of Cyberpunk 2077 you're running? There was a recent patch, 2.31, from about a month ago, which fixed a bug: "Mac-specific Fixed an issue where changing graphics presets on Mac set Screen Space Reflections to a higher setting compared to the equivalent presets on PC." This had a dramatic effect on non-ray-traced performance, as reflections were being set one level higher, so Ultra graphics settings were actually running at "Psycho" levels of reflections. The numbers you report are similar to Ars Technica's, who also tested the native CP2077 port on older Macs, and their performance numbers also seemed to line up with those from before the bug fix. Might be wrong, but it would be good to check just in case.
Quote from: RobertJasiek on Yesterday at 10:56:08
You the bot never have to buy anything.
Maybe it's hallucinating?
I struggle to understand how AMD will "destroy" anything when they can't even supply Z2 Extreme chips to two major OEMs (Lenovo and Asus). Both their handhelds have been out of stock for two months in some regions.
And that chip is a much smaller and less expensive die. Forget about Strix Halo and any of its successors, which are far larger and more expensive.
Power efficiency
Quote
However, Apple trades this performance advantage with increased power consumption. Although the single-core efficiency remains roughly the same as the M4 generation (and still significantly better than the rest of the competition), the increased consumption also leads to slightly shorter battery runtimes.
Indeed, no power efficiency improvement on the CPU side is a big bummer, but the GPU power efficiency increased by 47% over the M4, which is very good.
Local AI / LLMs
The memory bandwidth has increased by 27.5% (1.275 = 153 GB/s / 120 GB/s), which gives roughly 27.5% faster LLM token generation. The GPU graphics performance has increased, depending on what you measure, by 30% to 56% (CP2077, no ray-tracing) up to 190% (2.9 times, CP2077 with a ray-tracing-heavy preset), which, from what I have seen so far, gives 45% faster LLM prompt processing. Both are solid gen-over-gen increases.
Other AI workloads will run much faster still: en.wikipedia.org/wiki/Apple_M5: "This architecture delivers over 4x the peak GPU compute performance for AI compared to M4, and over 6x peak GPU compute for AI performance compared to M1." Go to YouTube, there are already videos testing and confirming this. But it depends on whether the marketing is based on an FP16 vs FP8 vs FP4 comparison, like Nvidia's Blackwell one, where they simply compare FP8 vs FP4 and say that Blackwell is 2 times faster. Of course FP4 at half the precision is 2x faster than FP8; that's comparing apples to oranges, arguing in bad faith and deceiving your customers (NGREEDIA).
The 65% faster SSD read speed of the M5 (1.65 = 5.1 GB/s (M5) / 3.1 GB/s (M4)) will give roughly 65% faster LLM loading times.
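The two gen-over-gen ratios above can be sketched as a quick back-of-the-envelope calculation. This assumes, as the post does, that token generation is memory-bandwidth-bound and model loading is SSD-read-bound, so each speedup scales roughly linearly with the respective throughput figure:

```python
# Gen-over-gen speedup estimates for the M5 vs. M4, using the figures cited
# above. Assumption: LLM token generation scales with memory bandwidth and
# model loading scales with SSD sequential read speed (rough rules of thumb,
# not measurements).

M4_BANDWIDTH_GBS = 120.0   # M4 unified memory bandwidth (GB/s)
M5_BANDWIDTH_GBS = 153.0   # M5 unified memory bandwidth (GB/s)
M4_SSD_READ_GBS = 3.1      # M4 SSD sequential read (GB/s)
M5_SSD_READ_GBS = 5.1      # M5 SSD sequential read (GB/s)

def speedup_pct(new: float, old: float) -> float:
    """Percentage speedup of `new` over `old`."""
    return (new / old - 1.0) * 100.0

token_gen_gain = speedup_pct(M5_BANDWIDTH_GBS, M4_BANDWIDTH_GBS)
loading_gain = speedup_pct(M5_SSD_READ_GBS, M4_SSD_READ_GBS)

print(f"Token generation: ~{token_gen_gain:.1f}% faster")  # ~27.5%
print(f"Model loading:    ~{loading_gain:.1f}% faster")    # ~64.5%
```

Note that 5.1 / 3.1 works out to about 64.5%, so the "65% faster" figure is a slight round-up.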
"M5 supports up to 32 GB of memory capacity." This is the elephant in the room bummer. A M5 MacBook Air with only up to 32 GB of unified memory is the biggest bummer. With MoE LLMs being a thing since quite some time, which require more RAM in exchange for faster token generation (basically specifically designed for running in unified memory RAM-speed type of environments) vs dense LLM architectures (designed for GPU's VRAM-speeds), an increase in memory would have been very nice. With the M4, Apple increased the unified memory capacity to 32 GB, too bad there's no further memory density increase in M5 (memory chips with the required density do exist, so it's simply an artificial marketing segmentation limitation). So, if you have a 32 GB M4 MB Air already and would love to run bigger LLMs, there's unfortunately no reason to upgrade to the M5 MB Air in this reagard. (en.wikipedia.org/wiki/MacBook_Pro_(Apple_silicon))
How would one know that denser memory chips exist? Well, for one, AMD's Strix Halo has a 256-bit memory bus width and supports up to 128 GB of unified memory. The M1, M2, M3, M4 and M5 all have a 128-bit memory bus width, so, accordingly, they could support 64 GB of unified memory. (en.wikipedia.org/wiki/Apple_silicon#M-series_SoCs)
Especially with the M5's 27.5% higher memory bandwidth over the M4, an increase in unified memory capacity to at least 40 GB, or better at least 48 GB, would come without any major disadvantages (there wouldn't be any disadvantages at all, as more capacity always wins, provided nothing else, like the bandwidth, decreases, which it wouldn't).
So, Apple, 64 GB of unified memory in the M6 MacBook Air, right? (Or at least 48 GB, if you don't want to give us too much memory and don't want to cannibalize your MacBook Pro offerings.)
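The proportionality argument above can be sketched in a few lines. This takes Strix Halo's 256-bit / 128 GB pairing as the reference point, as the post does, and assumes maximum unified memory capacity scales linearly with bus width when memory chips of the same density are used:

```python
# Maximum unified memory capacity as a linear function of memory bus width,
# assuming same-density memory chips throughout. Reference point: AMD Strix
# Halo (256-bit bus, up to 128 GB), as cited above.

REFERENCE_BUS_BITS = 256      # AMD Strix Halo bus width
REFERENCE_CAPACITY_GB = 128   # max unified memory at that bus width

def max_capacity_gb(bus_bits: int) -> float:
    """Max unified memory (GB) at `bus_bits`, same chip density assumed."""
    return REFERENCE_CAPACITY_GB * bus_bits / REFERENCE_BUS_BITS

print(max_capacity_gb(128))  # Apple M1..M5 bus width: 64.0 GB
```

This is of course an upper bound on what the bus width allows, not a claim about what Apple will ship.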
Rest
Quote
We are curious to see what the M5 Pro and M5 Max SoCs will achieve
Well, we all know it already: it will be scaled up linearly, like the previous M* -> M* Pro -> M* Max -> M* Ultra generations. So M4 Pro to M5 Pro will roughly retain the mentioned percentage increases, which are very solid, especially the GPU gains and the 27.5% higher memory bandwidth, which translates directly into LLM token generation speed.
No Wi-Fi 7 for the MacBook Pro? (Wi-Fi 7 reduces latency, increases stability, etc.; it's not simply about faster download speeds anymore, as Wi-Fi 6E is pretty fast already.)
Quote
384-bit LPDDR6
From a local LLM perspective: this would be nice indeed, because even a 256-bit memory bus width and 128 GB of unified memory are not enough for the bigger and better MoE LLMs. A 384-bit memory bus width would mean a 50% increase in memory capacity, to 192 GB of unified memory (when using memory chips of the same density, obviously, and that density is fine for that bus width), and that is the absolute minimum where it starts to get interesting.
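Checking the arithmetic above: widening the bus from 256-bit to 384-bit with same-density memory chips scales the maximum capacity by 384/256 = 1.5, which is exactly the 50% jump from 128 GB to 192 GB:

```python
# 256-bit -> 384-bit bus width scaling, same memory chip density assumed.
current_capacity_gb = 128           # max unified memory at 256-bit (Strix Halo)
scale = 384 / 256                   # bus width ratio = 1.5, i.e. +50%
print(current_capacity_gb * scale)  # 192.0
```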
I think Gorgon Point will be a mere rename (aka refresh), like Hawk Point was to Phoenix Point.
Well, 32 GB of RAM maximum, yet Apple is trying to position this baseline chip as an AI contender - who are they kidding? Their marketing department is out of fresh ideas, apparently.
It's fast for general computing; it should be good enough.
Enough with the AI-everywhere joke already.