Nice article. Do you have any data points for Strix Halo at lower power? Based on the Digital Foundry video on the GPD Win 5, it scales well down to comparable power settings, likely with higher efficiency.
The Strix Halo GPUs are even faster (though not more efficient), but they operate at higher power limits. There are only a handful of corresponding devices on the market, and those are also quite expensive.
Baseless comment, as you did not compare the two at the same power levels. The fact that the 8060S @ 80W was more efficient than Panther Lake at its default power level tells me that the 8060S compared at 20/28/35/45W would be very efficient too.
Intel has just delivered +70% iGPU perf and efficiency out of thin air in a genuine thin and light form factor. Meanwhile, AMD fanboys are upset; the latest rumors say we'll be stuck with RDNA 3.5 until 2029! I'd be mad too!
Quote from: br83taylor on January 27, 2026, 03:25:57
Do you have any data points for Strix Halo at lower power?
ThePhawx does. It does scale, but only to a certain point. Halo does better at the higher power tiers, while at the lower end Panther Lake does better.
Quote from: Terror Byte on January 27, 2026, 03:45:00
The fact that the 8060S @ 80W was more efficient than Panther Lake
Who uses or cares about performance at 80W? NUC users?
Quote from: opckieran on January 27, 2026, 06:23:36
Meanwhile, AMD fanboys are upset; the latest rumors say we'll be stuck with RDNA 3.5 until 2029! I'd be mad too!
Currently going through the 5 stages of grief. Almost at the end now. I've just come to accept it.
From now on I will no longer comment on AMD gfx IP unless it's PlayStation or Exynos news. Not until after 2029, if their Radeon division hasn't been sold off to Sony by then.
Performance rating
Strix Halo's memory interface is 256 bits wide vs. Panther Lake's 128 bits, yet SH is only 65% faster.
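Bus width alone doesn't tell the whole story when the memory speeds differ, though. A quick back-of-envelope sketch, assuming LPDDR5X-8000 on Strix Halo and LPDDR5X-9600 on Panther Lake (the speeds mentioned further down this thread):

```python
# Peak-bandwidth comparison; memory speeds are assumptions from this thread,
# not verified specs: LPDDR5X-8000 (Strix Halo), LPDDR5X-9600 (Panther Lake).
def bandwidth_gbs(bus_bits: int, mt_s: int) -> float:
    """Peak bandwidth in GB/s = bus width in bytes * transfers per second."""
    return bus_bits / 8 * mt_s / 1000

sh = bandwidth_gbs(256, 8000)    # 256.0 GB/s
ptl = bandwidth_gbs(128, 9600)   # 153.6 GB/s
print(f"SH {sh} GB/s vs PTL {ptl} GB/s, ratio {sh / ptl:.2f}")  # ratio ~1.67
```

So in effective bandwidth the gap would be roughly 67%, not 100%, which lines up fairly well with the ~65% performance difference.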
Efficiency
When a power limit is set, does it apply to the whole SoC or only to the iGPU?
Not power limited (default power profile)
B390 vs 890M: B390 is 22% more power efficient (and is similar to the 130V, 140V, Strix Halo's 8050S / 8060S and a 4050 Laptop).
So, not much, except it's better than the 890M.
B390 vs 860M: B390 is 34% more power efficient (one full node/generation of improvement).
B390 vs 880M: B390 is 65% more power efficient (two full nodes/generations of improvement).
With different power limits
B390 (20W) vs 860M (15W TDP): B390 is 90% more energy efficient.
B390 (20W) vs 880M (15W TDP): B390 is 135% more energy efficient.
B390 vs 890M (15W TDP): B390 is 46% (35W), 61% (28W) and 75% (20W) more energy efficient.
So, the B390 wins against the 860M, 880M and 890M by anywhere from 22% (no power limit) to 135% (20W power limit).
Personally, I prefer a lightweight laptop and as such a low-power SoC of, say, 15W up to maybe 30W TDP. And this is where Panther Lake shines, which is very nice to see.
But if this iGPU comes only in expensive laptops, people will simply get a 4050 laptop. So, Intel, maybe reduce the number of cores or whatever to make a cheaper SoC with the same iGPU performance.
To test
I wonder how the B390 iGPU performs in path tracing / full ray tracing, e.g. Cyberpunk 2077's Overdrive preset. Can you test this (performance and power efficiency)?
The big question is how well and how stably games run; maybe you can test that too (you did test the iGPU of the 1st-gen Qualcomm Elite, and many games didn't run or were unstable).
Quote from: Terror Byte on January 27, 2026, 03:45:00
The fact that the 8060S @ 80W was more efficient than Panther Lake at its default power level tells me that the 8060S compared at 20/28/35/45W would be very efficient too.
With no limit the 8060S is only 5% more power efficient, so it's kinda within the margin of error, or insignificant. But AFAIK Panther Lake has one full node of advantage (TSMC N3 for the GPU part?). Looks like something is reducing Panther Lake's power efficiency, Intel's CPU cores or something being clocked too high; it would probably be the iGPU, because at low power Panther Lake's iGPU becomes really power efficient.
Dave2D has something similar to report: youtu.be/fDwt9AiItqU?t=142 ("Windows is Ruining New Laptops."), but the interesting thing he says is that Apple is taking laptop market share because of how Windows 11 is.
Quote from: opckieran on January 27, 2026, 06:23:36
AMD fanboys are upset
As an IntelFan I'm also upset, because the best Intel iGPUs (10-12 graphics cores) are only available with soldered memory. Likewise, AMD makes its Halo available only with soldered memory. Both use the same dirty trick: when a product would be too good and flexible for the customer, they make it immutable. Enough reason to be mad at them both.
IntelFan, soldered LPDDR5X memory reaches 8000 MT/s on Strix Halo and 9600 MT/s on Panther Lake. Upgradable DDR5 SODIMM memory reaches 5600 MT/s on AMD and may be slightly more on Intel. 9600/5600 = 1.71, i.e. 71% higher memory bandwidth and potentially 71% higher iGPU performance: they are the best iGPUs because they have the necessary bandwidth.
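In absolute numbers, as a sketch assuming a 128-bit bus for both configurations (same peak-bandwidth formula as above in this thread):

```python
# Peak bandwidth on a 128-bit bus (illustrative; GB/s = bits/8 * MT/s / 1000).
soldered = 128 / 8 * 9600 / 1000   # LPDDR5X-9600: 153.6 GB/s
sodimm = 128 / 8 * 5600 / 1000     # DDR5-5600:     89.6 GB/s
print(soldered, sodimm, soldered / sodimm)  # ratio ~1.71, i.e. ~71% more
```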
To solve the MT/s issue with changeable RAM, CAMM2/LPCAMM2/SOCAMM2 exist. What I would complain about, then, is that no LPCAMM2/SOCAMM2 laptops exist. (CAMM2 is for desktops; it would be nice if there were AM5 mobos with CAMM2, too.)
The only issues I see are how much soldered memory one gets and how likely it is that the memory goes bad, because if it does, a repair is going to be impossible.
For LLM self-hosting, editing, running virtual machines, etc., I personally wouldn't complain too much if there were a 48 GB RAM option.
64 GB RAM is going to be mostly relevant for LLMs.
96 GB RAM would allow hosting e.g. gpt-oss-120b or GLM-4.5-Air (as a quant) with almost full context¹.
¹ Go to huggingface.co/spaces/oobabooga/accurate-gguf-vram-calculator, paste huggingface.co/unsloth/gpt-oss-120b-GGUF/blob/main/gpt-oss-120b-F16.gguf into the calculator, set the context to full and see the (V)RAM consumption.
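If you'd rather estimate it by hand, the usual back-of-envelope is model file size plus KV cache. A minimal sketch; the file size and the layer/head counts below are illustrative placeholders, not the real gpt-oss-120b configuration:

```python
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                context_len: int, bytes_per_elem: int = 2) -> float:
    """Approximate KV-cache size in GB: 2 tensors (K and V) per layer."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem / 1e9

# Illustrative numbers only -- check the model card for the real values.
model_file_gb = 65.0  # placeholder GGUF file size on disk
cache = kv_cache_gb(n_layers=36, n_kv_heads=8, head_dim=64, context_len=131072)
print(f"~{model_file_gb + cache:.0f} GB RAM total, plus runtime overhead")  # ~75 GB
```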
Quote from: AMDfan on January 27, 2026, 09:40:24
Quote from: opckieran on January 27, 2026, 06:23:36
Meanwhile, AMD fanboys are upset; the latest rumors say we'll be stuck with RDNA 3.5 until 2029! I'd be mad too!
Currently going through the 5 stages of grief. Almost at the end now. I've just come to accept it.
From now on I will no longer comment on AMD gfx IP unless it's PlayStation or Exynos news. Not until after 2029, if their Radeon division hasn't been sold off to Sony by then.
At which point we all lose, thanks to the mismanagement AMD has long inflicted on Radeon.
Nice, but the price?
It's gonna be expensive if you want a decent laptop, because that's the nature of the market. All the good laptops, no matter what chip is inside, are expensive.
Most of the larger efficiency gains seem to be under lighter loads like browsing. Under gaming / heavier loads it's diminishing returns, probably like you said, 22% more efficient. I think someone did a gaming battery life test and it was 2 hrs 30 min, compared to Strix Point, which is closer to 2 hrs.
I think if you care about absolute efficiency, you should wait for Nvidia's N1, because traditionally Arm has always reigned supreme here. But then you might have to deal with compatibility issues. Wait and see whether there's been any major improvement there or not, I guess.
Quote from: CAMM2, LPCAMM on January 27, 2026, 14:46:13
To solve the MT/s issue with changeable RAM, CAMM2/LPCAMM2/SOCAMM2 exist. What I would complain about, then, is that no LPCAMM2/SOCAMM2 laptops exist.
Yes, but I believe they are all in it together (hardware vendors like Asus, ASRock, Dell, etc. and chip designers like Intel and AMD). They all know about it, but imo they deliberately don't use flexible/user-upgradable parts, because it's better for their own profit.
Better for their profit per item sold; worse once you consider the purchases avoided by people who require upgradeability.
IntelFan (and RobertJasiek), certainly, in the end you're probably not wrong. But soldered 32 GB RAM is a bit too much all-in-it-together/planned obsolescence, and it already makes the machine e-waste for those who want to run a ~30B LLM at an 8-bit quant (~30 GB). Or maybe even for some other things.
@opckieran 15 hours ago
"Intel has just delivered +70% iGPU..."
Yep, just a slight correction – TSMC, not Intel :)
Quote from: M2026 on January 27, 2026, 22:05:02
@opckieran 15 hours ago
"Intel has just delivered +70% iGPU..."
Yep, just a slight correction – TSMC, not Intel :)
What is the name of the TSMC iGPU involved? Or are you confusing who fabbed the iGPU chip with the actual product?
Why is there no full render configuration information on that Intel slide, in the form of Shaders:TMUs:ROPs? And how are folks to get the theoretical maximum GTexel and GPixel processing rates to compare against other iGPU designs that have their Shaders:TMUs:ROPs info published? Just listing the RT and matrix-math unit counts is insufficient, as there are still raster-only gaming titles out there that still get played, where AI upscaling or frame generation isn't used because it's not really needed for a 5W gaming title.
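For reference, here is how those headline rates are usually derived once the counts are published. A sketch with placeholder numbers; these are not the B390's actual specs:

```python
# Theoretical fill rates from a render configuration (placeholder values,
# not real Arc B390 specs).
tmus, rops, boost_ghz = 96, 48, 2.0

gtexel_s = tmus * boost_ghz  # texture fill rate: TMUs * clock (GTexel/s)
gpixel_s = rops * boost_ghz  # pixel fill rate:   ROPs * clock (GPixel/s)
print(f"{gtexel_s} GTexel/s, {gpixel_s} GPixel/s")  # 192.0 GTexel/s, 96.0 GPixel/s
```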
Quote from: je07681 on January 28, 2026, 19:46:51
Quote from: M2026 on January 27, 2026, 22:05:02
@opckieran 15 hours ago
"Intel has just delivered +70% iGPU..."
Yep, just a slight correction – TSMC, not Intel :)
What is the name of the TSMC iGPU involved? Or are you confusing who fabbed the iGPU chip with the actual product?
That's his whole shtick. He's a fanboy with a late '90s personal computing mindset who can't stand the fact that Intel delivered a good product, so he tries to deflect by crediting the manufacturing node over Intel's engineering here (which is pointless; delivered products are what count, not the node).
Funnier still, he has failed to consider the basic notion that by his ridiculous "logic", neither AMD nor Nvidia is responsible for their own processors either, since they are both fabbed by TSMC as well, which... undermines his entire argument. Whoops! 🤣 The sign of a true "genius", folks...
"The sign of a true "genius" folks..."
You proved your "brilliant" logic by not understanding my point about foreigners, because even an idiot would understand that :)))
"Intel delivered a good product"
Do you mean a 2-4% performance increase (CPU) compared to the previous generation? At this rate, only our future grandchildren will experience a doubling of performance.
It actually gets worse. Medusa Point is actually halving the RDNA3.5 CU count to make space for the increased CPU core count. Forget keeping performance the same; AMD is going the other way, backwards...
So I guess the Z3 Extreme will be slower than the Z2 Extreme, if it is based on Medusa Point?!? :3
Dear lord. AMD. Wtf are you doing?!?!?
People will literally jump ship to Arm as an alternative if you gonks don't wake the duck up out of your cyberpsychosis. It's no wonder Valve is working with FEX for x86 emulation on Arm.
Our only hope left for iGPU improvements now, besides Intel, is Arm as an alternative, with the Snapdragon X2 Elite and N1X. AMD is on full vacation for the next 2 years, possibly more. Lol, "Steam Deck 2 rumours", RIP to that being AMD anymore too.
Choom, AMD buys silicon wafers from TSMC, and where is the most money for any chip cut out of such a wafer right now? In the enterprise/AI boom. So AMD is doing what not just their shareholders want them to do. It would be stupid to miss out on that money just to make some normie gamers happy and go bankrupt afterwards.
So don't worry, at some point AMD will release a competitive 128-bit APU (once they've sold enough AI chips and made enough money to be able to afford TSMC's N3 node ;-)). That's how it works.
phoronix.com/review/intel-arc-b390-panther-lake-linux
On Linux the perf of the Arc B390 (an X7 358H) isn't great, but the MSI Prestige 14 Flip AI Plus laptop is also using only LPDDR5X-8533 memory, not 9600 MT/s. An X9 388H using 9600 MT/s memory (the device is an ASUS Zenbook Duo (UX8407AA)) scores some 16% to 17% higher than the 358H, which roughly correlates with the increased memory bandwidth of 9600 MT/s memory: 9600/8533 = 1.125.
Interestingly, all the Arc B390 implementations score only averagely in terms of FPS per laptop weight:
Quote from: FPS per weight on February 16, 2026, 11:20:12
In terms of FPS per weight, the Asus ZenBook Duo UX8407AA scores 7190 in 2560x1440 Time Spy Graphics and weighs 1.7 kg. 7190/1700 = 4.23.
Go to notebookcheck.net/Benchmarks-and-Test-Results.142793.0.html and compare for yourself (import into a spreadsheet and add a new column that divides the Time Spy Graphics score by each laptop's weight; see the small sketch at the end of this post).
Just for comparison:
- Highest score: Asus ROG Zephyrus G14 GA402XY scores 10.58 and a Razer Blade 16 2025 RTX 5090 scores 10.57. This is 2.50 times the Duo's score. Of course, 2 screens add weight (and price).
- Highest 4050 Laptop score: 4.77 (Lenovo Yoga Pro 9-14IRP G8)
- Lowest 4050 Laptop score: 2.85 (Dell XPS 14 2024 OLED)
- Highest 4060 Laptop score: 7.33 (Asus ROG Flow Z13 GZ301V)
- Lowest 4060 Laptop score: 3.78 (MSI Cyborg 15 A12VF)
- Average 4060 Laptop scores: 6.01 (Lenovo Legion Slim 5 14APH8) and 7.12 (Asus TUF Gaming A14 FA401WV-WB94)
If one considers a normal, non-dual-screen Arc B390 laptop, the FPS-per-weight score is much higher: notebookcheck.net/Asus-ExpertBook-Ultra-review-One-helluva-debut-for-Intel-Panther-Lake-X7.1209366.0.html:
7270/1111 = 6.54 (weight: "1.111 kg").
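The spreadsheet step from above as a minimal script; the scores and weights are the ones quoted in this post, and the metric is simply the Time Spy Graphics score divided by the weight in grams:

```python
# FPS-per-weight metric: Time Spy Graphics score / weight in grams.
laptops = [
    ("Asus ZenBook Duo UX8407AA", 7190, 1700),  # (name, score, weight in g)
    ("Asus ExpertBook Ultra",     7270, 1111),
]
for name, score, grams in laptops:
    print(f"{name}: {score / grams:.2f}")  # 4.23 and 6.54
```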
Quote from: Choom on February 01, 2026, 12:57:07
It actually gets worse. Medusa Point is actually halving the RDNA3.5 CU count to make space for the increased CPU core count. Forget keeping performance the same; AMD is going the other way, backwards...
Dear lord. AMD. Wtf are you doing?!?!?
I'm also speculating that both Intel and AMD don't want their mainstream iGPUs to be stronger than a GTX 1050 Ti...
Which could explain why the mainstream Panther Lake iGPU only gets 4 Xe cores (instead of 8 Xe as in Arrow Lake),
and the mainstream AMD Medusa Point iGPU will only get 8 CUs (instead of 16 CUs as in Strix Point).
And they probably want to sell everything above these mainstream specs as "premium chips":
- Which would be the Panther Lake 12-Xe iGPU, which already comes at a premium price only.
- And while AMD does have a "Halo" chip, it's too expensive to produce, so they probably decided to go a similar way as Intel. Which means a cheaper chip will become their premium chip, probably a Medusa Point with a 16-CU iGPU, sold at a similar premium price as Intel's Panther Lake.
Quote from: Prassel on February 16, 2026, 14:46:51
Quote from: Choom on February 01, 2026, 12:57:07
It actually gets worse. Medusa Point is actually halving the RDNA3.5 CU count to make space for the increased CPU core count. Forget keeping performance the same; AMD is going the other way, backwards...
Dear lord. AMD. Wtf are you doing?!?!?
I'm also speculating that both Intel and AMD don't want their mainstream iGPUs to be stronger than a GTX 1050 Ti...
Which could explain why the mainstream Panther Lake iGPU only gets 4 Xe cores (instead of 8 Xe as in Arrow Lake),
and the mainstream AMD Medusa Point iGPU will only get 8 CUs (instead of 16 CUs as in Strix Point).
And they probably want to sell everything above these mainstream specs as "premium chips":
- Which would be the Panther Lake 12-Xe iGPU, which already comes at a premium price only.
- And while AMD does have a "Halo" chip, it's too expensive to produce, so they probably decided to go a similar way as Intel. Which means a cheaper chip will become their premium chip, probably a Medusa Point with a 16-CU iGPU, sold at a similar premium price as Intel's Panther Lake.
See it in a positive way, as a reason not to buy them: Medusa Halo and Strix Halo are neither here nor there. Strix Halo is 4060 Laptop performance (= slow prompt processing; I'm comparing LLM performance, because this is how AMD is advertising Strix Halo). Furthermore, when it comes to gaming, both are only RDNA3.5 (Medusa Halo is rumored to be RDNA3.5, anyway) and as such don't have the ML hardware cores of RDNA4 for proper DLSS-type upscaling, and 4060 Laptop performance isn't that great either. Another "positive": Strix Halo's memory bandwidth is only 256 GB/s and it only supports up to 128 GB RAM, which is even worse for fitting proper, smart LLMs or their 4-bit (or larger) quants. At least 256 GB RAM is required for good LLMs.
But what's the point of looking at TimeSpy score to weight ratio? I don't get it.
Performance on Linux I can pretty much guarantee will improve in due time. So not worried about that.
Quote from: in LLM terms on February 16, 2026, 15:21:51
Medusa Halo is rumored to be RDNA3.5
Initially it was thought to be, but the common consensus now is that it'll most definitely be RDNA5.
Quote from: Small correction on February 16, 2026, 17:37:34
Quote from: in LLM terms on February 16, 2026, 15:21:51
Medusa Halo is rumored to be RDNA3.5
Initially it was thought to be, but the common consensus now is that it'll most definitely be RDNA5.
This sounds like the common consensus is very sure of it. RDNA5 even? Would be nice. I want Medusa Halo to have (up to) 256 GB RAM.
Qwen3.5-397B-A17B is out and its 4-bit Q4_K_M quant requires 241 GB RAM. The same goes for many recently released open-weight LLMs; all require 256 GB RAM to fit at least a usable quant.
Just another example: huggingface.co/unsloth/GLM-5-GGUF:
Quote from: unsloth.ai/docs/models/glm-5:
The full 744B parameter (40B active) model has a 200K context window and was pre-trained on 28.5T tokens. The full GLM-5 model requires 1.65TB of disk space, while the Unsloth Dynamic 2-bit GGUF reduces the size to 241GB (-85%), and dynamic 1-bit is 176GB (-89%): GLM-5-GGUF
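As a rough rule of thumb for why those quants land where they do (a sketch; the effective bits-per-weight figures are approximate averages I'm assuming, not exact spec values):

```python
# Approximate quant size: parameters * effective bits per weight / 8.
def quant_size_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * bits_per_weight / 8  # result in GB (1e9 bytes)

print(quant_size_gb(397, 4.85))  # Qwen3.5-397B at ~Q4_K_M bit width: ~241 GB
print(quant_size_gb(744, 2.6))   # GLM-5 at a dynamic ~2-bit average: ~242 GB
```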
YT/Just Josh tested the Asus ExpertBook Ultra Panther Lake laptop (youtu.be/jduWl1J_4lQ?t=637), but what is up with the 1% FPS lows? He even points it out. Looking at the results, all Panther Lake Arc B390 laptops are affected:
Cyberpunk 2077 (1920x1200, High settings):
ProArt PX13 (RTX 4060 | 95W): 91 FPS 1% lows
LOQ (RTX 5050 | 100W): 53 FPS 1% lows
ExpertBook Ultra (Intel Arc B390): 44 FPS 1% lows
Zenbook Duo (Intel Arc B390): 45 FPS 1% lows
XPS 14 (Intel Arc B390): 36 FPS 1% lows
The LOQ doesn't look too good either.
Now I wonder about Panther Lake's 10% lows, too.