
AMD Radeon 780M iGPU analysis - AMD's new RDNA-3 GPU takes on its competitors

Started by Redaktion, May 04, 2023, 19:18:30


Redaktion

Last year's Radeon 680M was much faster than Intel's iGPU and AMD are looking to further extend this lead with the release of their new Radeon 780M. We have received our first device equipped with the new 780M and have put it through a variety of benchmarks - can these high expectations be met?

https://www.notebookcheck.net/AMD-Radeon-780M-iGPU-analysis-AMD-s-new-RDNA-3-GPU-takes-on-its-competitors.714019.0.html

NikoB

3D performance is a complete disgrace. EPIC FAIL #2.

The author didn't bother to evaluate the video decoders against any formalized criteria at all.

The claim that Intel's 2022 iGPU loads the decoder at 44% is complete nonsense. My old i5-8300H sits at 30-35%, and it is already 5 years old.

In short, as I have written before: with such shameful memory bandwidth, neither Intel nor AMD stands a chance.

Until x86 platforms can push at least 400 GB/s like the M2 Max, there is nothing further to discuss about their iGPUs. A complete rout by Apple. Vivat ARM!
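For context, the bandwidth gap being complained about is easy to work out from theoretical peak figures. The sketch below assumes a typical dual-channel DDR5-5600 laptop configuration and Apple's advertised 400 GB/s for the M2 Max; actual sustained bandwidth is lower on both.

```python
def peak_bandwidth_gbs(channels, bus_width_bits, mt_per_s):
    """Theoretical peak bandwidth in GB/s: channels * bytes per transfer * transfers/s."""
    return channels * (bus_width_bits / 8) * mt_per_s / 1000

# Assumed configs, not measured values:
ddr5 = peak_bandwidth_gbs(2, 64, 5600)   # dual-channel DDR5-5600
m2_max = 400.0                           # Apple's quoted figure for the M2 Max

print(f"DDR5-5600 dual channel: {ddr5:.1f} GB/s")  # 89.6 GB/s
print(f"M2 Max advantage: {m2_max / ddr5:.1f}x")   # ~4.5x
```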

Neenyah

Genuine question: what's the point of testing high resolutions in games when we know they won't run well and when we know this is an iGPU? Why not add productivity benchmarks instead of gaming ones? Pretty much any game will run just fine at a low enough resolution; my UHD 620 pushes CS:GO at 135 fps without issues on an external 720p screen, yet I still use an eGPU (3060) to play properly at FHD 240 Hz (300+ fps) 🤷‍♂️. I can't speak for other games because CS:GO is all I play, but I use my laptop for a lot of actual work, and such benchmarks would be really helpful for iGPUs in general.

Quote from: NikoB on May 04, 2023, 20:33:09
The claim that Intel's 2022 iGPU loads the decoder at 44% is complete nonsense. My old i5-8300H sits at 30-35%, and it is already 5 years old.
Yeah, I don't get that either. I get 25-35% on my i7-8650U + UHD 620: imgur.com/IpP2K7C

arm dies here

Quote from: NikoB on May 04, 2023, 20:33:09
3D performance is a complete disgrace. EPIC FAIL #2.

The author didn't bother to evaluate the video decoders against any formalized criteria at all.

The claim that Intel's 2022 iGPU loads the decoder at 44% is complete nonsense. My old i5-8300H sits at 30-35%, and it is already 5 years old.

In short, as I have written before: with such shameful memory bandwidth, neither Intel nor AMD stands a chance.

Until x86 platforms can push at least 400 GB/s like the M2 Max, there is nothing further to discuss about their iGPUs. A complete rout by Apple. Vivat ARM!
ARM is absolute toast. Apple is the only company carrying that software-incompatible mess, and all it takes is one fumble (likely, considering how many software and hardware failures they've had over the past decade-plus) for AMD and Intel to overpower them with ease.

With Meteor Lake getting a tiled GPU based on Arc graphics and the AMD 8000 series looking to double iGPU performance, I don't see ARM being an attractive option for ANY PC manufacturer when x86 outperforms it in CPU, will outperform it in GPU, allows for discrete GPU options, offers REPAIRABILITY of parts, has the best software compatibility of any platform in existence, and delivers 90% of the battery life of the ARM options. It's a no-brainer why companies don't give a lick about ARM. Either go RISC-V or stay x86; those are the only logical options.

Neenyah

Quote from: arm dies here on May 04, 2023, 21:05:52
...I don't see ARM being an attractive option for ANY PC manufacturers when x86 outperforms in CPU, will outperform in GPU, allows for discrete GPU options, offers REPAIRABILITY of parts, the best software compatibility of any platform in existence, and 90% of the same battery life as ARM options. It's a no brainer why companies don't give a lick about ARM.
The funniest thing is that Apple is oriented toward content creators (because that's the only audience they can realistically serve, since everyone else keeps ignoring them, as you noted)...

Only to get completely destroyed there by x86 at the same price; the M2 and M2 Pro specifically get wiped by just an i5-13600K + RTX 3050: youtube.com/watch?v=_D0K3-uZMyY
And you also get better performance per $ with x86 😺 At 19:40 in the video: "For the same price point PC, interestingly, provides better performance for the value."

Then you step up to an i7 + RTX 3070, and even Apple's most expensive offering can't come anywhere near that level of performance, lol. Yet people won't give up their praise of Apple working miracles, blah blah...


ddssavfaX

AMD really screwed up with this generation...
Phoenix is underwhelming, RDNA 3 is poor, and Zen 4 desktop has its fair share of issues... I guess this is a result of the pandemic keeping engineers at home.

David Howell

What an absolute mess. Even the "actual" 2023 chips on the latest node are no better than the 2022 ones.

Buying used or refurbished is looking incredibly appealing right now.

LL

Any change in hardware video decoding/encoding?

Can it do HEVC 4:2:2 like Intel Quick Sync?

LLM

These results do not align with other users' benchmarks on YouTube using faster RAM.

I'm not convinced until the possible RAM bottleneck is analysed, e.g. a comparison with a notebook using slower DDR5-4800 RAM...
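The size of the suspected bottleneck can be put in rough numbers. The sketch below assumes dual-channel, 64-bit channels and uses DDR5-6400 as a stand-in for the "faster RAM" in those videos, since the exact speed isn't stated here.

```python
def bw_gbs(mt_per_s, channels=2, bus_bits=64):
    """Theoretical dual-channel DDR5 peak bandwidth in GB/s."""
    return channels * bus_bits / 8 * mt_per_s / 1000

slow = bw_gbs(4800)  # 76.8 GB/s
fast = bw_gbs(6400)  # 102.4 GB/s (assumed "faster RAM" speed)

# iGPUs share this bandwidth with the CPU, so a gap this size can
# plausibly show up in GPU benchmark scores.
print(f"{(fast / slow - 1) * 100:.0f}% more theoretical bandwidth")  # 33%
```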

F-Frinkz

The Radeon 760M was potentially the better choice for handheld gaming devices. With its 8 RDNA 3 CUs, it obviously has to use both shader arrays, since a single shader array only provides 6 RDNA 3 CUs; the 760M is therefore probably implemented as 4 CUs from each shader array, with 2 CUs disabled per array.

Each shader array (see the block diagram) houses 2 render back-ends (RBEs), with 8 ROPs per RBE, for a total of 4 RBEs across the 2 shader arrays and up to 32 ROPs. On the 760M, however, only 16 of those ROPs are enabled (one RBE per shader array). That leaves a very unbalanced ROPs-to-shaders ratio: the back-end will never really demand the full shader compute of the 8 enabled CUs.

If the 760M had all 32 ROPs enabled across all 4 RBEs, its pixel fill rate could have matched or even exceeded the 780M's, because with fewer CUs and shader cores enabled it would draw less power and sustain higher average clocks. As shipped, with only 16 ROPs, the 760M will never come close to taxing even 8 CUs' worth of shader compute. A 760M variant with 32 ROPs/4 RBEs enabled would have a shader-cores-to-RBEs ratio that could game every bit as well as the 780M, since not much shader compute is needed, especially on RDNA 3, where each shader core can dual-issue FP32 instructions. That leaves plentiful FP32 compute to feed 4 full RBEs and 32 ROPs even with only 8 CUs enabled.

The number of TMUs scales with the CU count, while the RBEs scale with the enabled shader arrays, so a 760M with fewer TMUs would have less texel throughput. But if average sustained clocks are higher thanks to fewer shaders and lower power draw, that texel deficit can be at least partially made up. And if 32 ROPs were enabled on the 760M and its average sustained clocks were higher, it would have a higher average sustained pixel fill rate than even the 780M in gaming workloads.
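The fill-rate argument above comes down to simple arithmetic: pixel fill rate is ROPs × clock. The clock figures below are illustrative assumptions (not measured values), chosen to show how a cut-down part with more ROPs and slightly higher sustained clocks could out-fill the 780M.

```python
def fill_rate_gpix(rops, clock_ghz):
    """Peak pixel fill rate in GPixel/s = ROPs * clock in GHz."""
    return rops * clock_ghz

# Illustrative sustained clocks only (assumptions, not measurements):
r780m = fill_rate_gpix(32, 2.7)         # 780M: 32 ROPs          -> 86.4 GPixel/s
r760m_actual = fill_rate_gpix(16, 2.8)  # 760M as shipped: 16 ROPs -> 44.8 GPixel/s
r760m_full = fill_rate_gpix(32, 2.8)    # hypothetical 32-ROP 760M -> 89.6 GPixel/s

print(r780m, r760m_actual, r760m_full)
```

Under these assumed clocks, the shipping 760M has roughly half the 780M's fill rate, while the hypothetical 32-ROP variant would edge past it.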


Tarnished

It looks like the laptop only has 512 MB of VRAM allocated to the 780M. If that is indeed the case and Windows cannot allocate more RAM beyond it, it can cause a dramatic loss in performance. I recommend setting the VRAM allocation to 4 GB and rerunning the tests to see if there is any difference.

tipoo

Not a very impressive gain, despite doubling TFLOPS on paper. If Meteor Lake's tiled GPU indeed doubles GPU performance over Raptor Lake, it looks like it will have beaten this handily.


I wish there were more native games targeting Apple's impressive iGPU efforts too. They should just pay developers for native Metal 3 ports.


BAllen2782

The UHD 630 in the 8th-, 9th- and 10th-gen i7s and up is pretty unique. Half-Life 2, Episode One and Episode Two run at 1080p60 maxed out, along with games like the Resident Evil 1 HD remake; RE0 also runs great at 1080p30 maxed out. This iGPU (UHD 630) in my 8700K saved me when my Vega 64 departed, and my Radeon VII sold for $2K+. I used the money to get a 6900 XT Toxic with a 3090 and a 55-inch LG OLED.

Yeshy

Can you test 780M at 25W and include TDP / power limits for efficiency comparisons?
