
Promising AMD Ryzen 7 7840HS GPU performance could be more than 30% better vs last-gen Radeon 680M

Started by Redaktion, March 03, 2023, 12:43:00


Redaktion

The 8-core Ryzen 7 7840HS is a part of the trio of Phoenix-HS APUs that AMD revealed in January. Moore's Law Is Dead has now leaked the 3DMark Time Spy performance of the chip and the results look very encouraging if accurate.

https://www.notebookcheck.net/Promising-AMD-Ryzen-7-7840HS-GPU-performance-could-be-more-than-30-better-vs-last-gen-Radeon-680M.698724.0.html


Amendoza


sorin

Some people are still comparing the 780M to the GTX1650, but the sad truth is it's a little bit faster than a desktop GTX1050ti. RDNA3 is a sidegrade, not an upgrade.

david salsero

What everyone expects is AMD Zen 4 Phoenix: DDR5 + RDNA 3 + USB 4.0 + HDMI 2.1 + artificial intelligence to exploit ChatGPT to the fullest, with AI from the XDNA architecture developed by Xilinx, and everything at 4 nm versus Intel's 10 nm.
Intel has fallen asleep, or rather hibernated. Intel has spent four long years with its unevolved integrated Xe graphics, which you can only play Minesweeper with, while AMD has gone from Zen 2 with Vega 8 to Zen 3+ with RDNA 2 and now to Zen 4 with RDNA 3 plus AI. I just read that the graphics of the 13th-gen Core disappoint and do not surpass AMD's Vega 8.

AMD Phoenix CPUs will surpass the Apple M2 in AI performance and efficiency, and that is what laptop buyers expect from Zen 4 Phoenix. If you want to know more, simply type "AMD Zen 4 Phoenix" into your search engine.

88&88

Quote from: sorin on March 03, 2023, 16:45:21
Some people are still comparing the 780M to the GTX 1650, but the sad truth is it's a little bit faster than a desktop GTX 1050 Ti. RDNA3 is a sidegrade, not an upgrade.

Fortunately, last summer there were leaks of a phantom Phoenix with RTX 3060-level performance at 60 W.
Nowadays, if it's on the same level as a GTX 1050 Ti / 1060 3 GB you're lucky, or rather you'll only reach that performance by overclocking it.
When they add quad-channel memory and at least 16 CUs with RDNA 4, you'll get GTX 980 Ti-level performance, but that is still far from the coming days when ARM + RISC-V will dominate the market.

First I'll wait for ETA Prime's and other YouTubers' tests before buying a mini PC with Phoenix. If it's good, maybe I'll take it; otherwise I'll wait for an ARM Samsung or MediaTek processor for desktops. The Exynos 2800 could be a good choice if it's cheap, or inside an elegant AIO like the M8 monitor.

heffeque

Quote from: sorin on March 03, 2023, 16:45:21
Some people are still comparing the 780M to the GTX 1650, but the sad truth is it's a little bit faster than a desktop GTX 1050 Ti. RDNA3 is a sidegrade, not an upgrade.
We were all expecting RTX 4080 Ti performance in a 10 W package. AMD disappoints.

Anonymousgg

It falls short of the +50-54% performance-per-watt claims for RDNA3 in general, but it's a decent improvement on top of the already good Radeon 680M, although it might depend heavily on the memory speed used, judging by some of the worse benchmark leaks.

If it's enough to push games that were getting around 45-50 FPS in 1080p up to 60 FPS, then that's a good result. That's the golden line where many people wouldn't care about getting more performance, other than to keep it locked at 60 with no dips.
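As a sanity check of that "golden line", the uplift needed to push those frame rates to a locked 60 FPS is simple arithmetic (the 45-50 FPS figures are the ones mentioned above, not measurements, and frame rate is assumed to scale linearly with GPU performance, which is a simplification):

```python
# Uplift required to turn the quoted 45-50 FPS range into a locked 60 FPS,
# assuming frame rate scales linearly with GPU performance.
for base_fps in (45, 50):
    needed_pct = (60 / base_fps - 1) * 100
    print(f"from {base_fps} FPS: +{needed_pct:.0f}% needed")
```

So a game at 50 FPS only needs +20%, while a game at 45 FPS needs a full +33%; an uplift of "more than 30%" just clears the whole range.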

Strix Point could be very interesting. It's confirmed RDNA3+, whatever that entails. If AMD wants any better graphics than 780M, they might need something drastic like more memory channels or 3D cache. The CPU side will be killer, with Zen 5, possibly alongside Zen 4c and totaling more than 8 cores. The CPU performance of recent APUs is already obscenely high.

Quote from: david salsero on March 03, 2023, 17:11:03
What everyone expects is AMD Zen 4 Phoenix: DDR5 + RDNA 3 + USB 4.0 + HDMI 2.1 + artificial intelligence to exploit ChatGPT to the fullest, with AI from the XDNA architecture developed by Xilinx, and everything at 4 nm versus Intel's 10 nm.

AMD Phoenix CPUs will surpass the Apple M2 in AI performance and efficiency, and that is what laptop buyers expect from Zen 4 Phoenix. If you want to know more, simply type "AMD Zen 4 Phoenix" into your search engine.

I will have to see the XDNA accelerator in Phoenix doing something useful before I believe it. Intel might be fumbling, but AMD has fumbled AI relative to Nvidia.

If I see Phoenix running Stable Diffusion "fast" out of system memory, even if it's worse than discrete desktop GPUs, then I'll be impressed and looking forward to generational improvements. By the way, you are not going to be running ChatGPT from so-called "OpenAI"; actual open source is where it's at. I guess you might have meant some client-side AI operations that build on server-side ChatGPT responses, but whatever.

NikoB

This is not the first time AMD has pulled this trick. Zen 3 had faster integrated graphics than Zen 3+. This is a well-known fact.

AMD deliberately does not produce scarce processor models in order to keep prices inflated.

As a result, AMD is the king of announcing paper processors that never appear in mass-market models. This is the third year already.

Intel has begun to take back market share in desktop and especially mobile processors. Keep it up!

Hotz

Quote:
... Moving to MLID's leak, the Ryzen 7 7840HS scores equal to 2,590 points in the 3DMark Time Spy GPU test.

... the sample Ryzen 7 7840HS in question was allegedly limited to 25 W.

... the Radeon 680M managed 1,940 points when limited to 25 W, making the Radeon 780M more than 30% faster.

Was this tested with the same RAM speeds? Because if not, it's a charade.

We've had similar leaks before, and there the gains came only from faster RAM.

To be honest, I believe it's the same in this case. Prove me wrong...
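For reference, the uplift implied by the quoted scores works out as follows (the scores are copied from the quote above):

```python
radeon_780m = 2590  # leaked 3DMark Time Spy GPU score at 25 W
radeon_680m = 1940  # last-gen Radeon 680M score at the same 25 W limit

uplift_pct = (radeon_780m / radeon_680m - 1) * 100
print(f"Radeon 780M vs 680M: +{uplift_pct:.1f}%")
```

That is where the "more than 30%" headline figure comes from; it says nothing, of course, about whether the RAM speeds were matched.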

NikoB

The main problem with all AMD integrated graphics is the terribly slow memory, which on average has 1.5x less bandwidth than Intel's, up to 2x less in laptops at peak, and loses 8-9x to the M2 Max's memory controller.

400 GB/s from Apple in the M2 Max versus a miserable 80 GB/s for the top Raptor Lake i9...

Think about this monstrous difference... Intel and AMD have outright lost to Apple. And even Nvidia now looks shameful against the backdrop of Apple's progress on ARM...
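The headline bandwidth gap is easy to reproduce from public specs. A rough sketch (the memory speeds and bus widths here are assumed typical configurations, not figures from this thread):

```python
def peak_bandwidth_gbs(transfers_mt_s: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s: transfers/s times bytes per transfer."""
    return transfers_mt_s * (bus_width_bits / 8) / 1000

# Typical x86 laptop: dual-channel DDR5-5600 on a 128-bit bus
x86 = peak_bandwidth_gbs(5600, 128)
# Apple M2 Max: LPDDR5-6400 on a 512-bit bus
m2_max = peak_bandwidth_gbs(6400, 512)

print(f"x86: {x86:.1f} GB/s, M2 Max: {m2_max:.1f} GB/s, ratio: {m2_max / x86:.1f}x")
```

That roughly reproduces the ~80 vs ~400 GB/s figures above, a gap of about 5x at peak.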

88&88

Quote from: NikoB
The main problem with all AMD integrated graphics is the terribly slow memory, which on average has 1.5x less bandwidth than Intel's, up to 2x less in laptops at peak, and loses 8-9x to the M2 Max's memory controller.

400 GB/s from Apple in the M2 Max versus a miserable 80 GB/s for the top Raptor Lake i9...

Think about this monstrous difference... Intel and AMD have outright lost to Apple. And even Nvidia now looks shameful against the backdrop of Apple's progress on ARM...

AMD, via Lisa Su's interview, already teased an APU with quad-channel bandwidth; maybe we'll see it in Strix Point? Hmm... knowing AMD, I don't think so. Perhaps they'll use 3D cache on their next generation; early comparisons showed X3D parts up to 3x better performance compared to the equivalent Ryzen without the 3D cache.
So Strix Point is a promising APU. If they also use quad-channel memory, oh yeah, nobody will buy GPUs anymore 😁. But for AMD to create this kind of mini-PC APU will take years, for commercial and economic reasons, just so as not to cannibalize the PS5.

In fact, the PS6 may be out by 2025...
But the road toward the MI300 is clear.

NikoB

The increased L3 cache in Intel/AMD processors is just a crutch, like the SLC cache in an SSD: outside it, read/write speed immediately drops by an order of magnitude. That is not a real option.

They have two options:
1. Place 512-1024-bit HBM memory, 8-10 GB of it, on the SoC package as dedicated VRAM.
But in that case the whole system, even after the transition to PCIe 5.0, still ends up bottlenecked by the speed of the memory controller for its set of high-performance devices.

2. A 512+ bit memory controller, like Apple's in the M2 Max.
In this case, a minimum of 8 SO-DIMM channels would be required, or the channel width of a single SO-DIMM would have to grow to 256 bits or more.

All this greatly complicates the motherboards of laptops and PCs, and it is especially difficult to do well and reliably with socketed memory.
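The 8-channel figure in option 2 is just bus-width division, assuming the standard 64 bits of data per DDR5 SO-DIMM module (two 32-bit subchannels):

```python
BITS_PER_SODIMM = 64  # standard DDR5 SO-DIMM: two 32-bit subchannels

def modules_to_match(controller_bits: int) -> int:
    # Number of standard SO-DIMMs needed to populate the full bus width
    return controller_bits // BITS_PER_SODIMM

print(modules_to_match(512))   # modules for an M2 Max-class 512-bit bus
print(modules_to_match(1024))  # modules for the 1024-bit upper option
```

Eight or more socketed modules is exactly what makes routing such a board difficult to do reliably.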

And serialization of the interface will not help here, as it did in the transition from IDE to SATA or from DVI to DP. Memory has another problem: gradually increasing latency. Look at what is happening even with the L3 cache in modern processors; it's a disgrace! L3 cache latency has almost doubled in access time. Caches and memory are getting faster only in linear read/write/copy operations, not in random ones, where they already lose significantly to old DDR3 memory and to the L1-L3 caches of older processors.

As a result, we keep increasing only the linear speed of memory, not random access. It's getting worse and worse...

That's why I keep writing that silicon chips have no future. The entire current industry is already at a standstill. It is necessary to switch to photonics and other schemes for implementing RAM, which would make it possible to cut latency several times over and, at the same time, raise linear speeds by an order of magnitude right now. Apple did this, but at the cost of soldering down the memory.

If x86 comes to this, it becomes a problem for upgrading system memory when more is needed...

Folie

Quote from: david salsero on March 03, 2023, 17:11:03
AMD ZEN 4 Phoenix
Everywhere you look you find that the CPU has security holes for malware, and its AI makes smart phone calls and thereby competes with China.
