AMD launches 16C/32T Ryzen 9 7945HX3D APU: A Ryzen 9 7950HX3D for mobile with 128 MB L3 cache that makes its debut with the Asus Strix Scar 17 X3D

Started by Redaktion, July 28, 2023, 08:52:09


Redaktion

AMD is introducing its 3D V-Cache technology to gaming laptops with the launch of the Ryzen 9 7945HX3D. The Ryzen 9 7945HX3D is similar to the Ryzen 9 7945HX but with a 200 MHz lower base clock, a total of 128 MB of L3 cache, and a 55 W+ TDP envelope. The Ryzen 9 7945HX3D will debut together with the Asus Strix Scar 17 X3D later in August.

https://www.notebookcheck.net/AMD-launches-16C-32T-Ryzen-9-7945HX3D-APU-A-Ryzen-9-7950HX3D-for-mobile-with-128-MB-L3-cache-that-makes-its-debut-with-the-Asus-Strix-Scar-17-X3D.736107.0.html
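For readers who want to compare numbers like the 128 MB quoted here against their own machine, a minimal sketch that reads the standard Linux sysfs cache entries (assuming a Linux system; note that sysfs reports the caches visible to a single core, so on a multi-CCD part the L3 shown is the per-CCD slice rather than the chip-wide total):

[code]
# Minimal sketch: list the cache hierarchy visible to cpu0 via Linux sysfs.
# Assumes a kernel exposing /sys/devices/system/cpu/cpu0/cache/.
from pathlib import Path

cache_dir = Path("/sys/devices/system/cpu/cpu0/cache")
for index in sorted(cache_dir.glob("index*")):
    level = (index / "level").read_text().strip()          # 1, 2, 3, ...
    ctype = (index / "type").read_text().strip()            # Data / Instruction / Unified
    size = (index / "size").read_text().strip()             # e.g. "32K", "1024K"
    shared = (index / "shared_cpu_list").read_text().strip()
    print(f"L{level} {ctype:<11} {size:>8}  shared by CPUs {shared}")
[/code]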

NikoB

As I have written many times (and this is only a small part of it):
www.notebookchat.com/index.php?msg=544000
www.notebookchat.com/index.php?msg=544271
All these shameful L3 caches (and it has already gone as far as L4) are just meaningless crutches that work around the general RAM-bandwidth impasse of consumer x86 (unlike the server market, which moved to HBM long ago).

Both Intel and AMD are effectively milking a dead-end architecture with slow two-channel memory that has been obsolete for 5-7 years.

IT professionals understand this, and they understand that there has been practically no significant performance gain in laptops over the past 5 years if you cap TDP at 35 W as before, which is the maximum sane level for a laptop.

What will happen if the US/EU legally limit total laptop power consumption to 100 W? Will the scammers at Intel/Nvidia then sell new series with only a minimal performance difference over the old ones? It will certainly be easier for AMD, but it is at an impasse too.

Today's insanity with 250-300 W laptops has already hit a dead end. 500 W laptops are impossible, so we will have to be content with the same 250-300 W.

And as TSMC recently noted in a report, you shouldn't expect major improvements from "2 nm", just as experts (myself included) predicted: silicon, and the current variants of the von Neumann architecture, are approaching a technological dead end. And then what?

Only Apple today offers a truly balanced architecture (albeit overpriced) on Arm, with 5-6 times the memory bandwidth of the best x86 solutions in the consumer segment. Their processors simply don't need an L3 cache: their memory controller is about as fast as the L3 cache in x86 processors. The difference is that Apple has this speed across up to 96 GB of memory, while the infamous x86 has it across at most 192 MB of L3 cache. Feel the difference, if you have mastered the school arithmetic course. =)
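A rough back-of-envelope sketch of the peak-bandwidth arithmetic behind this comparison (the transfer rates and bus widths below are assumed nominal values; sustained bandwidth is lower on both sides):

[code]
# Theoretical peak memory bandwidth: bus width (bytes) x transfer rate (MT/s).
def peak_gb_s(bus_width_bits: int, transfer_rate_mt_s: int) -> float:
    return bus_width_bits / 8 * transfer_rate_mt_s / 1000

# Typical dual-channel x86 laptop: 2 x 64-bit DDR5-5600.
x86_dual_channel = peak_gb_s(128, 5600)   # ~89.6 GB/s

# 512-bit LPDDR5-6400 unified memory, as claimed for the Apple chip above.
apple_512bit = peak_gb_s(512, 6400)       # ~409.6 GB/s

print(f"dual-channel DDR5-5600: {x86_dual_channel:.1f} GB/s")
print(f"512-bit LPDDR5-6400:    {apple_512bit:.1f} GB/s")
# ~4.6x here; slower DDR4/DDR5 configurations widen the gap further.
print(f"ratio: {apple_512bit / x86_dual_channel:.1f}x")
[/code]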

Puiu

Quote from: NikoB on July 28, 2023, 12:20:26
As I have written many times (and this is only a small part of it):
www.notebookchat.com/index.php?msg=544000
www.notebookchat.com/index.php?msg=544271
All these shameful L3 caches (and it has already gone as far as L4) are just meaningless crutches that work around the general RAM-bandwidth impasse of consumer x86 (unlike the server market, which moved to HBM long ago).

Both Intel and AMD are effectively milking a dead-end architecture with slow two-channel memory that has been obsolete for 5-7 years.

IT professionals understand this, and they understand that there has been practically no significant performance gain in laptops over the past 5 years if you cap TDP at 35 W as before, which is the maximum sane level for a laptop.

What will happen if the US/EU legally limit total laptop power consumption to 100 W? Will the scammers at Intel/Nvidia then sell new series with only a minimal performance difference over the old ones? It will certainly be easier for AMD, but it is at an impasse too.

Today's insanity with 250-300 W laptops has already hit a dead end. 500 W laptops are impossible, so we will have to be content with the same 250-300 W.

And as TSMC recently noted in a report, you shouldn't expect major improvements from "2 nm", just as experts (myself included) predicted: silicon, and the current variants of the von Neumann architecture, are approaching a technological dead end. And then what?

Only Apple today offers a truly balanced architecture (albeit overpriced) on Arm, with 5-6 times the memory bandwidth of the best x86 solutions in the consumer segment. Their processors simply don't need an L3 cache: their memory controller is about as fast as the L3 cache in x86 processors. The difference is that Apple has this speed across up to 96 GB of memory, while the infamous x86 has it across at most 192 MB of L3 cache. Feel the difference, if you have mastered the school arithmetic course. =)


You are so wrong that it isn't even funny. Apple's approach may be great for power usage (the smaller process node helps a lot too), but it comes with a ton of restrictions and it loses in many metrics.

Without code written specifically for their platform, the M1/M2 would be way behind x86 PCs (and still is way behind in many areas). It's the same with consoles: there, the CPU delivers a lot more performance than it would in a regular PC.

NikoB

I am not mistaken about anything: I am an IT professional and I understand the bottlenecks of each architecture well at the system level.

x86 is a living corpse today. And silicon in general is already dead.

The TDP race can no longer continue.

So what will chipmakers do next to keep selling us products with almost the same performance? After all, it is physically impossible to push power consumption any higher in desktops, let alone in laptops.

And most importantly, it amuses me how the "greens" bury their heads in the sand on this topic and keep quiet about laptops that eat 250 W and PCs that consume 500 W or more. And there are millions, tens of millions of them.

Yet plasma TVs with 500 W+ consumption were banned. Why such double standards?

Or remember the greed of the crypto-mining crowd: a waste of the planet's resources and electricity, heaps of hardware feeding a Ponzi scheme for greedy businessmen. Again, where were the "greens" and the authorities who supposedly care about "ecology"?

Where corruption and lobbies are strong, and/or where there is no alternative, the "greens" and the authorities keep quiet. =)


Neenyah

Ahh yes, another sheet of NikoB(itching) about dead x86 and parroting Apple's supremacy.

But in the real world we have more than just words of praise; we have proof of how completely wrong that is and of how comically far behind Apple actually is, here from LTT:


"Apple fans, start typing your angry comments now..."

( youtu.be/buLyy7x2dcQ )

Cringe

@NikoB:

Could you give an actual real-life example where having additional memory bandwidth / faster memory controllers on x86 matters? Not synthetic benchmarks but real applications people use.

Cringe

Having a 1000-core CPU would be nice as well, but there are barely any apps that scale past 12+ cores besides encoding/rendering and niche stuff, so, yeah...

NikoB

Quote from: Cringe on July 28, 2023, 15:16:20
Could you give an actual real-life example where having additional memory bandwidth / faster memory controllers on x86 matters? Not synthetic benchmarks but real applications people use.

System memory is shared by all devices and by the software/OS, each for its own purposes.

It has been empirically proven that once a device consumes more than about 10% of the system memory bandwidth, other devices and the software/OS start to lag.

For example, 4K monitors connected to the integrated GPU lead to lags and micro-freezes when the system runs on single-channel memory.

In its datasheets, Intel strongly discourages using the integrated video decoders with single-channel memory.
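A rough, hedged estimate of how much bandwidth mere display scanout can consume in the 4K-on-iGPU scenario described above (the resolution, 4 bytes per pixel, 60 Hz refresh and DDR5-5600 are all assumed values; compositing and actual rendering traffic come on top of this, and sustained bandwidth is well below the theoretical peak):

[code]
# Bandwidth needed just to read a framebuffer out of system RAM every refresh.
def scanout_gb_s(width: int, height: int, bytes_per_pixel: int, refresh_hz: int) -> float:
    return width * height * bytes_per_pixel * refresh_hz / 1e9

per_4k60_monitor = scanout_gb_s(3840, 2160, 4, 60)    # ~2.0 GB/s per display

single_channel_peak = 64 / 8 * 5600 / 1000             # one 64-bit DDR5-5600 channel: ~44.8 GB/s
dual_channel_peak = 2 * single_channel_peak             # ~89.6 GB/s

for monitors in (1, 2, 4):
    need = monitors * per_4k60_monitor
    print(f"{monitors} x 4K60 scanout: {need:4.1f} GB/s  "
          f"({need / single_channel_peak:.0%} of single-channel peak, "
          f"{need / dual_channel_peak:.0%} of dual-channel peak)")
[/code]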

You need a professional education like mine to understand the reasons. The ordinary man in the street will not understand the problem until he runs into it firsthand.

Integrated GPUs are slow precisely because of the low (shamefully low) memory bandwidth of x86 platforms.

The general imbalance, now that there are 32 threads to feed, is obvious. PCIe 5.0 in AMD's 7x45 series processors is essentially a useless technology that the illiterate layman overpays for.

It is safe to say that RAM bandwidth should be 10 times higher than what the main consumer devices require so that the software/OS does not lag.

That automatically brings us to a minimum of 300 GB/s for today's x86 platform, versus the real 50-80 GB/s we have now.

Apple has already solved this problem radically with its excellent 400 GB/s from a 512-bit memory controller. The x86 camp trails pathetically behind, swallowing dust...

I described all of this in the links above; go read them.

Previously (until roughly 2010), ordinary people, fortunately for professionals, funded rapid PC progress with their mass purchases (they simply had no choice). Now ordinary people prefer to spend insane amounts on smartphones, and the PC/laptop market is essentially stagnating because it no longer has a mass sponsor of progress. And that is sad for professionals, who used to get all the best hardware much cheaper and sooner than they do now...

nikobisnotok

Quote from: NikoB on July 28, 2023, 15:36:02
Quote from: Cringe on July 28, 2023, 15:16:20
Could you give an actual real-life example where having additional memory bandwidth / faster memory controllers on x86 matters? Not synthetic benchmarks but real applications people use.
System memory is shared by all devices and by the software/OS, each for its own purposes.

It has been empirically proven that once a device consumes more than about 10% of the system memory bandwidth, other devices and the software/OS start to lag.

For example, 4K monitors connected to the integrated GPU lead to lags and micro-freezes when the system runs on single-channel memory.
Lmao no. Where has that been empirically proven? Show your data or sources, because baseless opinions and conjecture don't fly here.

Also, you are comparing very high-end Apple silicon, at least $1000 for the cheapest machine, to the absolute most budget x86 machines, so it's a false equivalence. There are a total of ZERO high-end x86 machines that run solely on integrated graphics AND single-channel memory. The situation you describe that results in "lags and microfreezes" only happens on the most budget laptops or setups, because those are the only machines running low-speed single-channel memory. I can guarantee that if Apple had an M chip running on >2000 MHz single-channel memory, it would have the EXACT same issue. Your argument was flawed and illogical from the start.


ariliquin

There are some seriously delusional comments here. Credit where credit's due: Apple silicon is groundbreaking in many ways. There will always be trade-offs in any design; Apple has them and so does x86. But what Apple and TSMC have developed is significant and worth acknowledging.

All systems benefit from software optimised for their architecture; this is a strength, not a weakness, and Apple is no different. Given how even software not designed for their silicon already performs, their ecosystem will only go from strength to strength as Apple and third-party developers optimise further.


NikoB

Quote from: nikobisnotok on July 28, 2023, 16:24:58
Lmao no. Where has that been empirically proven? Show your data or sources, because baseless opinions and conjecture don't fly here.

Also, you are comparing very high-end Apple silicon, at least $1000 for the cheapest machine, to the absolute most budget x86 machines, so it's a false equivalence. There are a total of ZERO high-end x86 machines that run solely on integrated graphics AND single-channel memory. The situation you describe that results in "lags and microfreezes" only happens on the most budget laptops or setups, because those are the only machines running low-speed single-channel memory. I can guarantee that if Apple had an M chip running on >2000 MHz single-channel memory, it would have the EXACT same issue. Your argument was flawed and illogical from the start.
These are all questions about the conscientiousness and competence of the authors of the reviews here on NB. Gradually, a wave of questions that are never answered on the merits leads to a complete loss of confidence in the reviews...

The same thing has happened on plenty of other sites in the past.

You are lying about Apple's high prices: x86 machines in the same class have long cost no less, yet they get thoroughly beaten on many metrics, which is a proven fact.

Micro-lags occur on expensive machines with single-channel memory too. Why should I argue with IT amateurs?

You look like a paid marketer for the companies producing all this shameful rubbish with inflated price tags. You have no real arguments proving otherwise, just more lies and speculation without evidence. I am sure you have never read a single processor datasheet in your life.

Against the backdrop of Apple's success, the x86 camp already looks technologically bankrupt.

Apple does not receive subsidies from the budget at taxpayers' expense, whereas Intel was rescued from its troubles only at their expense.

I'm glad that Apple has given the x86 camp such a powerful, shaming kick. Competition is good. So far, though, this looks like a complete defeat with far-reaching consequences for Intel, which controls more than 70% of the x86 market.

Quote from: ariliquin on July 29, 2023, 08:38:29
There are some seriously delusional comments here.
Yes, all the crazy ones who keep trying to argue with the fact that x86 has disgraced itself against the backdrop of Apple's technological success. Their laptops last much longer on battery, are quieter, and are still fast enough. Although their stubborn commitment to glossy screens and keyboards that are poorly suited to business and work does not do them credit. But that's Apple...

And x86 will keep losing ground, because gradually nobody will need the old x86 code, and new code only runs better elsewhere. Many chipmakers are now raising their heads on Arm: Qualcomm, MediaTek and others are already breathing down Intel's neck. That is why Intel hastily came up with a new x86 instruction set extension, APX, because it is losing outright to Apple. Even AMD loses more than 30% in performance at the same TDP, which is also a proven fact. And AMD is just a buffer against the antitrust regulators, deliberately left alive by Intel's bosses 20 years ago so that Intel would not be broken up by law. In reality, Intel and AMD have the same investment funds as beneficiaries. That breakup is what should have been done long ago, as with M$ and other companies with more than 50% market share.

Capitalism turned into banal imperialism as soon as it became profitable for the authorities.

IT Amatuer

Micro-freezes and lags when driving 4K on an iGPU aren't a problem exclusive to single-channel memory; they can happen for several reasons, such as an iGPU that is simply too weak (not enough shaders/execution units, clocks that are too low), terrible drivers, etc. Intel's iGPUs have been notorious for causing such issues in the past, even with dual-channel memory. Also, isn't DDR5 dual-channel these days, even on a single DIMM?

As for system memory bandwidth needing to be 10 times higher than what the connected consumer devices use so that the software/OS doesn't lag: the irony is that devices which use significant amounts of bandwidth also tend to require large pools of memory to begin with, and we know how much Apple likes charging an arm and a leg for more RAM.

Regarding the Apple chip with 400 GB/s and a 512-bit memory controller: this is the M1 Max, which sits in products priced around $/€3000-5000 depending on configuration. And there are no native Windows drivers for it, so if you want to run anything outside Apple's ecosystem, good luck making good use of that 400 GB/s. It's essentially a $/€3000-5000 toaster with 400 GB/s of bandwidth for Final Cut Pro X. There are no apps for this thing besides Apple's native first-party apps.

Unless you're a professional working in an audio/video industry that is locked into the Apple ecosystem, I don't see the point. Even the absurdly priced RTX 4080 notebooks are cheaper than that.

Also, I don't agree that this is an "x86 camp" thing when essentially nobody besides Apple is building 512-bit unified memory architectures either (and even then only in their top-end lineup). If anything, it's more a monetization problem. If AMD built such a mobile APU, how many people would be willing to spend $3k-5k on such a product? Because that's what it would cost without any form of software/subscription subsidization. It's not like Nvidia, which can build a 512-bit-bus RTX 5090 that is guaranteed to sell at any price because CUDA/AI users have nowhere else to go. And it's not like Apple users, who are stuck in that walled-garden ecosystem either. What I'm getting at is that AMD lacks a killer-app software ecosystem that locks users into its hardware the way Nvidia's and Apple's do.

It doesn't help that AMD doesn't enjoy the same brand loyalty as Nvidia/Apple either. On Steam, 80% of users are on Nvidia, and in Asia Nvidia is synonymous with PC gaming. In the creative scene almost everyone is on MacBooks. What does AMD have? They don't really have a market carved out where they hold a monopoly. Consoles, servers and handhelds, I guess? But that's all just hardware sales. Hardware doesn't generate much money; software does. The biggest tech companies in the world are all software companies.

NikoB

Quote from: IT Amatuer on July 30, 2023, 02:18:50
Micro-freezes and lags when driving 4K on an iGPU aren't a problem exclusive to single-channel memory; they can happen for several reasons, such as an iGPU that is simply too weak
No, this is 100% a bandwidth problem. Adding a second module and enabling dual-channel mode, which doubles the bandwidth, immediately eliminates all the micro-freezes and lag. This is not about games but about plain 2D desktop work.

Try it with an integrated GPU that officially supports up to four 4K monitors and get them running simultaneously on single-channel memory. You will already have problems with one, never mind two or more.

Quote from: IT Amatuer on July 30, 2023, 02:18:50
Also, isn't DDR5 dual-channel these days, even on a single DIMM?
No. A single DDR5 DIMM is split into two 32-bit subchannels, but the total bus width is still 64 bits, so it does not give you dual-channel bandwidth. But you're forgiven, since you are an IT amateur.

Quote from: IT Amatuer on July 30, 2023, 02:18:50
This is the M1 Max
No, M2 Max.

Quote from: IT Amatuer on July 30, 2023, 02:18:50
which sits in products priced around $/€3000-5000 depending on configuration
Show me at least one consumer or professional x86 laptop with at least 200 GB/s. There is no such thing. =)

Quote from: IT Amatuer on July 30, 2023, 02:18:50
I don't see the point.
You don't see the point precisely because you are an amateur without anything close to a systematic IT education.

Quote from: IT Amatuer on July 30, 2023, 02:18:50
Because that's what it would cost without any form of software/subscription subsidization.
The question is why AMD built 28 PCIe 5.0 lanes into the 7x45 series when more than 16 of them are not used at all in any laptop. Why did it put an expensive, useless block into the chip? Of course, you have no answer, just as there is no real data on how much a 512-bit memory controller and the memory for it actually cost.

Quote from: IT Amatuer on July 30, 2023, 02:18:50
It's not like Nvidia, which can build a 512-bit-bus RTX 5090 that is guaranteed to sell at any price because CUDA/AI users have nowhere else to go.
And it isn't.

Quote from: IT Amatuer on July 30, 2023, 02:18:50
It doesn't help that AMD doesn't enjoy the same brand loyalty as Nvidia/Apple either. On Steam, 80% of users are on Nvidia, and in Asia Nvidia is synonymous with PC gaming. In the creative scene almost everyone is on MacBooks. What does AMD have? They don't really have a market carved out where they hold a monopoly. Consoles, servers and handhelds, I guess? But that's all just hardware sales. Hardware doesn't generate much money; software does. The biggest tech companies in the world are all software companies.
AMD simply lost the GPU technology race to Nvidia through the fault of its management. Once upon a time they were the leaders... It's always management's fault. The fish rots from the head...

Bennyg1

Lol. So much trash from the Apple lovers here.
V-Cache on Zen 4 delivers roughly 2.5 TB/s; that's bigger than your 800 GB/s. "Single-channel DDR5" doesn't exist; even one stick is dual-channel. Laptops can dissipate 500 W+; they're called desktop replacements, and Clevo used to make them with desktop CPU sockets and dual MXM GPU slots. They're just big and chunky and need high-quality components and a purpose-built spec. I have one on my desk with a 9900K (it can run the CPU at up to 170 W and 200 W+ across the two GPUs; I've seen over 600 W on the wall meter during 10+ minute benchmarks) and it's still plenty fast at 6 years old. I use a DTR because I am impatient and need the power, I like moving around, and I decided 20 years ago that I can't hack being tethered to the same chair all day.

Apple sucks. They can build purpose-built silicon because they have a big enough captive market of zealots who will pay through the nose for it. But when it comes to flexibility, there is none. You chuck out what you have and buy a new, expensive one, which you show off wherever you can because you have to validate your decision to buy something that costs double an x86 laptop.

You can "create" on x86, maybe not quite as power-efficiently, but you sure as hell can't game on M1/M2. If they are so great, why aren't game devs developing for Apple?

x86 is flexible and dominant. If there is a market for accelerators, Intel will go that way and make it happen. But for now, an efficient repurposed server core with stacked 3D V-Cache is the best we can get on consumer platforms, and in case anyone still remembers what news post these comments are under, it's coming to laptops.

Another sucky Asus laptop, so we have to hope a proper brand will make a proper model capable of decent performance that'll last a few years, instead of this 250 W TBP rubbish.
