NotebookCHECK - Notebook Forum

Title: Merger between CPUs and RAM proposed by SK Hynix CEO
Post by: Redaktion on March 23, 2021, 18:01:34
In his keynote for the Institute of Electrical and Electronics Engineers' International Reliability Physics Symposium, SK Hynix CEO Seok-Hee Lee talks about the inevitable convergence of memory and logic, the CXL standard that could replace PCIe as a faster link between the CPU and graphics cards or smart memory interfaces, and the road to 600-layer NAND chips.

https://www.notebookcheck.net/Merger-between-CPUs-and-RAM-proposed-by-SK-Hynix-CEO.528998.0.html
Title: Re: Merger between CPUs and RAM proposed by SK Hynix CEO
Post by: MBV on March 23, 2021, 18:34:39
Well, I guess it is nothing new, just an intensifying trend of bringing as much memory as close to the compute as possible. The latest examples of this are Apple's M1, AMD's Infinity Cache, etc. Transferring data externally costs a lot of time and, especially, energy.

All of them (Intel, Nvidia, AMD, Apple, etc.) will work towards on-die or on-chip memory to improve performance and energy efficiency.

We will see server chips with HBM and CPUs with big L4 caches using 2.5D/3D stacking. MRAM and the like might also become more popular.

By 2025 this might be more common than a lot of people expect right now...
Title: Re: Merger between CPUs and RAM proposed by SK Hynix CEO
Post by: kek on March 23, 2021, 18:50:42
Hmmm, sorry, but I'm not on board with soldered memory, at least in the laptop/desktop space.

They talk all about speed and whatnot, but is it really that noticeable in day-to-day activities? Are we sure most programs are taking advantage of all the resources they have available?
Title: Re: Merger between CPUs and RAM proposed by SK Hynix CEO
Post by: Anonymousggg on March 23, 2021, 19:03:37
Quote from: kek on March 23, 2021, 18:50:42
Hmmm, sorry, but I'm not on board with soldered memory, at least in the laptop/desktop space.

They talk all about speed and whatnot, but is it really that noticeable in day-to-day activities? Are we sure most programs are taking advantage of all the resources they have available?

It's not about soldered memory. It's ultimately about putting RAM layers nanometers or microns away from a CPU layer. That will boost performance and power efficiency by orders of magnitude. Check out 3DSoC.

Even if they do this, you could still have a larger amount of RAM away from the CPU in a traditional DIMM form factor.
Title: Re: Merger between CPUs and RAM proposed by SK Hynix CEO
Post by: Wereweeb on March 23, 2021, 23:10:50
If you consider this "soldered memory", then I regret to inform you that all CPUs have "soldered memory". It's called the CPU cache, and it has been around for many decades.

The control over the conditions and tolerances of in-package memory is much, much tighter than for PCB-soldered DRAM inside a poorly designed hump of junk with a keyboard and a screen (a.k.a. a "consumer laptop").

The only "revolutionary" thing here will be bringing compute into the memory, a stopgap measure to increase energy efficiency while we still rely on electrical memory buses. But it's not a new idea.

Bringing memory closer to the CPU has also been done to death (see: Broadwell); it's just that it makes much more sense now.
Title: Re: Merger between CPUs and RAM proposed by SK Hynix CEO
Post by: ariliquin on March 24, 2021, 08:31:56
Yes please, why is it taking so long? Hurry up.
Title: Re: Merger between CPUs and RAM proposed by SK Hynix CEO
Post by: _MT_ on March 24, 2021, 10:23:46
The problem with something like HBM is going to be cost, so we're probably looking at a cache. And in that case, I wouldn't be surprised if main memory moved away from DDR SDRAM and towards volatile flash. Volatile, because it has much better endurance than persistent flash; persistent writes are responsible for most of the wear. After all, volatile flash is already being used in servers, with standard DDR SDRAM acting as a cache, because flash is a lot cheaper. There is a lot of appetite for capacity in the server world, even at the expense of latency (large SDRAM modules also sacrifice latency for capacity).
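
A toy illustration of that trade-off (the prices, capacities, latencies and hit rate below are purely made-up assumptions for the arithmetic, not real figures): for a fixed budget, a small DRAM cache in front of a much cheaper tier buys several times the capacity, while the average access latency only degrades by the miss fraction.

def effective_latency(hit_rate, fast_ns, slow_ns):
    # Average latency of a cached tier: hits are served by the fast DRAM,
    # misses fall through to the slower, cheaper tier.
    return hit_rate * fast_ns + (1 - hit_rate) * slow_ns

budget = 10_000                      # arbitrary budget units
dram_cost, cheap_cost = 4.0, 0.5     # assumed cost per GB, not real prices
dram_only_gb = budget / dram_cost
cache_gb = 512                       # small DRAM cache in the tiered setup
cheap_gb = (budget - cache_gb * dram_cost) / cheap_cost

print(f"DRAM only: {dram_only_gb:.0f} GB at ~100 ns")
print(f"Tiered:    {cache_gb + cheap_gb:.0f} GB at ~"
      f"{effective_latency(0.9, 100, 1000):.0f} ns average (90% hit rate assumed)")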

Sure, you can change the topology: split a big processor with many cores into smaller packages and then integrate them onto memory modules (thus creating compute modules). Yes, you'll have compute cores closer to memory, but in doing so, you're also putting cores further apart from each other. In some applications, this would be perfect. In others, you've got to deal with a lot of core-to-core communication, access to shared memory, and memory consistency.

Historically, there has been a trade-off between latency and capacity. That's why modern processors have multiple levels of caches; even RAM can be seen as a cache for persistent storage. Ultimately, it's a compromise, and the benefit depends on the workload and how well prefetching works. Sometimes you can do a very good job of hiding latency, and reducing something that's already hidden doesn't do you much good. There is also throughput, and if you're bound by that, it generally dictates how many cores it makes sense to install; it's more a question of compute density. Being throughput-limited means fewer cores per socket, which means fewer cores per rack, which means more racks for the same compute power, which means more floorspace and longer cables.
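
A back-of-the-envelope sketch of why cached or hidden latency blunts the gains (all hit rates and cycle counts are assumed, illustrative values): once most accesses are served from the on-die caches, even a large cut in main-memory latency barely moves the average access time.

# Rough average memory access time (AMAT) with assumed hit rates and latencies.
def amat(l1_hit, l2_hit, l3_hit, l1=4, l2=12, l3=40, mem=200):
    # Cycles per access, walking down the cache hierarchy on each miss.
    return l1 + (1 - l1_hit) * (l2 + (1 - l2_hit) * (l3 + (1 - l3_hit) * mem))

baseline = amat(0.95, 0.80, 0.70)            # conventional DRAM, ~200 cycles away
stacked = amat(0.95, 0.80, 0.70, mem=60)     # hypothetical close-coupled memory
print(f"{baseline:.2f} vs {stacked:.2f} cycles per access")  # ~5.60 vs ~5.18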

I don't know how relevant this is to consumers. Unless they want to go the integrated path like Apple, which would ultimately mean less choice for consumers.
Title: Re: Merger between CPUs and RAM proposed by SK Hynix CEO
Post by: _MT_ on March 24, 2021, 10:51:41
Quote from: Anonymousggg on March 23, 2021, 19:03:37
It's not about soldered memory. It's ultimately about putting RAM layers nanometers or microns away from a CPU layer. That will boost performance and power efficiency by orders of magnitude. Check out 3DSoC.

Even if they do this, you could still have a larger amount of RAM away from the CPU in a traditional DIMM form factor.
Surely that's nonsense. That would imply the CPU is idling at least 99% of the time; only then could removing the idling completely give you two orders of magnitude of improvement. But even L1 cache has latency. It would also imply a workload where very little work is done on any given chunk of data and which is probably jumping all over the place, defeating prefetching. That usually indicates poor software design, although there are problems that simply aren't cache friendly. And here is the rub: it's still going to be slower than L1, L2 or even L3 (at least in the case of HBM). Those caches are on the die already, and the presence of HBM won't make them any faster, which puts a limit on what improvement is achievable.
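
To put numbers on that objection (a minimal Amdahl-style sketch; the stall fractions are assumptions, not measurements): if a fraction f of the runtime is spent stalled on memory, removing those stalls entirely speeds things up by at most 1/(1 - f), so a 100x gain already requires the core to be stalled 99% of the time.

# Upper bound on speedup from eliminating memory stalls completely.
def max_speedup(stall_fraction):
    # Runtime shrinks from 1.0 to (1 - stall_fraction) if every stall vanishes.
    return 1.0 / (1.0 - stall_fraction)

for f in (0.5, 0.9, 0.99):
    print(f"stalled {f:.0%} of the time -> at most {max_speedup(f):.0f}x faster")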

Yes, we've been trying to have as much memory as close to a processor as we can afford since forever. We're limited by technology. And there are applications where this is interesting. But big desktop computers are not the prime candidate. And I can't imagine the "orders of magnitude" increase. Even if I take it in the most modest meaning of two orders.
Title: Re: Merger between CPUs and RAM proposed by SK Hynix CEO
Post by: _MT_ on March 24, 2021, 10:59:27
Quote from: _MT_ on March 24, 2021, 10:23:46
I don't know how relevant this is to consumers. Unless they want to go the integrated path like Apple, which would ultimately mean less choice for consumers.
I should probably clarify that we should distinguish laptops and SFF desktops from big desktops. This is certainly interesting when it comes to integrated GPUs; those can really benefit from something like HBM. So, it can be relevant to many consumers.
Title: Re: Merger between CPUs and RAM proposed by SK Hynix CEO
Post by: oliv on March 25, 2021, 14:06:34
Maybe they mean something like this:
www.upmem.com/technology/

where the CPU (here a custom-made one) sits right inside the DDR memory.