
Topic summary

Posted by _MT_
 - October 25, 2021, 14:33:03
Quote from: Sanjiv Sathiah on October 21, 2021, 11:40:25
@_MT_ Here's the link to Apple's page on the Radeon Pro Vega II Duo: [...]

"The Vega II graphics processors are connected internally using the included Infinity Fabric Link, which enables supported applications to transfer data directly between Vega II GPUs up to five times faster than PCI Express."

Highlights:
"Supports Infinity Fabric Link (integrated on board)"


Five times "faster" than PCIe doesn't mean "faster" than possible over a PCB. And since the text is aimed at consumers, it doesn't even mean that an individual link is five times "faster." If we assume x16 as a baseline, then simply using 80 links instead of 16 gives you five times the bandwidth, using the very same technology. Piece of cake. And if you use PCIe 3.0 as a baseline and 4.0 (or equivalent) internally, then 40 links would do the job. That's not the only possibility, but it's a trivial example. Coincidentally, I believe Infinity Fabric uses PCIe links between processors.
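The link arithmetic above can be sketched in a few lines (a rough sketch; the per-lane throughput figures are nominal PCIe values, not anything published by Apple or AMD):

```python
# Rough link arithmetic for the "five times faster than PCI Express" claim.
# Per-lane throughput figures are nominal PCIe values after 128b/130b encoding.
PCIE3_PER_LANE = 0.985  # GB/s per lane, PCIe 3.0 (8 GT/s)
PCIE4_PER_LANE = 1.969  # GB/s per lane, PCIe 4.0 (16 GT/s)

baseline = 16 * PCIE3_PER_LANE  # a plain x16 slot, ~15.8 GB/s
target = 5 * baseline           # "five times faster" than that baseline

# How many links hit the target at each generation's per-lane speed?
links_at_gen3 = round(target / PCIE3_PER_LANE)  # same technology, just more links
links_at_gen4 = round(target / PCIE4_PER_LANE)  # faster links, fewer of them

print(links_at_gen3, links_at_gen4)  # 80 40
```

So "five times PCIe" is reachable with nothing more exotic than a wider or faster bundle of ordinary links.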

Also, their use of "internally" refers to a module (a card), not a processor. Infinity Fabric is, in this case, between two processors within a module. Not within a processor. Which means it's going over a PCB, as is evident from the picture.
Posted by HEC3
 - October 22, 2021, 18:19:11
"This special fabric is much faster than connecting the two GPUs over a printed circuit board (PCB). Instead of a PCB connection, the fabric is made from silicon and with the wiring between the chips as small as in the chip itself; hence the awesome bandwidth." 
 
You are confusing the marketing name of an electrical interconnect protocol with the description of a physical implementation that is not even used by it.

Infinity Fabric is a protocol that AMD uses to connect multiple dies in multi-chip modules (such as in the Threadripper processors) via a regular package substrate not made of silicon, and ALSO to connect multiple GPUs/accelerators/CPUs via regular PCBs (like NVLink).

In none of those cases are they using a silicon interposer to connect the dies, as you are implying, although in a future implementation they may do so.
 
You are confusing an electrical protocol (Infinity Fabric) with a physical 3D/2.5D packaging method (such as TSMC's CoWoS or Intel's Foveros) that uses actual silicon to make the physical interconnections between multiple dies.
 
And in any case, this discussion is pointless for Apple, as they are not going to use an AMD-only protocol for their chips. If Apple glues four M1 dies together, they will most likely develop their own electrical communication protocol and outsource the physical implementation to TSMC using CoWoS.
Posted by Sanjiv Sathiah
 - October 21, 2021, 11:40:25
@_MT_ Here's the link to Apple's page on the Radeon Pro Vega II Duo: https://www.apple.com/au/shop/product/MW732ZA/A/radeon-pro-vega-ii-duo-mpx-module

"The Vega II graphics processors are connected internally using the included Infinity Fabric Link, which enables supported applications to transfer data directly between Vega II GPUs up to five times faster than PCI Express."

Highlights:
"Supports Infinity Fabric Link (integrated on board)"
Posted by Sanjiv Sathiah
 - October 21, 2021, 11:35:53
@_MT_ Yes, the picture shows two packages on a PCB - but you are incorrect in suggesting that the signals between the GPUs are going through the PCB. The AMD-supplied picture is also clearly annotated to show that the GPU dies are communicating through the Infinity Fabric embedded within the PCB. This is the version of the fabric interconnect that sits outside the package, connecting them, as you describe. Read up on the Radeon Pro Vega II Duo and you will find I am 100 percent correct.

Fabric is a relatively new technology when used in this way, and in the way Apple has used it to scale up the M1 to the M1 Pro and then the M1 Max. This is why Johny Srouji called it out in particular during the Apple Event. I am now using the article to explain this term to a wider readership in a simple and accessible way.

I am also using it to point to the two ways Apple could scale its chips up further: either with fabric interconnect within the chip, as in the M1 Pro and M1 Max, or by adopting an approach similar to the one AMD has taken here with the GPU, but with Apple's SoCs connected externally over fabric.

Not all our readers are aware of silicon interconnect fabric and its implications for the way Apple might further scale its chip designs for a workstation computer like the Mac Pro.
Posted by _MT_
 - October 21, 2021, 10:28:18
And the overall tone is... I don't know. What is the point of the article? Fabrics are normal. All the cores have to communicate with shared resources, the outside world and also with each other. How did you imagine a processor works? You shouldn't be surprised by that word. And if we are expecting a chiplet design for their larger processors, then that implies a silicon interconnect fabric, like AMD uses. The advantage over big monolithic designs lies in higher yields and therefore lower expenses for the manufacturer. I don't want to insult you, but it really reads to me like you've got no clue - you don't fully understand the words you're using. You act like something new was discovered, but I don't see it. For me, the pleasant news was the 8/2 split between CPU cores. That's good and I wasn't daring to hope for it. I certainly hoped they wouldn't go beyond the four efficiency cores that the M1 has. The reduction to two is nice, especially for desktops.
Posted by _MT_
 - October 21, 2021, 10:06:34
I don't think you know what you're talking about. You say that: "Before the M1 series of chips, the first Apple device to feature silicon interconnect fabric was the current Mac Pro. Its AMD Vega Pro II Duo GPU features two discrete Vega GPUs connected with AMD's Infinity Fabric connecting them together to work effectively as one super powerful GPU over a connection with an 84GB/s bandwidth. This special fabric is much faster than connecting the two GPUs over a printed circuit board (PCB)." Yet the picture shows two packages on a PCB. Surely, signals are going through a PCB. And big bandwidths are possible on a PCB, as evidenced by RAM.

While silicon interconnect fabrics do exist, this is not an example. There are, essentially, two flavours of Infinity Fabric. One is inside a package and interconnects all the different modules that make up a processor. That one is silicon interconnect fabric. And one is outside a package and connects packages together, like in a dual-socket Epyc server. But that one is definitely not silicon interconnect fabric. If there is a silicon interconnect fabric, it's inside a processor, not between them. In the case of Infinity Fabric, it's unified and acts as one fabric, but one flavour uses silicon links and the other uses copper links inside a PCB.
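The "big bandwidths are possible on a PCB" point is easy to sanity-check with nominal memory figures (a sketch; the DDR4 data rate is the standard nominal value, and 84 GB/s is the figure quoted in the article for the Infinity Fabric Link):

```python
# Sanity check: PCB-level bandwidth, as evidenced by RAM, is in the same
# league as the quoted 84 GB/s Infinity Fabric Link figure.
DDR4_3200_CHANNEL = 3.2e9 * 8 / 1e9   # one 64-bit channel: 25.6 GB/s
six_channels = 6 * DDR4_3200_CHANNEL  # e.g. a six-channel server CPU

INFINITY_FABRIC_LINK = 84.0           # GB/s, figure quoted in the article

print(round(six_channels, 1), six_channels > INFINITY_FABRIC_LINK)  # 153.6 True
```

In other words, ordinary copper traces on a PCB already carry well over 84 GB/s to RAM, so that number alone doesn't imply a silicon interposer.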
Posted by Redaktion
 - October 21, 2021, 08:36:43
Apple's new M1 Pro and M1 Max silicon is incredibly powerful. Apple alluded to how it was able to scale these chips up from the M1 using what it called 'fabric'. This fabric points the way to how Apple will further scale up its chip architecture for the Mac Pro.

https://www.notebookcheck.net/Apple-s-use-of-fabric-in-the-M1-Pro-and-M1-Max-chips-points-how-it-will-scale-up-its-chips-for-the-next-Mac-Pro.574212.0.html