The Total Graphics Power (TGP) figure for the Nvidia GeForce RTX 4090 has supposedly been confirmed. Apparently, it stands at a toasty 600 W, considerably higher than the 350 W of its Ampere counterpart, the RTX 3090. In addition, an astonishing figure of 800 W or more for the potential GeForce RTX 4090 Ti has also been suggested.
https://www.notebookcheck.net/Sizzling-GeForce-RTX-4090-TGP-allegedly-confirmed-at-600-W-with-potential-for-over-800-W-for-the-RTX-4090-Ti.607980.0.html
Quote: "allegedly confirmed"
!?
This is reminding me a lot of the wall Intel ran into with the Pentium 4 / NetBurst architecture. Sure, GPUs run natively parallel workloads, so we can keep scaling up core counts, but it seems like process shrinks aren't improving performance per watt the way Nvidia would hope. At this point the silicon looks about as dense as it can usefully get, so the only path toward greater performance is cramming more chips onto each card, and that's going to hurt fab output and push energy consumption even higher (some rough perf-per-watt numbers below). What's next, 10 MW PSUs required for 2028's RTX 8080 Ti?
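For a rough sense of that perf-per-watt math, here's a quick back-of-envelope sketch. The performance-doubling figure is my own assumption for illustration, not a leak; only the 350 W and 600 W TGPs come from the article above:

    # Back-of-envelope perf/W math. perf_scale is an assumed
    # generational performance multiplier, not a confirmed spec.
    def perf_per_watt_gain(perf_scale: float, old_tgp: float, new_tgp: float) -> float:
        """Relative perf/W change when performance scales by perf_scale
        and TGP moves from old_tgp to new_tgp (watts)."""
        return (perf_scale / (new_tgp / old_tgp)) - 1.0

    # Hypothetical: a 4090 that doubles 3090 performance at 600 W vs 350 W.
    gain = perf_per_watt_gain(perf_scale=2.0, old_tgp=350, new_tgp=600)
    print(f"perf/W change: {gain:+.0%}")  # -> perf/W change: +17%

So even if raw performance doubled, a 350 W to 600 W jump would only net about a 17% perf/W improvement, which is a far cry from what a full node shrink used to deliver.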
The M1 has shown that 5 nm can be power efficient, so Nvidia would only have itself to blame if it can't make Lovelace power efficient.
There is, however, a possibility that the 4090 and 4080 are halo cards pushed well past their efficiency sweet spot just for show against AMD's competition, while lower-tier cards keep Ampere's TDP levels or improve on them. Well, one can hope...