It starts to get interesting: at almost 600 Watt TDP, how much further can NVIDIA increase power consumption for the next generation, and do they even want to? On the same N4P node, another 30+% gen-over-gen improvement would mean growing the GPU die and its power consumption by roughly that same 30+%: 575 W * 1.3 ≈ 750 W. Even more people would complain. Ofc, NV will be using a better node, like N3P or N2P. (PS: NV may again slap a "TSMC 3N process (3 nm custom designed for Nvidia)" ("not to be confused with N3") tag on the new gen, just to pretend it isn't really a generic N3P / N2P node underneath. Either a 5-10% difference in certain characteristics (not power efficiency) is enough to warrant the "custom designed" fake-it-till-you-make-it tag, or there is really no change and they just want to sound important.)
TSMC's typical full-node improvement over the previous gen: roughly +15% performance at the same power, or 25-33% lower power at the same performance. (This is what I roughly remember; the exact numbers might differ.)
Imagine NV went two full nodes, N4P to N3P to N2P, and spent the gains on "at same performance" (instead of "at same power"): 575 W * 0.67 * 0.67 ≈ 260 W. Now that would be nice. Ofc, there would be no performance improvement, but I personally would be fine with that; I'd probably be in the minority, though.
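Just to make the back-of-the-envelope math explicit, here's a small Python sketch of both scenarios. The ~30% same-node scaling and the ~0.67x-per-full-node power factor are my rough recollections from above, not official TSMC or NVIDIA figures.

# Rough TDP projections; scaling factors are the commenter's estimates, not official numbers.
CURRENT_TDP_W = 575  # RTX 5090 board power

# Scenario 1: stay on N4P and chase another ~30% gen-over-gen performance
# by scaling the die and the power up by roughly the same factor.
same_node_tdp = CURRENT_TDP_W * 1.30

# Scenario 2: jump two full nodes (N4P -> N3P -> N2P) and spend each node's
# gain entirely on power ("at same performance"), i.e. ~0.67x power per node.
two_node_shrink_tdp = CURRENT_TDP_W * 0.67 * 0.67

print(f"Same node, +30% perf:        ~{same_node_tdp:.0f} W")        # ~748 W
print(f"Two node shrinks, same perf: ~{two_node_shrink_tdp:.0f} W")  # ~258 W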
An Nvidia RTX 5090 is a dream purchase for many buyers, but it also carries risks. A gamer's report of the attached cable catching fire adds to the GPU's troublesome track record. Critics believe the 12V-2x6 connector failed to handle the immense power draw, but a subpar PSU may also be a factor.