Quote
For starters, the handful of extra CUDA cores might not translate into much of a performance uplift, and the extra power requirements might result in melted components.
10% more cores will result in maybe 3-5% more perf and 10% higher power consumption. 575W * 1.1 = 632.5W. Really interesting how far they can push GPU power consumption. Make the connector even smaller and push 1000W through it. *grabs popcorn*
I could see nVidia launch a $3499 5090Ti with 48GB GDDR7.
Quote from: opckieran on February 09, 2026, 17:17:24
I could see nVidia launch a $3499 5090Ti with 48GB GDDR7.
This makes sense as a prosumer AI GPU, filling the gap between the gaming GPUs and their 96GB enterprise cards.
A market for consumers who need the high power and higher VRAM for bigger models without having to pay enterprise prices: a market currently filled only by the DGX Spark in the same price bracket.
Quote from: Mitsie on February 09, 2026, 21:30:34
Quote from: opckieran on February 09, 2026, 17:17:24
I could see nVidia launch a $3499 5090Ti with 48GB GDDR7.
This makes sense as a prosumer AI GPU, filling the gap between the gaming GPUs and their 96GB enterprise cards.
A market for consumers who need the high power and higher VRAM for bigger models without having to pay enterprise prices: a market currently filled only by the DGX Spark in the same price bracket.
There'd definitely be a market for it too: top gaming SKU, large VRAM capacity, and high speed for pro/AI tasks. The DGX Spark is a neat niche product: huge VRAM capacity but limited processing speed due to thermal constraints, core count, and limited memory speed and bus width. Also, the lack of x86 limits the DGX Spark's software compatibility.
I never imagined the day would come when I'd look forward more to integrated graphics advancements (from Intel and AMD) than to new graphics card releases. But alas..
48 GB of VRAM is possible using 3 GB GDDR7 memory chips instead of the current 2 GB ones, but a consumer-priced 48 GB version would compete with the much more profitable RTX PRO 5000, so I don't see it happening yet ["late 2026"]. Then again, NVIDIA also released a 96 GB RTX PRO 6000, which was unexpected.

Still, I'm not sure I'd be interested in a 48 GB GPU when the power consumption is 575W+. I know one could probably reduce the TDP by 30% to 50%, but ideally I'd like to see a Max-Q version (though that would be unusual for a consumer GPU?) or the same chip on the next TSMC node shrink / N3P (though the same arch on a full node shrink is a rare occurrence). But 48 GB of VRAM per GPU is where it starts to get interesting for LLM self-hosting.
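The memory and power arithmetic in the post above can be sanity-checked in a few lines. This sketch assumes the RTX 5090's 512-bit memory bus and the standard 32-bit channel per GDDR7 chip (both publicly documented); everything else follows from the numbers already in the thread.

```python
# GDDR7 chips each occupy a 32-bit channel, so a 512-bit bus
# (as on the RTX 5090) hosts 16 memory chips.
BUS_WIDTH_BITS = 512
CHANNEL_BITS = 32
chips = BUS_WIDTH_BITS // CHANNEL_BITS  # 16

vram_2gb = chips * 2  # current 2 GB modules -> 32 GB (shipping RTX 5090)
vram_3gb = chips * 3  # 3 GB modules -> 48 GB (the hypothetical 5090 Ti)

# Power-limit side of the argument: what a 30-50% TDP cut looks like.
TDP_WATTS = 575
tdp_minus_30 = round(TDP_WATTS * 0.7)  # 402 W
tdp_minus_50 = round(TDP_WATTS * 0.5)  # 288 W

print(vram_2gb, vram_3gb, tdp_minus_30, tdp_minus_50)
```

So the 48 GB figure falls straight out of swapping module density on the same bus, with no board redesign implied; the TDP numbers show why a power-limited or Max-Q variant is attractive for self-hosting.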