Topic summary

Posted by NikoB
 - July 29, 2023, 15:00:31
The trouble is that a genuinely new level of virtual reality in games requires not 7-10 times but 10,000-70,000 times more performance from consumer chips, and from the mainstream tier, not just the top segment. Silicon is at a dead end. That is a fact.
Posted by LL
 - July 28, 2023, 18:50:03
The 1080 Ti spends 14.8x more time rendering the three scenes and has a 250 W TDP, while the 4090 only goes up to a 450 W TDP, which is not even double the power.

So the 4090 is more than 7x more economical than the 1080 Ti in Blender GPU rendering.
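LL's efficiency argument can be checked with a little arithmetic. A minimal sketch, using only the scores and TDPs quoted in this thread and treating TDP as a rough proxy for actual power draw:

```python
# Blender benchmark scores quoted in this thread (higher = faster).
score_4090 = 13081.58
score_1080ti = 879.0

# Board power limits quoted in this thread (watts); a rough proxy for draw.
tdp_4090 = 450.0
tdp_1080ti = 250.0

# For a fixed workload, render time scales as 1/score,
# so energy for the job scales as power / score.
energy_ratio = (tdp_1080ti / score_1080ti) / (tdp_4090 / score_4090)
print(f"The 4090 is about {energy_ratio:.1f}x more energy-efficient")
```

This works out to roughly 8x, consistent with the "more than 7x" claim, though real-world draw under a rendering load can differ from TDP on both cards.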
Posted by LL
 - July 28, 2023, 18:44:11
Quote: "gpu performances have been increasing because power levels are now sky-high"

You don't know what you're talking about. I'll say this plainly and simply for clarity.

How much energy do these two cards spend rendering the Blender benchmark: a 4090 versus, let's say, a 1080 Ti?

4090: 13081.58 points
1080 Ti: 879 points

So the 4090 would need to draw more than 14.8x the power for the two cards to be equal in energy spent while rendering the three scenes of said benchmark.
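The 14.8x figure follows directly from the two scores above. A quick check:

```python
# Blender benchmark scores quoted above (higher = faster).
score_4090 = 13081.58
score_1080ti = 879.0

# A score ratio of ~14.9x means render time drops by the same factor,
# so the 4090 could draw ~14.9x the power before total energy broke even.
speedup = score_4090 / score_1080ti
print(f"{speedup:.1f}x")
```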



Posted by NikoB
 - July 28, 2023, 12:22:43
www.notebookchat.com/index.php?topic=175627.0
Posted by hehehaha
 - July 28, 2023, 03:59:44
Quote from: NikoB on July 27, 2023, 14:10:23
This is all useless due to the shameful speed of the x86 memory controllers. What's the point of super-fast VRAM if seamless level loading is limited by system memory bandwidth, which is more than 14 times lower than the speed of the future 5090?

The shame of x86 is growing. x86 is at a dead end due to architectural flaws.

Meanwhile, servers are already making full use of HBM memory with speeds of more than 700 GB/s.

x86 is an increasingly unbalanced architecture in which literally everything is crudely patched together out of workarounds, like the senseless attempts to enlarge the L3 cache (they have already reached L4). For sustained performance it is all useless until RAM bandwidth is raised by at least 5x in one step, as on the Apple M2 Max.

System RAM only limits load times, and even that would be reduced with DirectStorage in Windows 11. Otherwise, CPU limitations play a bigger role than RAM alone. Any increase in L3 cache means an increase in memory access latencies, and addressing L3 cache latencies means higher power consumption even at idle.

In past years GPU performance would never have increased at this rate; GPU performance has been rising because power levels are now sky-high. A decade ago it would never have been acceptable for a GPU to consume 600 W. If we allowed CPUs to consume 600 W too, I'm sure there wouldn't be any issue with cache or RAM, but not many gamers would accept a 1500 W PC just to run Counter-Strike.
Posted by LL
 - July 27, 2023, 19:25:15
To the writer: the opening text says the card will arrive in 2023 (I believe a typo), but the main text says 2025.
Posted by NikoB
 - July 27, 2023, 14:10:23
This is all useless due to the shameful speed of the x86 memory controllers. What's the point of super-fast VRAM if seamless level loading is limited by system memory bandwidth, which is more than 14 times lower than the speed of the future 5090?

The shame of x86 is growing. x86 is at a dead end due to architectural flaws.

Meanwhile, servers are already making full use of HBM memory with speeds of more than 700 GB/s.

x86 is an increasingly unbalanced architecture in which literally everything is crudely patched together out of workarounds, like the senseless attempts to enlarge the L3 cache (they have already reached L4). For sustained performance it is all useless until RAM bandwidth is raised by at least 5x in one step, as on the Apple M2 Max.
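For context, the gap NikoB is pointing at can be estimated with the standard formula bandwidth = (bus width in bits / 8) x per-pin data rate. A sketch under assumed figures: the 28 Gbps GDDR7 pin speed and the dual-channel DDR5-5600 system configuration below are illustrative assumptions, not numbers from the article.

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak theoretical bandwidth in GB/s: pins * per-pin rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# Rumored RTX 5090: 512-bit bus; 28 Gbps GDDR7 is an assumed pin speed.
vram = bandwidth_gbs(512, 28.0)

# A typical desktop: dual-channel (128-bit) DDR5-5600 system memory.
sysram = bandwidth_gbs(128, 5.6)

print(f"VRAM {vram:.0f} GB/s vs system RAM {sysram:.1f} GB/s "
      f"-> {vram / sysram:.0f}x gap")
```

Under these assumptions the gap comes out around 20x, so "more than 14 times" is plausible, though the exact ratio depends entirely on which memory speeds you assume on each side.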
Posted by Redaktion
 - July 27, 2023, 13:51:17
A new leak from Kopite7kimi says the GeForce RTX 5090 could feature a 512-bit memory bus. The graphics card is due to launch sometime in 2023. It is said to be manufactured on TSMC's 3 nm node and to feature GDDR7 memory.

https://www.notebookcheck.net/Nvidia-GeForce-RTX-5090-could-feature-a-significantly-higher-memory-bus-than-the-RTX-4090.736793.0.html