Gigabyte Aorus RTX 4090 Gaming Box review: Nvidia's fastest consumer graphics card is massively restrained by Thunderbolt

Started by Redaktion, January 28, 2024, 18:28:33


Redaktion

With the Gigabyte Aorus RTX 4090 Gaming Box, Gigabyte offers an eGPU based on the Nvidia GeForce RTX 4090. This allows you to easily connect the fastest consumer graphics card to your laptop or mini PC. But there are also some things to consider. In this review, we analyze exactly what these are and what performance you can expect.

https://www.notebookcheck.net/Gigabyte-Aorus-RTX-4090-Gaming-Box-review-Nvidia-s-fastest-consumer-graphics-card-is-massively-restrained-by-Thunderbolt.797915.0.html

NikoB

I have already written dozens of times that TB4/USB4 are a shameful anachronism, outdated before they were even implemented. The same goes for the upcoming USB4 2.0/TB5 - both are merely 2 times faster than USB4/TB4.

Zen 4 7x45-series processors have as many as 28 PCIe 5.0 lanes, of which more than 16 are simply hanging in the air! Why the hell did AMD even add this stupid PCIe 5.0 controller to Zen 4 if there is not a single laptop or video card with PCIe 5.0?

What prevented laptop manufacturers using the 7x45HX from bringing out a dedicated external x16 PCIe 4.0 port?! That takes up only the bandwidth of 8 PCIe 5.0 lanes! I.e., even with an external x16 PCIe 4.0 port, such laptops would still have more than 10 PCIe 5.0 lanes hanging in the air!
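
A rough sketch of the bandwidth math behind that claim (per-lane figures are approximate usable rates after encoding overhead; TB4's PCIe tunnel is capped near 32 Gbit/s, roughly a PCIe 3.0 x4 link):

# Approximate usable GB/s per PCIe lane, after 128b/130b encoding overhead.
PER_LANE_GBS = {"pcie3": 0.985, "pcie4": 1.969, "pcie5": 3.938}

def link_bw(gen: str, lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBS[gen] * lanes

print(f"TB4/USB4 PCIe tunnel (~PCIe 3.0 x4): {link_bw('pcie3', 4):.1f} GB/s")
print(f"Desktop PCIe 4.0 x16:                {link_bw('pcie4', 16):.1f} GB/s")
print(f"PCIe 5.0 x8 (= PCIe 4.0 x16):        {link_bw('pcie5', 8):.1f} GB/s")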

SECOND problem:
Manufacturers should have abandoned copper wiring in eGPU cables and docking stations in general long ago!

It's time to switch to optics at 500 Gbit/s+ over cables of up to 50-100 m!

And the transition to such ports will require RAM speeds at the HBM3 level - 500 GB/s-1 TB/s.

The x86 platform is at a complete technological dead end!

George

Fairly to mostly useless overkill.

Sure, SOME Intel-powered systems have TB. Even a few AMD systems have the additional TB chip to enable this functionality.

However - to what end?

Just about ANY system that ONLY has an iGPU could benefit SOME from the addition of an eGPU. Just HOW MUCH benefit is largely determined first by how WEAK the iGPU is to begin with.

The MAJOR DIFFERENCE being that the eGPU 'unlocks' or enables the user to use programs that they were unable to before, or may have been able to 'run' but that were unplayable/unusable with only their iGPU.

While IMHO the 4090 is complete overkill, where a 4050 or 4060 would have 'opened the door' to current-generation driver features and benefits. (They COULD have tossed a 4060 16GB in there and called it good!)

With today's connection technologies an eGPU will never perform as well as a desktop dGPU, so why bother comparing them?

Neenyah

Good comment overall, George. The issue with simply going the overkill route is that you pay too much and don't get that much in return. The 3070 Ti would perform almost the same as the 4090 here, for a far lower price. That being said...

Quote from: George on January 28, 2024, 21:24:05With today's connection technologies an eGPU will never perform as well as a desktop dGPU, so why bother comparing them?

...because you get better performance (for less money!) with a laptop + eGPU than with a whole gaming laptop, and you also oftentimes get more VRAM. I'll give you my example with a Razer Core X + RX 6800 XT: I can play quite literally every existing game I want to play natively (so no FSR/DLSS needed) at 1440p ultra/maxed settings at 60+ fps - with my ThinkPad X1 Carbon. Both on its internal screen and on an external 1440p 240 Hz one (30-50% more fps).

For comparison, Shadow of the Tomb Raider with the iGPU gets a whole 4 fps (yes, 4) at 1440p fully maxed detail; with the eGPU it's 113 on the internal and 142 on the external screen with the same graphics preset. Quite a difference, no? Now compare that with an RTX 4070 laptop - 107 fps average, 6-35 fps less than my setup.

Currently the cheapest laptop in the EU with the 4070 is this MSI Katana 17 at almost 1400€. I paid 280€ for the brand-new Razer Core X + 490€ for the 6800 XT, so 630€ less to get more performance and twice the VRAM (16 GB vs 8 GB in the 4070) than with the cheapest (and fairly crappy) 4070 laptop. Plus I can sell that GPU, replace it whenever I want, plug it into my desktop PC when I need it... you name it. Is my 6800 XT slower than it would be in a desktop PC? Yes, of course. Is it about as fast as an RTX 4080 laptop? Yep. And we know how expensive those 4080 laptops are (2600€-ish for the cheapest one in the EU)...
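
To spell out the price math (all prices are the ones from above, rounded):

# Price comparison from the post above: eGPU route vs the cheapest RTX 4070 laptop.
core_x, rx_6800_xt, katana_17 = 280, 490, 1400  # EUR

egpu_total = core_x + rx_6800_xt
print(f"eGPU route: {egpu_total} EUR, {katana_17 - egpu_total} EUR less than the laptop")
print("VRAM: 16 GB (6800 XT) vs 8 GB (4070 laptop)")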

So if you need a laptop to fit your work and lifestyle, but you don't want to get a full-fledged (large and heavy) gaming laptop or another PC, you can do perfectly well with different eGPU options if you want to game at home. Will desktops still be much faster? Absolutely. But if you don't want or need a desktop... (I use my desktop pretty much just for CS2, the laptop for everything else, including work.)

But yeah, the 4090 is definitely massive overkill given current Thunderbolt limitations.

-

Edit: It's actually misleading to focus on raw fps with an eGPU, though; you need to play at the highest possible resolution to shift as much of the load to the GPU as possible, so ideally 1440p or higher. But then you can't get super-high fps in most triple-A games, for obvious reasons. If you go too low in resolution, so 1080p or below, you are either CPU-bottlenecked or Thunderbolt-bottlenecked (or both). You basically want to hold the GPU load at 90%+ (but below 99-100% to avoid input lag), because with anything less than that you are either playing at too low a detail level or resolution, or you are bottlenecked by the CPU/Thunderbolt/both (see the sketch at the end of this post). That is clearly shown in this article, where the 4090 delivers similar or higher fps at 4K than at the much lower 1440p in most games. Just check Star Wars Jedi: Survivor - 66.8 fps at 1080p low vs 60.5 fps at 4K maxed.

The point of an eGPU is not to get billions of fps (120-200 is the max in pretty much all games, with some exceptions) but to play anything you want at maxed settings at a stable 60-100 fps. You don't really need 300 fps to play Stray or Cyberpunk; 70-90 is perfectly sufficient when everything is maxed at 1440p or even 4K. 1% lows are not a problem at all with a good GPU in an eGPU setup, though; 60+ fps is guaranteed in almost all titles.
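
Here is that rule of thumb as a small sketch (the thresholds are just my rule-of-thumb numbers from above, nothing official):

# eGPU tuning heuristic: be GPU-bound (90%+) but below saturation (99-100%).
def diagnose(gpu_load_pct: float) -> str:
    """Classify an eGPU gaming session by its average GPU utilisation."""
    if gpu_load_pct >= 99:
        return "saturated: expect input lag, lower a setting or two"
    if gpu_load_pct >= 90:
        return "sweet spot: GPU-bound without saturation"
    return "underused: raise resolution/details, or CPU/Thunderbolt bottleneck"

for load in (70, 93, 100):
    print(f"{load}% GPU load -> {diagnose(load)}")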

davidm

When is Notebookcheck going to grow up and stop being obsessed with "gamers"? Are a lot of readers really spending all their time playing Shadow of the Tomb Raider, or is it just this weird obsession with frame rates, in reviews that haven't evolved in the four decades since PC Magazine?

I would bet a large and increasing proportion of 4090 users are doing AI and creative work. That is where things get interesting, including, if you must, AI in games. And AI has different performance characteristics, for every part of the system, than these frame-rate-focused reviews capture.

Neenyah

Quote from: davidm on January 28, 2024, 23:13:57When is Notebookcheck going to grow up and stop being obsessed with "gamers"?
When manufacturers stop being obsessed with gamers primarily? Gigabyte Aorus RTX 4090 Gaming Box 🤔

From Gigabyte's official product site:

"Powerful GeForce RTX™ 4090 delivers incredible performance for gamers and creators"

"For GAMERs

The AORUS RTX 4090 GAMING BOX transforms an ultrabook laptop into the ultimate gaming rig, delivering incredible performance for real-time ray tracing and graphics-intensive games. A network chip that allows you to connect to a wired network is built into the GAMING BOX. You don't have to worry about transmission interference during the game. Install the GIGABYTE CONTROL CENTER to adjust the RGB lighting and performance for your preference."



Quote from: davidm on January 28, 2024, 23:13:57I would bet a large and increasing proportion of 4090 users are doing AI and creative work.
Literally any current GPU, even with TB over two PCIe lanes, is a massive upgrade over any existing iGPU. The data is shown in the review, tho.

Btw, my comment about gaming was related to George's very accurate comment above mine about the 4090 being too expensive and complete overkill for what you get, mainly in terms of price:performance, because you get about the same performance as with a much cheaper GPU due to the Thunderbolt bottleneck. I don't care about AI/ML, but my 6800 XT in an eGPU setup with my X1 Carbon is slightly faster in After Effects (CC 2023) than a 4060 in a desktop, if that means anything useful to you.

davidm

Quote from: Neenyah on January 28, 2024, 23:29:18When manufacturers stop being obsessed with gamers primarily? Gigabyte Aorus RTX 4090 Gaming Box 🤔 [...]

The fact that it is called "gaming" doesn't really mean anything, except that it's *marketed* toward gamers. There's not really any reason for each company to have 12 different versions of the same product except marketing; they are each going to perform the same for the different tasks, and that is why many professionals end up buying products emblazoned with "Gaming" and a dragon and a lightning bolt or whatever on the package. The fact is, especially given its 24GB of VRAM, it's a decent chip for AI and creative work. This is Notebookcheck; it's not limited to notebooks, and it shouldn't be limited to gamers and their one-track minds (and I suspect many of them are more obsessed with stats than actual experience).

As I mentioned, the performance characteristics of the entire system are very different for AI/creative.

Neenyah

Quote from: davidm on January 28, 2024, 23:39:28The fact that it is called "gaming" doesn't really mean anything, except that it's *marketed* toward gamers. The fact is, especially given its 24GB of VRAM, it's a decent chip for AI and creative work. This is Notebookcheck; it's not limited to notebooks, and it shouldn't be limited to gamers and their one-track minds (and I suspect many of them are more obsessed with stats than actual experience).

As I mentioned, the performance characteristics of the entire system are very different for AI/creative.
Well, I know, David, and I agree with you completely. I'm just saying that if the manufacturer literally puts "gaming" in the product's name, it would be weird for Notebookcheck not to include gaming performance. They did a great job of covering a lot of possible usages, IMHO, from serious apps to various games. But games are also an excellent benchmark precisely for eGPUs, because in pro apps you can't bottleneck an eGPU setup as easily and as quickly as in games (where you can see the bottleneck in less than 20 seconds, as soon as shaders load/cache). So if you pay attention to the wider picture, you can come to your own conclusions even if they [your desired and used apps] weren't mentioned in the article. Again - I agree with you, don't think the opposite.

davidm

Quote from: Neenyah on January 28, 2024, 23:41:45Well, I know, David, and I agree with you completely. I'm just saying that if the manufacturer literally puts "gaming" in the product's name, it would be weird for Notebookcheck not to include gaming performance. [...]

OK, well thanks for that. But it's not exactly the same; it would be helpful if typical leading tasks (Stable Diffusion, llama) were front-and-centre benchmarks, without having to decipher them from how many frames per second Lara Croft is rendered at. Is there a Lara Croft fps to llama tokens/s calculator I'm not aware of?

Neenyah

Quote from: davidm on January 28, 2024, 23:48:18But it's not exactly the same; it would be helpful if typical leading tasks (Stable Diffusion, llama) were front-and-centre benchmarks [...] Is there a Lara Croft fps to llama tokens/s calculator I'm not aware of?
I'm not an expert in the AI/DL/ML field, but I'm slowly learning (not too interested in it, tbh, which is why I'm snailing along; who knows, perhaps I'd be faster if Lara Croft were involved 😋). Judging by the info I've gathered over recent months, mainly from r/machinelearning, r/egpu, egpu.io and various YT videos on the matter, you can expect about a 5-15% loss compared to what you'd get with the same GPU in a desktop. With that being said, I believe a used/refurbished 3090 24 GB would do wonders for a way lower price than the 4090, giving a better price:performance return.
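
If you want to put rough numbers on that, something like this (the 5-15% range is just the anecdotal figure above, and the desktop baseline is a made-up example value):

# Hypothetical example: estimate eGPU throughput from a desktop baseline,
# applying the anecdotal 5-15% Thunderbolt penalty mentioned above.
def egpu_estimate(desktop_tokens_per_s: float, penalty=(0.05, 0.15)):
    low, high = penalty
    return desktop_tokens_per_s * (1 - high), desktop_tokens_per_s * (1 - low)

worst, best = egpu_estimate(100.0)  # assumed desktop figure, purely illustrative
print(f"eGPU estimate: {worst:.0f}-{best:.0f} tokens/s vs 100 tokens/s on the desktop")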

davidm

Quote from: Neenyah on January 29, 2024, 00:01:24Judging by the info I've gathered over recent months [...] you can expect about a 5-15% loss compared to what you'd get with the same GPU in a desktop. [...]

That's what I mean - the degradation for "AI" isn't as bad as it is for gaming. Algorithms are still being optimized for the 4090, but it can be nearly twice as fast as a 3090. That's pretty compelling, though the 3090 is cheaper.

LL

QuoteTo what end?

Well, I wanted this to render in Blender or Unreal with a laptop, as laptops have GPU limitations.

But unfortunately no GPU render engine or Unreal test was included in this review.

The author talking about "professional" applications is really old 2000s language about the viewport performance of some 3D applications... a really weak reason to buy something like this.

This is still an interesting review, but it's disappointing that one of the most logical use cases for a device like this is not acknowledged: rendering.

K

This is next-level stupidity; the industry should work on an open standard for connecting external PCIe devices at full speed. Or at least make OCuLink more widespread, sigh.

NikoB

All gaming cards are mediocre in professional tasks for a simple reason: the FP64 calculation units are deliberately crippled there - the speeds are an order of magnitude lower than on professional cards - and these product lines deliberately omit all the necessary OpenGL extensions from their drivers. The same goes for VRAM size, which is several times smaller.

Nvidia and AMD clearly police this division of niches, and only by some mistake do they occasionally ship consumer cards with full-speed high-precision floating-point units.

So the classes of professional tasks available on gaming cards are extremely limited. They are simply not suitable for this.
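
For a sense of scale, a rough sketch (the FP64:FP32 ratios are the commonly cited ones - 1:64 on consumer Ada vs 1:2 on datacenter parts - and the peak FP32 figures are approximate):

# Rough FP64 math: consumer Ada runs FP64 at 1/64 of its FP32 rate,
# datacenter parts at 1/2. Peak FP32 figures below are approximate.
CARDS = {
    "RTX 4090 (consumer)": (82.6, 1 / 64),  # ~82.6 TFLOPS FP32, 1:64 FP64
    "A100 (datacenter)":   (19.5, 1 / 2),   # ~19.5 TFLOPS FP32, 1:2 FP64
}

for name, (fp32_tflops, fp64_ratio) in CARDS.items():
    print(f"{name}: ~{fp32_tflops * fp64_ratio:.1f} TFLOPS FP64")
# The gaming card's FP64 throughput lands roughly an order of magnitude lower.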
