Topic summary

Posted by NikoB
 - February 03, 2024, 12:12:01
Quote from: iooipiop on February 02, 2024, 12:44:20
proof or gtfo
LOL. Cry, baby...
For you amateurs, Nvidia/AMD left only FP32 and nothing more - more than enough for games, but not for scientific computing.

Your 4090 is garbage from a scientific and business point of view; it is only suitable for games.
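For reference, the ratio being argued about here can be checked with back-of-the-envelope arithmetic; a minimal sketch, using approximate, commonly cited spec figures (worth verifying against Nvidia's own datasheets):

# Rough peak-throughput arithmetic from approximate, commonly cited spec figures.
fp32_tflops_4090 = 82.6   # GeForce RTX 4090 peak FP32, TFLOPS (approx.)
fp64_rate_4090 = 1 / 64   # consumer Ada runs FP64 at 1/64 the FP32 rate
fp32_tflops_h100 = 67.0   # H100 SXM peak non-tensor FP32, TFLOPS (approx.)
fp64_rate_h100 = 1 / 2    # data-center parts keep FP64 at 1/2 the FP32 rate

print(f"RTX 4090 FP64: ~{fp32_tflops_4090 * fp64_rate_4090:.2f} TFLOPS")
print(f"H100 FP64:     ~{fp32_tflops_h100 * fp64_rate_h100:.1f} TFLOPS")

On those figures a gaming flagship lands around 1.3 TFLOPS of FP64 while a data-center card lands around 33, which is the order-of-magnitude gap being described.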
Posted by Neenyah
 - February 02, 2024, 13:24:13
Quote from: iooipiop on February 02, 2024, 12:44:20
Quote from: NikoB on January 30, 2024, 17:06:07
FP64 calculation units are deliberately strangled there
proof or gtfo
Proof.

Posted by iooipiop
 - February 02, 2024, 12:44:20
Quote from: NikoB on January 30, 2024, 17:06:07
FP64 calculation units are deliberately strangled there
proof or gtfo
Posted by LL
 - January 30, 2024, 20:34:42
Quote
All gaming cards are mediocre in professional tasks for a simple reason

You have to revise your definition of "professional". In the places I work, the RTX 4090 and others are the norm.
Posted by NikoB
 - January 30, 2024, 17:09:40
Here is an example of the dumbest "AI" from the disgraceful developers at Google - Google Translate: despite the obvious context (accessible to a four-year-old child), and despite the fact that it had already translated "card" correctly earlier in the same text (as card, not maps), at the end of the translation it produced something funny - "maps".

Do you all really believe in AI in the near future? )))
Posted by NikoB
 - January 30, 2024, 17:06:07
All gaming cards are mediocre in professional tasks for a simple reason - the FP64 calculation units are deliberately strangled there: their throughput is an order of magnitude lower than on professional cards, and for these product lines all the necessary OpenGL extensions are deliberately removed from the drivers. The same goes for VRAM, which is several times smaller.

Nvidia and AMD clearly police this division of niches, and only by some mistake would they temporarily ship high-performance, high-precision floating-point units in consumer cards.

So the classes of professional tasks available on gaming cards are extremely limited. They are simply not suitable for this work.
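Anyone can check that FP32:FP64 gap on their own card with a matmul microbenchmark; a minimal sketch, assuming PyTorch and a CUDA-capable GPU (matrix size and iteration count are arbitrary):

import time
import torch

def matmul_tflops(dtype, n=4096, iters=10):
    # Time n x n matrix multiplies and convert to TFLOPS (~2*n^3 FLOPs each).
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        _ = a @ b
    torch.cuda.synchronize()
    return 2 * n**3 * iters / (time.perf_counter() - start) / 1e12

fp32 = matmul_tflops(torch.float32)
fp64 = matmul_tflops(torch.float64)
print(f"FP32: {fp32:.1f} TFLOPS, FP64: {fp64:.1f} TFLOPS, ratio {fp32 / fp64:.0f}:1")

On a consumer card the printed ratio should come out far higher than on a workstation or data-center part.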
Posted by K
 - January 30, 2024, 04:27:04
This is next-level stupidity; the industry should work on an open standard for connecting external PCIe devices at full speed, or at least make OCuLink more widespread. Sigh.
Posted by LL
 - January 29, 2024, 20:12:50
Quote
To what end?

Well, I wanted this to render in Blender or Unreal with a laptop, since laptops have GPU limitations.

But unfortunately no GPU render engine or Unreal test was posted.

The author's talk of "professional" applications is really old 2000s language about the viewport performance of some 3D applications... a really weak reason to buy something like this.

This is still an interesting review, but it is disappointing that one of the most logical use cases for a device like this is not acknowledged: rendering.
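For anyone who wants to measure rendering on an eGPU themselves, Blender can render headlessly from the command line; a minimal sketch, where scene.blend is a placeholder for your own benchmark scene and the device flag assumes an Nvidia GPU with a recent Blender build:

import subprocess
import time

# scene.blend is a placeholder; substitute any scene you use for benchmarking.
cmd = [
    "blender", "-b", "scene.blend",   # run headless on the given scene
    "-E", "CYCLES", "-f", "1",        # render frame 1 with the Cycles engine
    "--", "--cycles-device", "CUDA",  # render on the (e)GPU rather than the CPU
]
start = time.perf_counter()
subprocess.run(cmd, check=True)
print(f"Frame rendered in {time.perf_counter() - start:.1f} s")

Running the same scene once through Thunderbolt and once in a desktop would give exactly the comparison missing from the review.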
Posted by Aleksei
 - January 29, 2024, 09:34:26
Wait for Thunderbolt 5 and quiet work with the Gigabyte Aorus RTX 4090 Gaming Box :-)
Posted by davidm
 - January 29, 2024, 04:35:14
Quote from: Neenyah on January 29, 2024, 00:01:24
Quote from: davidm on January 28, 2024, 23:48:18OK, well thanks for that. But it's not exactly the same, it would be helpful if typical leading tasks (Stable Diffusion, llama) were front and centre benchmarks, without having to decipher from how many frames per second Lara Croft is rendered. Is there a Lara Croft FPS to llama token/s calculator I'm not aware of?
I'm not an expert in the AI/DL/ML field but I'm slowly learning (not interested too much in it, tbh, that's why I'm snailing; who knows, perhaps I would be faster if there was Lara Croft involved 😋). Judging by the info I gathered so far in recent months, mainly from r/machinelearning, r/egpu, egpu.io and various different YT videos on that matter, you can expect about 5-15% loss over what you'd get with the same GPU in a desktop. With that being said, I believe that used/refurbished 3090 24 GB would do wonders for way lower price than the 4090, giving better price:performance return.

That's what I mean, the degradation for "AI" isn't as bad as it is for gaming.
Algorithms are still being optimized for the 4090, but it can be nearly twice as fast as a 3090. That's pretty compelling, though the 3090 is cheaper.
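A tokens-per-second figure is also easy to measure directly rather than inferring from game benchmarks; a minimal sketch using Hugging Face transformers, with gpt2 standing in for whatever model you actually run:

import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# gpt2 is only a small stand-in; substitute the model you actually care about.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to("cuda")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to("cuda")
new_tokens = 200

torch.cuda.synchronize()
start = time.perf_counter()
model.generate(**inputs, do_sample=False, max_new_tokens=new_tokens,
               min_new_tokens=new_tokens)  # force exactly new_tokens tokens
torch.cuda.synchronize()
print(f"{new_tokens / (time.perf_counter() - start):.1f} tokens/s")

Run it once on the eGPU and once with the same card in a desktop, and you get the degradation figure in the units that actually matter.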
Posted by Neenyah
 - January 29, 2024, 00:01:24
Quote from: davidm on January 28, 2024, 23:48:18
OK, well thanks for that. But it's not exactly the same; it would be helpful if typical leading tasks (Stable Diffusion, llama) were front-and-centre benchmarks, without having to decipher from how many frames per second Lara Croft is rendered. Is there a Lara Croft FPS to llama token/s calculator I'm not aware of?
I'm not an expert in the AI/DL/ML field but I'm slowly learning (not too interested in it, tbh, that's why I'm snailing; who knows, perhaps I would be faster if there was Lara Croft involved 😋). Judging by the info I've gathered in recent months, mainly from r/machinelearning, r/egpu, egpu.io and various YT videos on the matter, you can expect about a 5-15% loss over what you'd get with the same GPU in a desktop. With that being said, I believe a used/refurbished 3090 24 GB would do wonders for a way lower price than the 4090, giving a better price:performance return.
Posted by davidm
 - January 28, 2024, 23:48:18
Quote from: Neenyah on January 28, 2024, 23:41:45
Quote from: davidm on January 28, 2024, 23:39:28
The fact that it is called "gaming" doesn't really mean anything, except that it's *marketed* toward gamers. The fact is, especially given its 24 GB of VRAM, it's a decent chip for AI and creative work. This is Notebookcheck; it's not limited to notebooks, and it shouldn't be limited to gamers and their one-track minds (and I suspect many of them are more obsessed with stats than actual experience).

As I mentioned, the performance characteristics of the entire system are very different for AI/creative.
Well, I know, David, and I agree with you completely; I'm just saying that if the manufacturer literally calls it "gaming" in its name, it would be weird for Notebookcheck not to include gaming performance. They did a great job covering a lot of possible usages, IMHO, from serious apps to various games. But games are also an excellent benchmark precisely for eGPUs, because in pro apps you can't bottleneck an eGPU setup as easily and as quickly as with games (where you can see the bottleneck in less than 20 seconds, as soon as shaders load/cache), so if you pay attention to the wider picture you can draw your own conclusions even if your desired apps weren't mentioned in the article. Again - I agree with you, don't think the opposite.

OK, well thanks for that. But it's not exactly the same; it would be helpful if typical leading tasks (Stable Diffusion, llama) were front-and-centre benchmarks, without having to decipher from how many frames per second Lara Croft is rendered. Is there a Lara Croft FPS to llama token/s calculator I'm not aware of?
Posted by Neenyah
 - January 28, 2024, 23:41:45
Quote from: davidm on January 28, 2024, 23:39:28
The fact that it is called "gaming" doesn't really mean anything, except that it's *marketed* toward gamers. The fact is, especially given its 24 GB of VRAM, it's a decent chip for AI and creative work. This is Notebookcheck; it's not limited to notebooks, and it shouldn't be limited to gamers and their one-track minds (and I suspect many of them are more obsessed with stats than actual experience).

As I mentioned, the performance characteristics of the entire system are very different for AI/creative.
Well, I know, David, and I agree with you completely; I'm just saying that if the manufacturer literally calls it "gaming" in its name, it would be weird for Notebookcheck not to include gaming performance. They did a great job covering a lot of possible usages, IMHO, from serious apps to various games. But games are also an excellent benchmark precisely for eGPUs, because in pro apps you can't bottleneck an eGPU setup as easily and as quickly as with games (where you can see the bottleneck in less than 20 seconds, as soon as shaders load/cache), so if you pay attention to the wider picture you can draw your own conclusions even if your desired apps weren't mentioned in the article. Again - I agree with you, don't think the opposite.
Posted by davidm
 - January 28, 2024, 23:39:28
Quote from: Neenyah on January 28, 2024, 23:29:18
When manufacturers stop being obsessed with gamers primarily? Gigabyte Aorus RTX 4090 Gaming Box 🤔

From Gigabyte's official product site:

"Powerful GeForce RTX™ 4090 delivers incredible performance for gamers and creators"

"For GAMERs

The AORUS RTX 4090 GAMING BOX transforms an ultrabook laptop into the ultimate gaming rig, delivering incredible performance for real-time ray tracing and graphics-intensive games. A network chip that allows you to connect to a wired network is built into the GAMING BOX. You don't have to worry about transmission interference during the game. Install the GIGABYTE CONTROL CENTER to adjust the RGB lighting and performance for your preference."



Quote from: davidm on January 28, 2024, 23:13:57
I would bet a large and increasing proportion of 4090 users are doing AI and creative work.
Literally any current GPU, even with TB over two PCIe lanes, is a massive upgrade over any existing iGPU. Data is shown in the review, though.

Btw, my comment about gaming was related to George's very accurate comment above mine, about the 4090 being too expensive and overkill in terms of price:performance, because you get about the same performance with a much cheaper GPU. I don't care about AI/ML, but my 6800 XT in an eGPU setup with my X1 Carbon is faster in After Effects than a 4060 in a desktop, if that means anything useful to you.

The fact that it is called "gaming" doesn't really mean anything, except that it's *marketed* toward gamers. There's not really any reason for each company to have 12 different versions of the same product except marketing, but they are each going to perform the same for the different tasks, and that is why many professionals end up buying products emblazoned with "Gaming" and a dragon and a lightning bolt or whatever on the package. The fact is, especially given its 24 GB of VRAM, it's a decent chip for AI and creative work. This is Notebookcheck; it's not limited to notebooks, and it shouldn't be limited to gamers and their one-track minds (and I suspect many of them are more obsessed with stats than actual experience).

As I mentioned, the performance characteristics of the entire system are very different for AI/creative.
Posted by Neenyah
 - January 28, 2024, 23:29:18
Quote from: davidm on January 28, 2024, 23:13:57
When is notebookcheck going to grow up and stop being obsessed with "gamers."
When manufacturers stop being obsessed with gamers primarily? Gigabyte Aorus RTX 4090 Gaming Box 🤔

From Gigabyte's official product site:

"Powerful GeForce RTX™ 4090 delivers incredible performance for gamers and creators"

"For GAMERs

The AORUS RTX 4090 GAMING BOX transforms an ultrabook laptop into the ultimate gaming rig, delivering incredible performance for real-time ray tracing and graphics-intensive games. A network chip that allows you to connect to a wired network is built into the GAMING BOX. You don't have to worry about transmission interference during the game. Install the GIGABYTE CONTROL CENTER to adjust the RGB lighting and performance for your preference."



Quote from: davidm on January 28, 2024, 23:13:57
I would bet a large and increasing proportion of 4090 users are doing AI and creative work.
Literally any current GPU, even with TB over two PCIe lanes, is a massive upgrade over any existing iGPU. Data is shown in the review, though.

Btw, my comment about gaming was related to George's very accurate comment above mine, about the 4090 being too expensive and complete overkill for what you get, mainly in terms of price:performance, because you get about the same performance as with a much cheaper GPU due to the Thunderbolt bottleneck. I don't care about AI/ML, but my 6800 XT in an eGPU setup with my X1 Carbon is slightly faster in After Effects (CC 2023) than a 4060 in a desktop, if that means anything useful to you.
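To put that Thunderbolt bottleneck in numbers, here is rough link-bandwidth arithmetic; the figures are nominal, ignore protocol overhead, and real-world throughput is lower still:

# Nominal host-link bandwidth in GB/s per direction (approximate).
pcie4_lane = 1.969             # PCIe 4.0, per lane
tb_pcie = 22 / 8               # Thunderbolt 3/4 caps PCIe data at ~22 Gb/s
desktop_x16 = 16 * pcie4_lane  # a desktop RTX 4090 gets a PCIe 4.0 x16 slot

print(f"Thunderbolt eGPU link: ~{tb_pcie:.1f} GB/s")
print(f"Desktop x16 slot:      ~{desktop_x16:.1f} GB/s")
print(f"Ratio: ~{desktop_x16 / tb_pcie:.0f}x")

That roughly 11x gap in host bandwidth is why a much cheaper GPU can match a 4090 in an enclosure, while workloads that stay in VRAM (like After Effects renders or model inference) suffer far less.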