
Topic summary

Posted by Mattyn75
 - December 31, 2023, 10:04:01
1. A single 8-GPU server is not the be-all and end-all; I'd have expected Apple to have a DGX farm.
2. The A100 is not the peak performer by a long shot. H100s comfortably outclass it, and even the L40S is much better bang for buck for most AI-related workloads.
Posted by A
 - December 30, 2023, 20:40:46
Quote from: RobertJasiek on December 30, 2023, 20:28:16
Modern AI need no opening feeding.
Oh, they do; why not? If each move is time-limited, the first moves will be the worst calculated. Every bit of theory helps.
Posted by RobertJasiek
 - December 30, 2023, 20:28:16
Modern AI need no opening feeding.
Posted by A
 - December 30, 2023, 19:07:34
Quote from: RobertJasiek on December 30, 2023, 18:58:28
There have also been comparatively new algorithmic AI breakthroughs during the last 20 years.
AI stuff is getting old in 3.

Quote from: RobertJasiek on December 30, 2023, 18:58:28
Go AI are essentially explicit-Go-theory agnostic
No game openings?
Posted by RobertJasiek
 - December 30, 2023, 18:58:28
There have also been comparatively new algorithmic AI breakthroughs during the last 20 years.

Go AI are essentially explicit-Go-theory agnostic, and actually this has turned out to be a strength compared to previously greater emphasis on (non-mathematical) expert knowledge. (With a very few exceptions. Implicit Go theory of modern AI perceived by strong humans has many similarities to human Go theory though.)
Posted by A
 - December 30, 2023, 18:49:32
Quote from: RobertJasiek on December 30, 2023, 18:25:44
Now, it is a mixture with other modules, such as pruned tree walks ('tactical reading').
It's more like they've started adding in older algorithms that were non-viable with normal move evaluation. So yeah, of course it's not 100% NN: NN-only engines can't win, since they blunder and don't know theory.
Posted by RobertJasiek
 - December 30, 2023, 18:25:44
@A, Go NN is not the simple 'learn from the past' it was a decade ago with about amateur 3 dan level then. Now, it is a mixture with other modules, such as pruned tree walks ('tactical reading'). Granted, it is much better than brute-force or alpha-beta but definitely still complex enough to profit from aeons of analysis on the current position.

(Phew, lucky that you have not asked me to solve P =? NP :) )
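That mixture can be illustrated with a toy policy-pruned search, in the spirit of (but far simpler than) what engines like KataGo do; the prior here is a random stand-in for a real policy network, and all names and sizes are illustrative:

```python
import random

def policy_prior(position, moves):
    """Stand-in for a policy net: assigns each legal move a prior
    probability. A real engine would run a neural network here."""
    rng = random.Random(hash(position) % 2**32)
    scores = [rng.random() for _ in moves]
    total = sum(scores)
    return {m: s / total for m, s in zip(moves, scores)}

def pruned_search(position, moves, depth, keep=5):
    """'Tactical reading': expand only the `keep` most promising moves
    per node, guided by the prior, instead of all ~361."""
    if depth == 0 or not moves:
        return 1  # one leaf evaluation
    prior = policy_prior(position, moves)
    best = sorted(moves, key=prior.get, reverse=True)[:keep]
    return sum(pruned_search((position, m), moves, depth - 1, keep)
               for m in best)

# A full-width 4-ply search over 361 moves would hit 361**4 ≈ 1.7e10 leaves;
# the prior-pruned version visits only 5**4:
print(pruned_search("empty board", list(range(361)), depth=4))  # 625
```

The point of the sketch: the neural net does not replace search, it makes search affordable by collapsing the branching factor.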
Posted by Why tho
 - December 30, 2023, 18:00:40
I think I'm beginning to understand why people say nobody gives a sheet about AI. 192 GB to inference decent models locally well? That's some sick joke. Nobody is going to give a damn about this stuff (besides the corporate enterprise companies running in the cloud). It doesn't matter whether they're on x86 or ARM, PC or Mac, or using dGPUs. The average person isn't going to buy more than 32 GB of RAM (or 48 GB if you include VRAM). So unless these companies can reduce the size of their models, or somehow use the SSD as a cache for additional memory, this is bullsheet as far as I'm concerned.
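The RAM complaint can be put into rough numbers (the 70B parameter count, the quantization widths, and the 20% overhead allowance are illustrative assumptions, not any specific model's requirements):

```python
def model_ram_gb(params_billion: float, bits_per_weight: int,
                 overhead: float = 1.2) -> float:
    """Approximate memory needed to hold a model's weights, with a
    ~20% allowance for KV cache and activations (an assumption)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# A hypothetical 70B-parameter model at common quantizations:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_ram_gb(70, bits):.0f} GB")
# Even 4-bit quantization of a 70B model wants ~39 GB -- past the
# typical 32 GB of consumer RAM, which is the poster's point.
```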
Posted by Justanoldman
 - December 30, 2023, 17:13:46
I can tell this article was not written by GPT because of the low quality of writing.
Posted by A
 - December 30, 2023, 16:50:48
Quote from: RobertJasiek on December 30, 2023, 16:23:17
So what are your two questions you think I cannot answer?
It was one question and it's already stated there.

Quote from: RobertJasiek on December 30, 2023, 16:23:17
Go move generation needs indefinite time
That's IF you try calculating that move. A NN isn't calculating; it is "predicting" the move evaluation based on its "previous experience". Back in the day, to evaluate a move you had to go through all the moves after it and calculate a function over all possible outcomes. That worked for chess, not so much for Go.
So instead of an effectively unbounded calculation of the best move, you use the NN to predict the best move, which is a finite-complexity task (and actually uses very simple math behind the scenes). So the computational complexity goes down; it is not unbounded anymore. Even more: every NN run is ideally not only a finite-time computation, but takes a similar (equal) time for every inference.
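The fixed-cost argument can be made concrete with a toy comparison (the layer sizes are hypothetical, not any real Go engine's architecture):

```python
# Toy contrast: exhaustive game-tree search grows exponentially with depth,
# while one neural-net forward pass costs the same for every position.

def search_cost(branching: int, depth: int) -> int:
    """Leaf evaluations for a full-width search."""
    return branching ** depth

def nn_forward_cost(layer_sizes: list[int]) -> int:
    """Multiply-accumulate count for one dense forward pass --
    identical for every input position."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# Early 19x19 Go offers ~361 moves; even four plies explode:
print(search_cost(361, 4))                    # 16_983_563_041 (~1.7e10)
# A hypothetical small policy net, 361 -> 512 -> 512 -> 361:
print(nn_forward_cost([361, 512, 512, 361]))  # 631_808, the same every move
```

The search cost depends on the position and how deep you look; the forward-pass cost is a constant of the network, which is the "similar (equal) time for every inference" point.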
Posted by RobertJasiek
 - December 30, 2023, 16:23:17
So what are your two questions you think I cannot answer?

Hint: image or language AI needs some estimated fixed (upper limit of) time to produce a result of the desired quality. Go move generation needs indefinite time for an increasingly well chosen move, selected with ever higher confidence. This is so because the number of possible 19x19 Go positions is orders of magnitude larger than the number of particles in the universe. Go AI of the neural-net kind does not use any (of, e.g., my) mathematical theorems of Go theory, which provide immediate solutions only for a few specialised (classes of, e.g., late endgame) positions.
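The scale comparison checks out with a back-of-the-envelope computation (3^361 is an upper bound counting every colouring of the board, legal or not; the particle figure is the commonly cited estimate):

```python
import math

# Upper bound on 19x19 positions: each of the 361 points is black,
# white, or empty, giving 3**361 colourings.
log10_positions = 361 * math.log10(3)   # ≈ 172.2, i.e. ~10^172
log10_particles = 80                    # commonly cited ~10^80 particles

print(f"~10^{log10_positions:.1f} colourings vs ~10^{log10_particles} particles")
# John Tromp's exact count of *legal* positions is ~2.08e170 -- still
# some 90 orders of magnitude beyond the particle estimate.
```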
Posted by A
 - December 30, 2023, 15:07:47
P.S. I'm not even asking you to beat the price of a 96-128 GB MBP with an x86 laptop GPU, because there's simply no laptop with that amount of VRAM; that's inconvenient for you, so you'd immediately jump back to desktop 4090s.
Posted by A
 - December 30, 2023, 15:02:26
Quote from: RobertJasiek on December 30, 2023, 13:12:45
Such is false as a general statement because it depends on the exact hardware choice etc. whether Mac or PC is more expensive
We discussed it just yesterday: try to beat a $5,600 192 GB Mac Studio with 4090s.
Posted by A
 - December 30, 2023, 14:29:11
Quote from: RobertJasiek on December 30, 2023, 13:12:45
Computational time and space complexities always depend on the algorithms
Oh, just don't start, because I will ask you about two main properties of neural networks (one of which actually goes right against your "utter nonsense" claim) and you will lose the debate immediately. If you can call it a "debate" at all: a discussion between someone who works on NNs and someone who simply uses one Go AI, which can seemingly be inferenced locally even on an iPhone. )

Quote from: RobertJasiek on December 30, 2023, 13:12:45
studied theoretical informatics
"Studied theoretical informatics" ))) You really should have deduced by now that I'm an active IT professional.

Quote from: RobertJasiek on December 30, 2023, 13:12:45
Provided running the AI software is possible on a Mac at all and, if it is, not too slow.
It is definitely not slower than running on the CPU when you've run out of GPU VRAM.

Quote from: RobertJasiek on December 30, 2023, 13:12:45
Apple PR identified. By you again;)
Lol what, it's just an English idiom.
merriam-webster.com/dictionary/sweet spot
Merriam-Webster dictionary PR.

Quote from: RobertJasiek on December 30, 2023, 13:12:45
I calculated such for one A100 (rent 3€/h) and found that building one's computer and paying power (in Germany 0.3€/h) is very much cheaper. (It was a quick and dirty calculation, so I did not bother to save it.)
Renting an A100 is actually $2/hr, and electricity just for the fully loaded GPU itself will be ±650 EUR/yr. Buying an A100 and building a rig still costs around the same as, or more than, renting one for about a year.
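A rough sanity check of that break-even claim, using the prices quoted in this thread (the purchase price midpoint, GPU wattage, electricity rate, full utilisation, and EUR ≈ USD are all assumptions):

```python
def rent_cost(hours: float, rate_usd_hr: float = 2.0) -> float:
    """Cloud A100 rental at the ~$2/hr quoted above."""
    return hours * rate_usd_hr

def own_cost(hours: float, purchase_usd: float = 14000.0,
             watts: float = 400.0, price_per_kwh: float = 0.40) -> float:
    """Buying an A100 (midpoint of the quoted $10K-18K range) plus
    electricity for the GPU alone; server parts and cooling omitted.
    Treats EUR ~ USD for a rough cut."""
    energy = hours * watts / 1000 * price_per_kwh
    return purchase_usd + energy

full_year = 24 * 365  # 8760 h of continuous load
print(f"rent one year: ${rent_cost(full_year):,.0f}")   # $17,520
print(f"own for one year: ${own_cost(full_year):,.0f}")  # $15,402
# At full utilisation the two converge around the one-year mark, matching
# the post's "around the same ... for about a year". At lower utilisation,
# renting wins for much longer.
```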
Posted by RobertJasiek
 - December 30, 2023, 13:12:45
Quote from: A on December 30, 2023, 11:30:42
local inferences are not computationally expensive at all, VRAM is all that matters.

Utter nonsense! Computational time and space complexities always depend on the algorithms! Take this from somebody who has also studied theoretical informatics and applies time-complex AI every day.

Quote
you can run any kind of AI that fits into Mac RAM and doesn't fit into GPU VRAM,

Provided running the AI software is possible on a Mac at all and, if it is, not too slow.

Quote
64 and 192GB RAM where mac is cheaper than getting a rack of 4090s.

Such is false as a general statement because it depends on the exact hardware choice etc. whether Mac or PC is more expensive.

Quote
"sweet spot"

Apple PR identified. By you again;)

Quote
We are talking about local inferences.

Indeed. (As if I did not know, LOL.)

Quote
Buying 8 of them will be around the same price tag or more. That's like $10K-18K per A100 plus server components plus power bill.

I calculated such for one A100 (rent 3€/h) and found that building one's computer and paying power (in Germany 0.3€/h) is very much cheaper. (It was a quick and dirty calculation, so I did not bother to save it.)