Quote from: RobertJasiek on December 30, 2023, 20:28:16
Modern AI need no opening feeding.

Oh, they do, and why wouldn't they? If each move is time-limited, the first moves will be the worst-calculated ones. Every bit of opening theory helps.
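To make that concrete, here is a minimal sketch of what "opening feeding" amounts to in practice. The book contents, move names, and the choose_move/search_fn names are my own illustration, not any engine's real API: the book answer is instant, so the clock is saved for positions the engine actually has to search.

Code: [Select]
# Hypothetical opening-book lookup: play from stored theory while it
# lasts, fall back to search only once the book runs out.
OPENING_BOOK = {
    (): "Q16",              # empty board: take a 4-4 point
    ("Q16",): "D4",
    ("Q16", "D4"): "Q3",
}

def choose_move(history, search_fn):
    """Return the book move for this position if one is stored;
    otherwise spend clock time on the (expensive) search."""
    book_move = OPENING_BOOK.get(tuple(history))
    return book_move if book_move is not None else search_fn(history)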
Quote from: RobertJasiek on December 30, 2023, 18:58:28
There have also been comparatively new algorithmic AI breakthroughs during the last 20 years.

AI stuff gets old in 3 years.
Quote from: RobertJasiek on December 30, 2023, 18:58:28
Go AI are essentially explicit-Go-theory agnostic

No game openings?
Quote from: RobertJasiek on December 30, 2023, 18:25:44
Now, it is a mixture with other modules, such as pruned tree walks ('tactical reading').

It's more like they've started adding back older algorithms that were non-viable with conventional move evaluation. So yes, of course it's not 100% NN: NN-only engines can't win; they blunder and don't know theory.
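For the record, the "mixture" is the NN steering a classic tree search rather than replacing it. A minimal sketch of the PUCT selection rule used by the AlphaZero family (KataGo included); the data layout and function name here are mine:

Code: [Select]
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing Q + U: Q is the averaged search value,
    U is an exploration bonus weighted by the NN's prior P. Each child
    is a dict holding its visit count N, total value W, and prior P."""
    total_n = sum(ch["N"] for ch in children)
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] else 0.0
        u = c_puct * ch["P"] * math.sqrt(total_n) / (1 + ch["N"])
        return q + u
    return max(children, key=score)

Take away the prior P from the network and this collapses back into the old-style search that was hopeless at Go's branching factor.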
Quote from: RobertJasiek on December 30, 2023, 16:23:17
So what are your two questions you think I cannot answer?

It was one question, and it's already stated there.
Quote from: RobertJasiek on December 30, 2023, 16:23:17
Go move generation needs indefinite time

That's IF you try to calculate that move. A NN isn't calculating; it's "predicting" the move evaluation based on its "previous experience". Back in the day, to evaluate a move you had to go through every move after it and compute a function over all possible outcomes. That worked for chess. Not so much for Go.
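The contrast, sketched below with a hypothetical position API of my own invention: exhaustive evaluation walks the game tree and its cost explodes with Go's ~250 legal moves per turn, while the NN answers in one fixed-cost forward pass.

Code: [Select]
def minimax_value(position, depth, maximizing):
    """Old-style evaluation: recurse through every continuation.
    Cost is roughly branching_factor ** depth, which is workable
    for chess-like branching and hopeless for Go."""
    if depth == 0 or position.is_terminal():
        return position.static_eval()
    values = (minimax_value(position.play(m), depth - 1, not maximizing)
              for m in position.legal_moves())
    return max(values) if maximizing else min(values)

def nn_value(position, net):
    """NN evaluation: no lookahead at all; the 'experience' is baked
    into the weights, and one forward pass costs the same every time."""
    return net(position.encode())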
Quote from: RobertJasiek on December 30, 2023, 13:12:45
Such is false as a general statement because it depends on the exact hardware choice etc. whether Mac or PC is more expensive

We discussed exactly this just yesterday: try to beat a $5,600 192GB Mac Studio with 4090s.
Quote from: RobertJasiek on December 30, 2023, 13:12:45
Computational time and space complexities always depend on the algorithms

Oh, just don't start, because I will ask you about two main properties of neural networks (one of which actually goes directly against your "utter nonsense" claim) and you will lose the debate immediately. If you can call it a "debate" at all: a discussion between someone who works on NNs and someone who merely uses one Go AI, which can seemingly be inferenced locally even on an iPhone. )
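One plausible way to demonstrate the property I'm alluding to: a trained net's inference cost is fixed by its architecture, not by how "hard" the input is. A toy PyTorch timing sketch (the toy net is a stand-in for a real Go model; this is my illustration, not a benchmark):

Code: [Select]
import time
import torch

# Toy stand-in for a policy net: 361 inputs (19x19 board), 361 outputs.
net = torch.nn.Sequential(
    torch.nn.Linear(361, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 361),
)
net.eval()

with torch.no_grad():
    for label in ("empty board", "quiet midgame", "complex fight"):
        x = torch.randn(1, 361)  # stand-in encoding; content is irrelevant
        t0 = time.perf_counter()
        _ = net(x)
        print(f"{label}: {(time.perf_counter() - t0) * 1e3:.3f} ms")
# Same multiply-adds on every call, so the same cost, whatever the position.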
Quote from: RobertJasiek on December 30, 2023, 13:12:45
studied theoretical informatics

"Studied theoretical informatics" ))) You really should have deduced by now that I'm an active IT professional.
Quote from: RobertJasiek on December 30, 2023, 13:12:45
Provided running the AI software is possible on a Mac at all and, if it is, not too slow.

It is definitely not slower than running on the CPU once you've run out of GPU VRAM.
Quote from: RobertJasiek on December 30, 2023, 13:12:45
Apple PR identified. By you again;)

Lol, what? It's just an English idiom.
Quote from: RobertJasiek on December 30, 2023, 13:12:45
I calculated such for one A100 (rent 3€/h) and found that building one's computer and paying power (in Germany 0.3€/h) is very much cheaper. (It was a quick and dirty calculation, so I did not bother to save it.)

Renting an A100 actually costs $2/hr, and electricity for the fully loaded GPU alone comes to roughly €650/yr. Buying an A100 and building a rig still costs about the same as, or more than, renting one for about a year.
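Back-of-the-envelope check of those numbers (my assumptions: ~250 W board power for an A100 PCIe, €0.30/kWh as quoted, $2/hr rental, 24/7 load):

Code: [Select]
HOURS_PER_YEAR = 24 * 365          # 8760

# Electricity for the GPU alone, fully loaded year-round.
gpu_kw = 0.25                      # ~250 W, assumed A100 PCIe board power
eur_per_kwh = 0.30
print(f"power: ~{gpu_kw * HOURS_PER_YEAR * eur_per_kwh:.0f} EUR/yr")  # ~657

# Renting the same card 24/7 instead of buying it.
usd_per_hour = 2
print(f"rent:  ~${usd_per_hour * HOURS_PER_YEAR}/yr")                 # ~$17,520

At $10K-18K per card plus server components plus the power bill, a year of round-the-clock rent and an outright purchase do land in the same ballpark.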
Quote from: A on December 30, 2023, 11:30:42
local inferences are not computationally expensive at all, VRAM is all that matters.

Quote
you can run any kind of AI that fits into Mac RAM and doesn't fit into GPU VRAM,

Quote
64 and 192GB RAM where mac is cheaper than getting a rack of 4090s.

Quote
"sweet spot"

Quote
We are talking about local inferences.

Quote
Buying 8 of them will be around the same price tag or more. That's like $10K-18K per A100 plus server components plus power bill.