Quote from: davidm on February 18, 2024, 13:03:45full scale *local* AI. [...] where Apple [...] with their unified memory design Apple silicon is years ahead of x86 for this type of application.
Your statement is absolutely wrong! The truth is: it depends on the software (e.g., the AI application), on how it is used, and on the amounts of RAM, VRAM or Unified Memory it needs for that use. If any of this volatile storage is insufficient, execution is very slow or impossible. If it is sufficient, performance then depends on a) whether the software is available for the hardware, OS and libraries, b) whether the available cores are particularly suitable for the software, and c) whether the libraries for those cores are particularly suitable for the software.
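To make the "which memory pool does the software actually get" point concrete, here is a minimal sketch, assuming PyTorch and psutil are installed, that reports which accelerator backend is usable and roughly how much memory it offers. It only illustrates the distinction between VRAM and Unified Memory; it says nothing about speed.

# Which accelerator backend is usable, and how much memory does it offer?
# Minimal sketch; assumes PyTorch and psutil are installed.
import torch
import psutil

if torch.cuda.is_available():
    # Discrete Nvidia GPU: the model (or at least its hot layers) must fit in VRAM.
    vram = torch.cuda.get_device_properties(0).total_memory
    print(f"CUDA GPU with {vram / 1024**3:.1f} GiB VRAM")
elif torch.backends.mps.is_available():
    # Apple silicon: GPU and CPU share the unified memory pool (the system RAM).
    unified = psutil.virtual_memory().total
    print(f"Apple MPS backend, ~{unified / 1024**3:.1f} GiB unified memory shared with the OS")
else:
    print("CPU only; speed then depends on cores, SIMD and RAM bandwidth")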
There are interesting kinds of AI software, such as large LLMs, for which insufficient VRAM prevents execution, while a sufficiently large pool of Unified Memory enables it. There are also Nvidia server GPUs, or servers with many such GPUs, that provide enough VRAM for such LLMs. A rough size check is sketched below.
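Here is the promised back-of-the-envelope sketch (plain Python, no libraries) that estimates whether a model's weights alone fit in a given memory pool. The parameter counts, quantization widths and pool sizes are illustrative assumptions, not measurements, and the estimate ignores the KV cache and activations.

# Rough estimate: do an LLM's weights fit into a given memory pool?
# All numbers below are illustrative assumptions, not benchmarks.

def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate size of the weights alone (excludes KV cache, activations)."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

pools_gb = {
    "consumer GPU VRAM (24 GB)": 24,
    "Apple M Unified Memory (128 GB)": 128,   # minus what the OS and apps use
    "Nvidia server GPU VRAM (80 GB)": 80,
}

models = [
    ("7B model, 4-bit quantized", 7, 0.5),
    ("70B model, fp16", 70, 2.0),
    ("70B model, 4-bit quantized", 70, 0.5),
]

for name, params_b, bpp in models:
    need = weights_gb(params_b, bpp)
    print(f"{name}: ~{need:.0f} GB of weights")
    for pool, size in pools_gb.items():
        fits = "fits" if need < size * 0.9 else "does NOT fit"
        print(f"    {pool}: {fits}")

With these assumed numbers, a 4-bit 70B model does not fit in 24 GB of consumer VRAM but does fit in a large Unified Memory pool or in 80 GB of server VRAM, which is exactly the case described above.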
There is also AI software, such as KataGo, that runs mainly in RAM, hardly needs VRAM, and profits from Nvidia tensor and CUDA cores and from Nvidia's libraries for these cores (e.g., cuDNN and TensorRT). Such software is dozens of times faster on x64 with an Nvidia GPU than on Apple M, which lacks the TDP headroom, the suitable cores, and the suitable libraries for such cores. In your words, x64 with Nvidia GPUs is many years ahead of Apple M for such software.
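If you would rather measure the core/library point on your own machine than argue about it, here is a small benchmark sketch that times fp16 matrix multiplications (the workload tensor cores accelerate) on whatever backend PyTorch finds. The matrix size and iteration count are arbitrary assumptions; it is not KataGo itself, only a proxy for the kind of math it runs.

# Time fp16 matmuls on the best available PyTorch backend.
# Sizes and iteration counts are arbitrary assumptions.
import time
import torch

device = ("cuda" if torch.cuda.is_available()
          else "mps" if torch.backends.mps.is_available()
          else "cpu")
dtype = torch.float16 if device != "cpu" else torch.float32

n, iters = 4096, 50
a = torch.randn(n, n, device=device, dtype=dtype)
b = torch.randn(n, n, device=device, dtype=dtype)

def sync():
    # Make sure queued GPU work is actually finished before reading the clock.
    if device == "cuda":
        torch.cuda.synchronize()
    elif device == "mps":
        torch.mps.synchronize()

for _ in range(5):   # warm-up
    a @ b
sync()

t0 = time.perf_counter()
for _ in range(iters):
    a @ b
sync()
dt = time.perf_counter() - t0

tflops = 2 * n**3 * iters / dt / 1e12
print(f"{device}: ~{tflops:.1f} TFLOP/s for {n}x{n} {dtype} matmul")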