Recent posts
#1
Last post by Brian V - Today at 02:14:54
Nothing like finishing a hard day's work and feeling like you've just driven 400 miles as well (just laugh and move on, lol)
#5
Last post by Dennis Vötus - Today at 00:17:49
Please use your head.
How are 4.7 + 4.7 mm supposed to add up to 9.2 mm?
#6
Last post by Redaktion - Yesterday at 23:41:32
#7
Last post by Redaktion - Yesterday at 22:43:08
#8
Last post by RobinLight - Yesterday at 22:39:42
I already had this from Sennheiser 20 years ago. It flopped.
#9
Last post by RobinLight - Yesterday at 22:37:00
Especially since OnePlus now apparently seems to be abandoning the EU market. So you should also drop the brand from your coverage.
#10
Last post by RobertJasiek - Yesterday at 22:34:58
Several aspects affect inferencing speed, but if we assume that the only differences between manufacturers were the kinds of cores and the libraries that enable them, Nvidia is 2.95 times as fast at inferencing: competently programmed software can run on all the kinds of cores on an Nvidia GPU simultaneously, while the default OpenCL path cannot and only uses the CUDA cores, and even then only naively, as if they were ordinary generic cores.
However, Metal on Apple M might not be as weak as, say, AMD if the AI inferencing software is specifically redeveloped for Metal, and if we restrict the comparison to Nvidia chips of a comparable wafer node with power consumption as low as Apple M's. Without a redevelopment for Metal, using Apple boils down to the slow OpenCL path.
The difference between training and inferencing is rather that training needs more storage and more runtime on a given GPU chip. Both training and inferencing profit from more cores, faster cores, and faster supporting hardware, such as the CPU and memory bandwidth.
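The point about core utilization can be sketched with a back-of-envelope model. All numbers below are illustrative assumptions, not measurements; the post's 2.95x figure is the author's own claim, and this only shows how a difference in how many cores the software can actually drive translates into such a ratio.

```python
# Hypothetical throughput model: how much of the GPU the software can use.
# Every number here is an illustrative assumption, not a benchmark result.

def relative_throughput(cores: int, clock_ghz: float, utilization: float) -> float:
    """Naive throughput proxy: core count x clock x fraction of cores actually driven."""
    return cores * clock_ghz * utilization

# Assumed scenario: software redeveloped to use all core types simultaneously
# vs. a default OpenCL path that drives only the generic cores, naively.
optimized = relative_throughput(cores=16384, clock_ghz=2.5, utilization=0.9)
naive = relative_throughput(cores=16384, clock_ghz=2.5, utilization=0.3)

print(f"speedup: {optimized / naive:.2f}x")  # prints "speedup: 3.00x"
```

With identical hardware in both cases, the ratio depends only on the assumed utilization fractions, which is the post's argument in miniature: the chip is the same, but the enabling library decides how much of it is used.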