Welcome to the Notebookcheck.com forum! Here you can discuss all of our articles and notebook-related topics in general. Have fun!

AI offensive: Apple will “change the world again,” says its new CEO

Started by Redaktion, April 23, 2026, 15:43:08


Redaktion

After nearly 15 years, Tim Cook is handing over the reins at Apple to John Ternus – and Ternus is already outlining major AI plans, saying the company is about to "change the world again." Smart glasses, AirPods with cameras, and new connected gadgets could bring AI more deeply into everyday life in the future.

https://www.notebookcheck.net/AI-offensive-Apple-will-change-the-world-again-says-its-new-CEO.1281466.0.html

2026

"change the world again."
Really? When it was last/first time they did? loool
The only thing they do is steal from the others, repaint it and sell it as a "huge innovation"! 🤣

RobertJasiek

"change the world again" like "join the bad guys with unwanted license terms allegedly allowing AI telemetry"

elverg0otazs

Quote from: 2026 on April 23, 2026, 17:59:48
"change the world again."
Really? When it was last/first time they did? loool
The only thing they do is steal from the others, repaint it and sell it as a "huge innovation"! 🤣
Hi, your grammar is bad. Please refrain from commenting again on this website until you get that fixed. Thanks for nothing!


jdrch

He's lying. No one is defeating Nvidia, thanks to CUDA, which means models run twice as fast on Nvidia GPUs. Secondly, Apple's on-device, privacy-first model doesn't work in the AI era, in which the most powerful models require a $60K machine to run locally and many advanced features are implementable in the cloud only. On-device AI is useless aside from minor photo edits and search.

RobertJasiek

Quote from: jdrch on Today at 01:32:40
CUDA, which means models run twice as fast on Nvidia GPUs.

Or 2.95 times as fast when using both CUDA and Tensor cores by means of Nvidia's CUDA, cuDNN, and TensorRT libraries.

Quote
in which the most powerful models require a $60K machine to run locally

Depends on the model. Hint: not every AI is an LLM. Some AIs run meaningfully on a $1K computer.

Quote
and many advanced features are implementable in the cloud only.

Depends on the AI.

Quote
On-device AI is useless aside from minor photo edits and search.

Or useful for many other AIs you may be unaware of. For example, I run a Go-playing AI locally every day.

GeorgeS

Quote from: jdrch on Today at 01:32:40
He's lying. No one is defeating Nvidia, thanks to CUDA, which means models run twice as fast on Nvidia GPUs. Secondly, Apple's on-device, privacy-first model doesn't work in the AI era, in which the most powerful models require a $60K machine to run locally and many advanced features are implementable in the cloud only. On-device AI is useless aside from minor photo edits and search.

Exactly. On-device or at-home AI is a false claim and a marketing scam.


Tree Hugger

Did I miss something recently? I thought Nvidia was only faster for training, and that for inference there was no difference?

In any case, if any of these Big Tech companies wants to change the world, then maybe stopping its use of AI would be a welcome change, as it's destroying our planet.

RobertJasiek

Several aspects affect inference speed, but if we assume that the kinds of cores and their enabling libraries were the only differences between manufacturers, then Nvidia is 2.95 times as fast for inference: when programmed competently, software can run on all the kinds of cores on an Nvidia GPU simultaneously, while the default OpenCL path uses only the CUDA cores, and even then only naively, as if they were ordinary generic cores.

However, Metal on Apple M might not be as weak as, say, AMD if AI inference software is specifically redeveloped for Metal, and if we limit our choice of Nvidia chips of comparable wafer nodes to those with power consumption as low as that of Apple M. Without such redevelopment for Metal, Apple falls back to the slow OpenCL path.

Training differs from inference mainly in needing more storage and more runtime on a given GPU. Both training and inference profit from more cores, faster cores, and faster supporting hardware, such as the CPU and memory bandwidth.
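To make the backend distinction above concrete, here is a minimal sketch (my own illustration, not from the thread) of how inference software commonly selects between the paths discussed: CUDA on Nvidia GPUs, Metal (via the MPS backend) on Apple M, and a generic CPU fallback. The availability checks are PyTorch's actual APIs; the routing logic itself is an assumption about how one might prioritize backends.

```python
def pick_backend() -> str:
    """Return the preferred inference backend available on this machine."""
    try:
        import torch  # optional dependency; fall back to CPU if absent
    except ImportError:
        return "cpu"
    if torch.cuda.is_available():
        # CUDA path: libraries like cuDNN/TensorRT can additionally
        # engage Tensor cores alongside the CUDA cores
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        # Apple Metal Performance Shaders backend for M-series chips
        return "mps"
    # Generic fallback: ordinary CPU execution
    return "cpu"

device = pick_backend()
```

A model would then be moved to the chosen device (e.g. `model.to(device)` in PyTorch) so the same script runs on Nvidia, Apple M, or plain CPU hardware.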
