Apple MacBook Pro 14 2023 M3 Max Review - The fastest CPU in a 14-inch laptop

Started by Redaktion, November 21, 2023, 00:15:27

A

Quote from: NikoB on November 25, 2023, 14:46:49Your understanding of politics and what is happening on the planet is about the same as walking to the moon. Relax, your lot is to be a paid low-level bot.
You just keep being a clown and insulting someone who again proved you don't know sh*t about tech.
I don't keep a grudge on you lol.

A

P.S. Yeah, you'll also be interested in how I smashed your "you can't run GPT models on PCs" BS in the other discussion too. Classic.

NikoB

Quote from: A on November 25, 2023, 14:50:16P.S. Yeah, you'll also be interested in how I smashed your "you can't run GPT models on PCs" BS in the other discussion too. Classic.
I haven't laughed so much in a long time. Thank you for being with us.

A

Quote from: NikoB on November 25, 2023, 15:29:45I haven't laughed so much in a long time. Thank you for being with us.
No problem, you are also getting smarter and smarter every day because of our conversations.

Just in a week you've accepted that
a) MiniLED is cool
b) Apple is the best at efficiency today
c) GPTs can be run locally and do not require HUNDREDS OF TERABYTES OF RAM

Maybe it's not all lost for you and one day you will be less of a clown.

RobertJasiek

Quote from: A on November 25, 2023, 15:34:44b) Apple is the best at efficiency today

Apple PR, again by yourself.

(Hint: which chips are most efficient depends on the software and its usage.)

A

Quote from: RobertJasiek on November 25, 2023, 21:43:36Apple PR, again by yourself.
I'm just repeating after the best Apple PR manager NikoB himself:
Quote from: NikoB on November 25, 2023, 15:21:55Progress in recent years has not been due to a real rapid increase in performance per 1 W, as before, but due to a "boost" (doping) in consumption beyond any reasonable limits for this class of devices.

Apple is the best here for obvious reasons - its chips have the best possible process technology on the planet from TSMC, today it is "3nm". And also a reasonable approach to limiting performance by boosting TDP. As a result, their laptops practically do not lose performance on battery compared to running on a power supply and at the same time have greater battery life - try to do the same trick with your Slim and quickly see how much performance it suffers on battery compared to running on a power supply.

RobertJasiek

Hehe, good call! I was only looking for short PR statements, but you are right that I must also look for heaps of them and for naive believers in over-generalisations.

A

Quote from: RobertJasiek on November 25, 2023, 21:43:36(Hint: which chips are most efficient depends on the software and its usage.)
Hard to compare, because M chips can do some things you will not be able to do on x86 laptops at all, like running inference on a 120-billion-parameter language model at 56 W. I don't think you can buy a 100 GB VRAM x86 laptop for any kind of money; we don't even get to the efficiency comparison.

Yeah, you can grab an x86 desktop and start cramming several 4090s into it... Do we really want to do an efficiency comparison in that case?

So yeah, comparing an OpenCL score with CUDA cards and saying "it's just 4070-level performance" is fun, but in the end...

Toortle

Quote from: A on November 26, 2023, 10:33:21I don't think you can buy 100GB VRAM x86 laptop for any kind of money, we aren't even coming to efficiency comparison.
If you need more RAM you can use an eGPU, which will still be faster and cheaper; you can't on a Mac. And can you actually use 100 GB of VRAM on a Mac(Book)?

Quote from: A on November 26, 2023, 10:33:21So yeah, comparing OpenCL score with CUDA cards and saying "it's just 4070 level of performance" is fun, but in the end...
In the end? It ends up slower than an equivalent PC in all variants, and even slower than the M1s unless you go with the M3 Max?

M3 Pro vs RTX 4080m Laptop- Blender and Resolve Multicam Timeline Tests + Object Tracking + Timeline ➡➡ youtube.com/watch?v=7MC8WV-6Z_E

M3 Max Benchmarks with Stable Diffusion, LLMs, and 3D Rendering ➡➡ youtube.com/watch?v=YN4jFm-Eg6Q


A

Quote from: Toortle on November 26, 2023, 11:16:38If you need more RAM you can use an eGPU, which will still be faster and cheaper; you can't on a Mac.
Which eGPU specifically will give you that much VRAM?
Quote from: Toortle on November 26, 2023, 11:16:38And you can use 100 GB VRAM on a Mac(Book)?
Yep, thanks to unified RAM almost all RAM can be used as VRAM.
Quote from: Toortle on November 26, 2023, 11:16:38Gets to be slower in all variants including M3 Max than equivalent PC and even slower than M1s unless you go with the M3 Max?
Yeah, that's what I'm talking about. Just cram several unrelated facts together without context and put them into a video. How about generating 4K images in Stable Diffusion on that x86 laptop, which requires 30-40 GB+ of VRAM? 8K? 16K? Yeah, the x86 laptop simply can't do that at all. Speed doesn't even matter in the end. Also, he is using probably the worst possible app for SD generations.

The context for Blender is that it's an open-source app: it adapts to architecture changes very slowly and favors the x86 platform. Apple even had to submit their own code to add support back in the M1 days.


Toortle

Quote from: A on November 26, 2023, 11:48:20
Quote from: Toortle on November 26, 2023, 11:16:38M3 Pro vs RTX 4080m
Btw unfair comparison. Pro is half the GPU.
How is it unfair? You said this:

Quote from: A on November 26, 2023, 11:35:08Yep, thanks to unified RAM almost all RAM can be used as VRAM.

That MacBook in the vid has 18 GB of shared memory, 6 GB more than the RTX 4080 mobile. Why is it incapable of dedicating all 18 GB (as you claim) to VRAM and easily outperforming that 4080 with its 12 GB?

A

Quote from: Toortle on November 26, 2023, 11:52:21How is it unfair? You said this:
I said that x86 laptops will not be able to run big language models. M3 Pro is able to run them, but comparing its GPU _performance_ to 4080 is unfair, because it's half the GPU.

Quote from: A on November 26, 2023, 11:35:08Yep, thanks to unified RAM almost all RAM can be used as VRAM.
That MacBook in the vid has 18 GB of shared memory, 6 GB more than the RTX 4080 mobile. Why is it incapable of dedicating all 18 GB (as you claim) to VRAM and easily outperforming that 4080 with its 12 GB?
Because Apple prioritises system responsiveness, you can't allocate all RAM to the GPU and you can't allocate all RAM bandwidth to any single module. The limits are roughly:
32GB MBP - max 24GB VRAM - 34B LLM models
64GB MBP - max 48GB VRAM - 70B LLM models
128GB MBP - give or take 100GB VRAM - 120B or 3-bit quantized 180B LLM models
It's not about _outperforming_, it's about being unable to run big models on the 4080 in that laptop at all. People are building rigs of two 4090s to run 70B models. So yeah, if you can run it and the 4080 can't - that's outperforming.
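Those caps can be sanity-checked with back-of-the-envelope math: weights take roughly params × bits/8 bytes, plus some overhead for the KV cache and runtime buffers. A minimal sketch (the flat 20% overhead factor is an assumption, not a measured figure):

```python
# Rough LLM memory-footprint estimate.
# Assumption: ~20% overhead on top of the weights for KV cache and buffers.
def model_gb(params_billion: float, bits: int, overhead: float = 0.2) -> float:
    weights_gb = params_billion * bits / 8  # 1B params at 8-bit ~= 1 GB
    return weights_gb * (1 + overhead)

# 70B at 4-bit: ~42 GB -> just fits the 48 GB VRAM cap of a 64 GB machine
print(round(model_gb(70, 4), 1))
# 34B at 4-bit: ~20.4 GB -> fits the 24 GB cap of a 32 GB machine
print(round(model_gb(34, 4), 1))
```

That lines up with the pairings above: each model tier lands just under the VRAM cap of the corresponding RAM size.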

Quote from: Toortle on November 26, 2023, 11:16:38Gets to be slower in all variants including M3 Max than equivalent PC and even slower than M1s unless you go with the M3 Max?
He was using an incorrect way of testing LLMs: some crappy App Store app with a small 7B LLM model that never actually loaded the M SoC enough to show the difference. He should be using llama.cpp with heavy models.

A

We can be more specific, if you wish: grab your x86 laptop and try running the Guanaco 65B LLM at 5-bit quantization, and beat the 12 tokens per second the M1 Max does.

I think you will fail at the "try running" step.
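For context, single-stream LLM generation is mostly memory-bandwidth bound: every token read touches all the weights once, so tokens/s is roughly bandwidth divided by model size. A sketch using the M1 Max's 400 GB/s unified-memory bandwidth (the simple one-pass-per-token model is an assumption; real throughput also depends on compute and caching):

```python
# Bandwidth-bound upper estimate: tokens/s ~= memory bandwidth / model size,
# assuming each generated token streams the full weight set once.
def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

m1_max_bw = 400.0               # GB/s, M1 Max unified-memory bandwidth
guanaco_65b_5bit = 65 * 5 / 8   # ~40.6 GB of weights at 5-bit quantization
print(round(est_tokens_per_sec(m1_max_bw, guanaco_65b_5bit), 1))
```

The estimate comes out near 10 tokens/s, the same ballpark as the ~12 tokens/s claimed above.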

A

youtube.com/watch?v=jaM02mb6JFM

Or you can just watch a video of how the 4090 is faster on a 7B model (no one actually uses those),
slightly faster on a 13B model (probably around the best you can run on an x86 laptop),
and the M Max crushes the 4090 on a 70B model (which is de facto the most balanced for running locally today) - and he explains that building an x86 rig able to run it on GPU costs more than the M Max and uses way more power than the M Max.

Fun starts at 5:47

At 8:52 you can also see the performance/power charts that are quite crucial for AI productivity; the 4090 loses.
