
Apple MacBook Pro 14 2023 M3 Max Review - The fastest CPU in a 14-inch laptop

Started by Redaktion, November 21, 2023, 00:15:27


Plum

I'm not using effects, but will try the lightning thing as well, thanks!

Could be that it happens under Windows as well, but I don't notice it to that extent on my Windows machine.

On Windows it also wouldn't affect me that much, as I'm not able to get through a full working day anyway. I had hopes for the Mac, though.


A

Quote from: Plum on November 28, 2023, 14:30:43Could be that it happens under Windows as well, but I don't notice it to that extent on my Windows machine.
Check the CPU usage the next time you experience it.

NikoB

None of you have or will ever have access to the source code of large, truly valuable neural network models. Access to their API is always paid.

A request to a really large model is extremely energy-intensive with modern available technologies. This is the second factor that only leads to paid access.

No household PC is capable of executing these models, even if the source code were stolen - their resources are simply not enough. And it won't be able to in the next 15 years for sure.

Therefore, the light version of "AI" in your pocket will not be earlier than 2050, and even then this is rather an optimistic forecast. Most likely no earlier than 2070. Provided that civilization continues to develop and does not collapse during this time to a more primitive one.

A



NikoB

Quote from: A on November 27, 2023, 11:59:35So if you cherry-pick benchmarks you can make either of these two look bad or very bad. Reality is they are roughly on par, with Apple Silicon being more of a LAPTOP and x86 being a PORTABLE DESKTOP, like that $4300 "laptop" the other guy advertised to me yesterday, with 4 hr web-surfing battery life.
This line from stupid chatbot A is especially funny.

A $1200 laptop with a 7945HX+4060 is significantly faster than the $8,000 top-end Mac 2023. And that's it. That's the verdict. =)

A

Quote from: NikoB on November 28, 2023, 20:05:59CharBot A, turn off. This forum for people.
Exactly, clowns like you are not welcome.

Quote from: NikoB on November 28, 2023, 17:34:09A request to a really large model is extremely energy-intensive with modern available technologies.
Do your homework before posting; ChatGPT requests cost on the order of 1-10 watt-hours each. Or did you think a whole server rack was working on your request? ))) At least do some googling.
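As a sanity check on that energy claim, here's a back-of-the-envelope sketch. Every input (GPU count, power draw, generation time, batch size) is an illustrative assumption, not a measured figure for any real deployment:

```python
# Rough per-request energy estimate for serving a large LLM.
# All four inputs below are illustrative assumptions, not measurements.
gpus_per_replica = 8        # assumed GPUs hosting one model replica
gpu_power_w = 400           # assumed average draw per GPU, in watts
seconds_per_request = 5     # assumed time to generate one response
concurrent_requests = 16    # assumed requests sharing the replica (batching)

joules = gpus_per_replica * gpu_power_w * seconds_per_request / concurrent_requests
watt_hours = joules / 3600
print(f"~{watt_hours:.2f} Wh per request")
```

With these assumptions the result lands under 0.3 Wh, i.e. single-digit watt-hours at most per request: the point is that batching spreads the rack's power across many users.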

Quote from: NikoB on November 28, 2023, 17:34:09No household PC is capable of executing these models, even if the source code were stolen - their resources are simply not enough. And it won't be able to in the next 15 years for sure.
Go to r/localllama subreddit, tell them, lol.

Quote from: NikoB on November 28, 2023, 20:10:07
Quote from: A on November 27, 2023, 11:59:35So if you cherry-pick benchmarks you can make either of these two look bad or very bad. Reality is they are roughly on par, with Apple Silicon being more of a LAPTOP and x86 being a PORTABLE DESKTOP, like that $4300 "laptop" the other guy advertised to me yesterday, with 4 hr web-surfing battery life.

Quote from: NikoB on November 28, 2023, 20:10:07A $1200 laptop with a 7945HX+4060 is significantly faster than the $8,000 top-end Mac 2023. And that's it. That's the verdict. =)
Lol, not even going to ask where you got a $4300 laptop for $1200, or what the "$8000 top-end Mac 2023" is.

NikoB

Quote from: A on November 28, 2023, 20:15:59Lol, not even going to ask where you got a $4300 laptop for $1200, or what the "$8000 top-end Mac 2023" is.
Dude, they are freely available in a bunch of stores from China.

Can you imagine how the Apple fans here are burning? They spend a lot of money on the top 2023 model, and end up with a weak 2022 computer, instead of a powerful x86 2023 with AMD+NVidia literally for pennies compared to Apple prices...

A

Quote from: NikoB on November 28, 2023, 17:34:09No household PC is capable of executing these models
180B LLM (10B parameters more than GPT-3) runs on
any PC with 2x A100 80GB (two are needed for the VRAM)
=OR=
an Apple Silicon Mac with 192GB RAM

youtube.com/watch?v=Zm1YodWOgyU

You are now 4x smarter after reading this message, NikoB.

Quote from: NikoB on November 28, 2023, 20:27:48They spend a lot of money on the top 2023 model, and end up with a weak 2022 computer, instead of a powerful x86 2023 with AMD+NVidia literally for pennies compared to Apple prices...
I'm not even gonna discuss this sh*t you made up (from a $1200 laptop beating anything to 'China iz chep') because it's obviously not true and you'll just end up butthurt and insulting me. Do your own homework.

NikoB

Once again, for the stupid: really valuable neural network models are all closed and you will never get them in the public domain. Really powerful ones (that is, models useful at an expert level) require tens of terabytes of HBM memory, at a minimum. Not to mention the disk space to download them.

Tell the grandmothers on the benches how you run heavy models on your Macs. This only causes Homeric laughter and nothing more.

Quote from: A on November 28, 2023, 20:31:27I'm not even gonna discuss this sh*t you made up (from $1200 laptop beating anything to 'China
You make me laugh, pathetic bot; this is the L5Pro from Lenovo. Can you imagine what a shame it is for Apple that their top-end $9000 Mac is inferior in processor speed to a $1200 Chinese laptop? =)

A

Quote from: NikoB on November 28, 2023, 21:05:50really valuable neural network models
Luckily we have real LLM benchmarks and most of them say Falcon-180B is between GPT3.5 and GPT4. You are just dumb and have no idea, admit it already. Just google 'falcon-180b', clown.

Quote from: NikoB on November 28, 2023, 21:05:50closed and you will never get them in the public
Open-source community always has models that are on par with prev generation of commercial models or sometimes even better, e.g. Stable Diffusion is better than paid Midjourney.

Quote from: NikoB on November 28, 2023, 21:05:50Not to mention the disk space to download them.
Lol, so you didn't even google, right? Actually it's about the level of an IT company's janitor to think the model carries the whole training set along. I will surprise you, but Falcon-180B is around 300GB, or even less quantized. Rule of thumb: model file size = model VRAM requirements, give or take.
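The "file size ≈ VRAM" rule of thumb follows from the weights dominating both numbers. A minimal sketch (the 180B parameter count is from the thread; the helper name and byte math are mine, and it ignores activation/KV-cache overhead):

```python
def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate storage for the weights alone, in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Falcon-180B at common precisions:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_gb(180, bits):.0f} GB")
```

That gives roughly 360 GB at 16-bit, 180 GB at 8-bit, and 90 GB at 4-bit, which is why a quantized 180B model can squeeze into 2x 80GB A100s or a 192GB Mac, and why "~300GB or even less quantized" is plausible.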

Quote from: NikoB on November 28, 2023, 21:05:50L5Pro from Lenovo
Slower CPU, on par GPU, worse efficiency, worse battery life. Are we done, or will you go on posting your bs "no it's not", "no it's not"?

NikoB

Quote from: A on November 28, 2023, 21:15:38Open-source community always has models that are on par with prev generation of commercial models or sometimes even better, e.g. Stable Diffusion is better than paid Midjourney.
Keep believing it, stupid little boy. Once you really grow up, you learn that the world is a little different...

Quote from: A on November 28, 2023, 21:15:38I will surprise you, but Falcon-180B is around 300GB, or even less quantized.
Again funny to tears. Again the blind faith that a little boy can have everything...

Quote from: A on November 28, 2023, 21:15:38Slower CPU, on par GPU
Maybe you should still see your family doctor? Cinebench R15: 5200-5400 (7945HX) vs 3200-3300 (M3 Max in Turbo mode). $1200 vs $9000...

A

Quote from: NikoB on November 28, 2023, 21:36:57Keep believing it, stupid little boy. Once you really grow up, you learn that the world is a little different...
There's no need to 'believe', it's widely acclaimed lol

Quote from: NikoB on November 28, 2023, 21:36:57Again funny to tears. Again the blind faith that a little boy can have everything...
You can literally download it yourself at Huggingface.com and see the file size with your own eyes )))))

Quote from: NikoB on November 28, 2023, 21:36:57Maybe you should still see your family doctor? Cinebench R15: 5200-5400 (7945HX) vs 3200-3300 (M3 Max in Turbo mode). $1200 vs $9000...
Cinebench is built on Intel's Embree library, with a decade of hand optimization for x86. Plus it tests a task that never happens in the real world; no one renders on the CPU in Redshift. See the Geekbench + Wildlife Extreme gaming tests, and don't forget to unplug your x86 laptop and test it too.

Geekbench
7945HX: 2696 / 15248 - and less when unplugged
M3 Max: 3078 / 21127 - same when unplugged

Wildlife Extreme
4060: 19185 - and less when unplugged
M3 Max: 31268

The display is night and day, battery life is night and day, efficiency is night and day. And the M3 Max is $3500, not $8000. Don't skip your pills; the guy with the $4300 laptop was at least better on GPU.
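Taking the scores quoted above at face value (they are forum-reported numbers, not verified measurements), the implied ratios work out as:

```python
# Score ratios from the benchmark numbers quoted in this post.
scores = {
    "Geekbench single-core": ("7945HX", 2696, "M3 Max", 3078),
    "Geekbench multi-core":  ("7945HX", 15248, "M3 Max", 21127),
    "Wildlife Extreme":      ("4060", 19185, "M3 Max", 31268),
}
ratios = {}
for bench, (a_name, a, b_name, b) in scores.items():
    ratios[bench] = b / a
    print(f"{bench}: {b_name} = {b / a:.2f}x {a_name}")
```

So roughly 1.14x single-core, 1.39x multi-core, and 1.63x in the GPU test, before accounting for the on-battery drop the post mentions for the x86 machine.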


Plum

@A Not only Russians use those letters.

As to why you are not surprised? Probably because you are trying to confirm your own biases... It's hard to surprise yourself.
