Intel Meteor Lake Analysis - Core Ultra 7 155H only convinces with GPU performance

Started by Redaktion, December 14, 2023, 16:11:36


Dan6

I don't want to buy Meteor Lake, but I need a new laptop this year and there are just no good-quality laptops with Ryzen in my region with 32 GB of RAM. Laptops with Intel CPUs are available with 32 GB and/or non-soldered RAM, but not AMD ones. So even though Ryzen is better in my opinion, I'll have to go with Intel... as simple as that.

Don't Fear the Future

Something seems way off with the results of the Core Ultra 7 when scaling from 28 watts to 45 watts.

Since the TDP for this chip is 28 watts, and it scores 14,073 at 45 watts, the score at 28 watts should be about 11,950.

No chip gets more efficient when going beyond its base TDP.

The rule of thumb is that if you double the watts past a chip's TDP, you only get about a 25% improvement in performance. If you calculate the scores from your 28-to-45-watt graph, all the other chips show roughly that 25% improvement while nearly doubling the watts past their TDP (28 watts), as described above. All except the Core Ultra. The Core Ultra somehow seems to become more efficient when going past its TDP, which can't be the case. Therefore, there seems to be something wrong with the Meteor Lake scores at 28 and 35 watts. (Driver/firmware issues?)
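
As a sanity check, here is a minimal sketch of that back-calculation, assuming the 25%-per-doubling rule of thumb is applied geometrically between 28 W and 45 W; the 14,073 score is the one quoted above, everything else is just the rule of thumb:

import math

# Back-calculate the expected 28 W score from the measured 45 W score, using the
# "~25% more performance per doubling of power past TDP" rule of thumb quoted above.
# The geometric interpolation between 28 W and 45 W is an assumption, not an exact model.
score_45w = 14073              # Core Ultra 7 155H score at 45 W (from the post above)
gain_per_doubling = 1.25       # ~25% improvement per doubling of power

doublings = math.log2(45 / 28)                             # ~0.68 doublings from 28 W to 45 W
expected_28w = score_45w / gain_per_doubling ** doublings
print(round(expected_28w))                                 # ~12100, same ballpark as the ~11,950 above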

Note: I'm not blaming Notebookcheck's results here. I think there is some kind of flaw with the test machine itself.


Sharath Naik

There is clearly something wrong with the way this laptop is driving the CPU. The power-efficiency gap closes rapidly as it approaches 45 watts, which tells me there is a problem with the low-power 28-watt configuration. No doubt, if they close the gap at 28 watts from the current 30% behind the 7840U to closer to 10% behind (via a BIOS or microcode fix), then Meteor Lake becomes a real alternative. But as it stands, that 30% lower efficiency at 28 watts makes it a no-go as an alternative choice.

Battlemage w lots VRAM,ty

If it's true that INTEL is only 30% less efficient in the iGPU compared to AMD's 7840U, then that's not too bad and gives hope that INTEL's Battlemage GPU could be good (INTEL, for local inference, give it a lot of RAM to compensate for wherever you lag behind AMD and NGREEDIA).
But on the CPU side, INTEL is 95% to 130% less efficient than AMD's 7840U, which is quite huge. Also, Gamers Nexus' recent video shows that in desktop CPUs, INTEL is 2x (100%) to 3x (200%) less efficient than AMD. Efficiency in stationary machines matters especially for server providers, so INTEL must improve its efficiency [basically its lithography] and they know it [as they recently bought ASML's newest EUV machines].

vladk

Something is wrong with those Ultra cores.

The GPU improved but is still not great, and useless for my use case (web, code, office work).
The AI features are still trendy, useless b.s. to me, and that's even more true on Windows, where they're only used for some background stuff with the camera. Maybe there will be some use in the future, but it's still a maybe.

Power consumption, from what I read, is clearly not there, and it was the biggest promise of this generation...

Overall, it could even be worse on power than my good old 1165G7.

As for the competition, Windows on ARM and its Qualcomm chips are still far from being ready for anything, though I have a bit more hope there than in Intel now. AMD has only ever been a cheaper alternative to Intel for me, not a better one.

I don't want to switch to Apple and their ultra-expensive walled garden either.

Also, Intel's next node, Intel 3, is for servers; we will have to deal with Intel 4 for the Ultras for some time.

Have we hit the Moore's Law barrier so that no improvement can be made? I don't think so; I think Intel is sleeping again due to a lack of competition.


NikoB

Quote from: vladk on December 29, 2023, 09:18:35
...and useless for my use case (web, code, office work)
You probably made a mistake when typing that, because it is precisely in these areas that integrated GPUs are more than sufficient.
The problem is that, even now, x86 power consumption when decoding video is several times higher than that of smartphones, i.e. x86 is extremely inefficient in this regard.

This is what we have to fight against.

Moreover, sneaky Google (first of all) deliberately disables hardware decoding in new browser versions on W10. I recently updated a 2019 laptop with a Zen+ APU (3500U): I installed the latest 2023 drivers from AMD and the latest Chrome, version 120 (Chrome has always put a minimal load on the cores and the integrated GPU compared to other browsers, except for the old Edge, which has since switched to Chromium and therefore no longer makes sense to use). What did I find under M$ W10 Pro installed from the official ISO (and also under LTSC 1809)? In the GPU tab of Task Manager, there is now no load shown under Video Decode while playing a video (in both Pro and LTSC, although it was visible before), only 3D is active, and the total CPU+GPU load has increased significantly. If I roll back to the May 2020 AMD driver and Chrome version 99, everything works fine; it keeps working with Chrome up to roughly version 100, after which the API for reaching the VLD decoders through the browser layer (the MF wrapper for VP9, which is built into Home/Pro/Education) is apparently no longer used.
That wrapper is missing out of the box in the LTSC versions, so it must be installed from the MS application store as "VP9 Video Extensions" (by the way, it is paid there, but a free version exists that few people know about).
Google now uses a different scheme that refuses to work with perfectly normal 2019 hardware (Lenovo supports this laptop series until 2026).
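
As a rough way to reproduce this kind of comparison yourself, here is a minimal sketch that plays the same clip twice in Chrome, once as-is and once with hardware decoding forced off via the --disable-accelerated-video-decode switch, and compares the average CPU load. The Chrome path and the test URL are placeholders to adjust, psutil has to be installed, and this is an illustration, not a rigorous benchmark:

import subprocess, tempfile, time
import psutil  # third-party: pip install psutil

CHROME = r"C:\Program Files\Google\Chrome\Application\chrome.exe"  # placeholder path, adjust to your install
VIDEO_URL = "https://www.youtube.com/watch?v=VIDEO_ID"             # placeholder: any VP9 test clip

def average_cpu_load(extra_args, seconds=30):
    # Use a throwaway profile so a new Chrome instance actually starts,
    # instead of opening a tab in one that is already running.
    profile = tempfile.mkdtemp()
    proc = subprocess.Popen([CHROME, f"--user-data-dir={profile}", *extra_args, VIDEO_URL])
    time.sleep(10)                                                  # let playback start
    samples = [psutil.cpu_percent(interval=1) for _ in range(seconds)]
    proc.terminate()
    return sum(samples) / len(samples)

hw = average_cpu_load([])                                          # hardware decode, if the browser allows it
sw = average_cpu_load(["--disable-accelerated-video-decode"])      # forced software decode
print(f"avg CPU load: {hw:.1f}% with HW decode allowed, {sw:.1f}% with forced SW decode")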

In Firefox, hardware acceleration still works, but there, traditionally, the load is much higher than in Chrome even with hardware acceleration. Firefox is only good for software video decoding (on machines with no hardware VP9 decoder): then the CPU and GPU load is sharply lower than with software decoding in any version of Chrome. Therefore, on machines without a hardware VP9 decoder I always use Firefox and recommend it to friends with old hardware (besides, it lets you disable the muddy, incorrect font smoothing, unlike Chrome from version 50 onward, where it cannot be disabled; that is much easier on the eyes).

The funny thing is that on an Intel i5-8300H with 2018 Intel drivers, every version of Chrome handles hardware VP9 video decoding on YouTube normally, on the GPU built into the processor, both under W10 Pro and LTSC 2019/2021.

In other words, the fact that Chrome does not work correctly with 2019 AMD integrated GPUs under W10, even with the latest 2023 drivers and the latest version of Chrome, is both AMD's and Google's fault.

Intel also has its own history of meanness. For example, back around 2010 they deliberately disabled the DXVA/DXVA2 decoder on XP (although the hardware actually supported everything), so hardware acceleration did not work on XP even in video players, and on W7 it worked terribly. Intel effectively forced customers to upgrade to W7 when they didn't need to.

Since then, not much has changed: there are constant backroom agreements and petty moves to drop support for still-current (and officially supported) OS versions in order to deliberately push people onto OS versions that are more profitable for them, currently the crooked and buggy W11 (which still has no stable kernel build, because they still have not released an LTSC version).

Quote from: vladk on December 29, 2023, 09:18:35
Have we hit the Moore's Law barrier so that no improvement can be made?
We are almost there; Moore's "law" died long ago. The performance-per-watt growth curve has been much flatter for several years now than it was 10-15 years ago.


mixed bag

Meteor Lake is a mixed bag right now.

There is still a lot of margin for improvement, as all those optimizations rely on software actually using the new AI and SoC tiles.

I guess Windows is a better candidate than Linux when it comes to updating and using those drivers.

Still, I think I will buy one; after all, that's how Apple did it when switching to their new ARM architecture, and it was a success. If users embrace it, it will work.

Still, it looks like the Intel 4 process is not as good as expected. I read it uses EUV for all the critical layers, so it should give better results, at least on par with TSMC. Or is it because Intel has to deal with all its legacy CISC instruction sets and superscalar designs that were good in their time? If so, that would be a major concern for them. Better power consumption than the H series for sure, but performance per watt is still about 50% short of AMD or Apple, which are on TSMC processes. But yeah, I prefer Intel and have a real hatred for Apple's walled garden (especially on software), prices (paying for RAM like crazy), and privacy (they just hide things better...).

A

Quote from: mixed bag on December 31, 2023, 11:26:26
Apple's walled garden (especially on software)
That's about iOS (and it's a good thing for a mobile device actually; it keeps users protected and developers fed).
You can run anything on a Mac, including Windows. Back in the Intel days you could even use Windows as the main OS.

Quote from: mixed bag on December 31, 2023, 11:26:26
privacy (they just hide things better...)
No evidence = they just hide it better? Okay.

RobertJasiek

Read and understand their iCloud terms.

A

Quote from: RobertJasiek on December 31, 2023, 12:07:49
Read and understand their iCloud terms
You can disable everything iCloud-related completely; it does have some useful features under the same branding though, like generating random email addresses for you or providing Apple's VPN. "Find My" is also behind the iCloud branding, but everything can be disabled.

So the "iCloud terms" are not just for iCloud as in "iCloud Drive"; they cover all Apple online services using that brand name.

macOS/iOS lets you turn off telemetry on one of the first screens during device setup.

mixed bag

Quote from: NikoB on December 30, 2023, 16:04:44
The problem is that, even now, x86 power consumption when decoding video is several times higher than that of smartphones, i.e. x86 is extremely inefficient in this regard. [...]
We are almost there; Moore's "law" died long ago. The performance-per-watt growth curve has been much flatter for several years now than it was 10-15 years ago.

Yes, it's a complete mess of legacy and redundancy. It seems this time Intel did it right with the SoC tile for decoding and the AI camera, and with the right third-party drivers, Windows implementation and APIs. Now third-party software has to follow; that's why Apple succeeded with the M1. This time Intel is trying its best, with their evangelists, APIs, drivers and marketing.
Still, they have their legacy and their not-so-great Intel 4 process, which is still better than Samsung's but not as good as TSMC's.

mixed bag

Quote from: A on December 31, 2023, 11:45:19
Quote from: mixed bag on December 31, 2023, 11:26:26
Apple's walled garden (especially on software)
That's about iOS (and it's a good thing for a mobile device actually; it keeps users protected and developers fed).
You can run anything on a Mac, including Windows. Back in the Intel days you could even use Windows as the main OS.

Quote from: mixed bag on December 31, 2023, 11:26:26
privacy (they just hide things better...)
No evidence = they just hide it better? Okay.

I didn't write that to start a debate. Anyway, it's been proven many times before. For example, a single tap in Apple's store sends 100 KB of private JSON data, whatever the privacy settings are. It's just crazy. Apple simply hides it, and they leak and use private data at least as much as the others.

Also, you can't easily install just any software on a Mac. I read about it mostly for c.acked software, I admit, not the best example; it's a complicated and unreliable process of working around the signature checks. What I read is enough to make me not want to try their store at all. I tried an iPhone twice, and used a Mac a few times for work, and it was more than enough to convince me it's the most walled garden that has ever existed, and it's always been like that. If some people like it, that's fine, it's just not my cup of tea; I need tweaking and some control of the underlying system myself.
