Test Asus Vivobook S 14 OLED laptop - Inexpensive subnotebook is quieter and lasts longer thanks to Lunar Lake

Started by Redaktion, December 31, 2024, 04:29:30


Balanced viewpoint

Both battery life and multicore performance are important. The thing is, though, on the x86 side the battery life improvements are so incremental relative to gen-on-gen price increases that it's almost not worth it anymore. On the other side, Zen 5 had severe latency issues. I know these were fixed on desktop, but have they been fixed for mobile Strix too? Even if they have, that still doesn't change the fact that pricing for them is terrible and availability isn't great either.

Don't know why people get so defensive about the hill they choose to die on. Everything sucks anyway. I would say go Apple instead, but even with them, battery life is only solid under light to medium load. As soon as you look at heavy load, it's 1-2 hours of runtime at most.

Looking forward to Oryon v2 and Nvidia's AI PCs. Hopefully they can make some significant improvements in these areas. We will see.

indy

What browser/version are you using to test WebXPRT/Kraken? I get much higher results (304) from an Intel 155H from 2023 using Firefox 134.

I should be getting lower than your test results.

Antonioqr

I have this laptop and I connect my HDMI cable to my 24" 1080p LG monitor, but I don't receive any signal. I've tried testing the cable and the monitor with other laptops and they work; it's only with this laptop that it doesn't work. I've already updated the BIOS and the graphics drivers, and I've used the Windows+P options, but I still haven't found a solution. Has anyone had this problem or found a solution?

IA

What exactly is the difference between Asus models S5406SA and Q423SA? The specs in this article make them seem identical.

Lopital

Are the Vivobooks with the 8845HS still in production, or have they already been replaced? I would like to buy a Vivobook with the 8845HS, but it is sold out everywhere. Will it be available again?

sharath

Most in this discussion fail to understand the MT performance difference. Yes, the AMD version has twice the performance in MT. But what applications do people know that use 24 threads? Almost none. So when most applications use 4-8 threads, AMD and Intel will perform identically (and in efficiency too). The only place AMD's 24 threads become useful is in specialized scenarios, like massive project compilation (Linux, say) or running two CPU-bound applications at 100%. But here's the question: how many here have ever actually been in that scenario? And the issue isn't even that simple, because RAM per thread also matters in determining whether all those threads can actually be used to boost performance.

Let me give another example. I run massive data backtests; there, keeping the data in RAM for each thread is twice as fast as loading from the fastest SSD. So an 8-thread run in memory can perform the same as a 16-thread run that has to load from SSD in batched processing. The wisdom is: to really use a high core count, you need the RAM to match it. On these systems with only 32 GB of RAM, a 24-thread CPU is wasted.

But on the flip side, additional cores draw idle power. This Lunar Lake I have, a 258V, draws as little as 1.6 watts because of display self-refresh. That is over 40 hours in idle reading mode.

So the core count argument is really a trade-off question. Intel's approach to limited RAM is simply better, at least for me, and I am actually among the heaviest users: with 8 threads I don't need additional code to limit thread counts in order to balance the RAM being used. Everything is at its ideal numbers.

And finally, most heavy-duty tasks like video encoding, graphics work, audio and gaming are done by the GPU, not the CPU, anymore. And here Intel is simply better than AMD, which is why the core count argument seems even weaker for AMD.

Not to mention that software support for Intel is always going to be better if you are working on ML, like PyTorch or TensorFlow AI applications. Intel has the software sorted out to run on any platform: Windows, Linux, WSL. AMD is always going to be a pain, where you won't find support on Windows or WSL.

I do not want to keep a dual boot just to run what I want to run. Intel simply works.

These advantages are far too big for the core count difference on the CPU to matter.
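The RAM-per-thread trade-off above can be sketched numerically. A minimal Python sketch, with purely illustrative numbers (the 4 GB-per-worker figure is an assumption for the example, not from the post):

```python
# Effective parallelism is capped by RAM per worker, not just by CPU thread count.
# The 4 GB-per-worker working set below is an illustrative assumption.

def effective_workers(cpu_threads: int, total_ram_gb: float, ram_per_worker_gb: float) -> int:
    """How many workers can run with their working set held entirely in RAM."""
    ram_limited = int(total_ram_gb // ram_per_worker_gb)
    return min(cpu_threads, ram_limited)

# A 24-thread CPU with 32 GB RAM, each backtest worker needing ~4 GB:
print(effective_workers(24, 32, 4))   # -> 8: RAM, not the CPU, is the ceiling
# The same CPU paired with 96 GB RAM:
print(effective_workers(24, 96, 4))   # -> 24: all threads become usable
```

In practice, a number like this would feed straight into something like `concurrent.futures.ProcessPoolExecutor(max_workers=...)` so the remaining workers each keep their data in RAM instead of spilling to SSD.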

RobertJasiek

On my desktop with dGPU and 8C/16T AMD CPU, running AI inferencing always uses all threads typically at 17% but infrequently at 100%. The AI would profit from the following in order:

1. faster / more dGPU(s) (if the CPU does not create a bottleneck)

2. higher single thread CPU speed

3. more CPU threads (of ordinary cores; I do not know about the slowdown if some are E-cores)

There is no limit to more / faster hardware being more useful because the neural net scales. Notebooks work like downsized desktops.

Toortle

Quote from: RobertJasiek on May 11, 2025, 18:02:27On my desktop with dGPU and 8C/16T AMD CPU, running AI...
Serious question Robert - do you ever do anything else except "running AI"? Because that's pretty much all you ever say in any of your comments around here. No hard feelings, just asking.

indy

99% of AI computing is in the cloud, and will continue to be for the foreseeable future. Adding it to any consumer device is a marketing gimmick. If you need it locally, you are probably running local models, and getting a notebook for that would be silly. The cache (CPU) and memory limitations of most notebooks would prevent any serious work compared to a dedicated workstation.

I'd like to see a local AI model generate Veo 2-level 8-second videos in less than 2 minutes on a notebook. I'll be *very* impressed!

N1x

Quote from: sharath on May 11, 2025, 13:23:08gaming are all done by the GPU not the CPU anymore.

For the longest time this was the case. Unfortunately, the current state of gaming trends is forcing people to resort to retro emulation in response. The market dynamics are heading down a negative trajectory that is unsustainable.

iirc, rpcs3 can use as many threads as you throw at it despite the PS3 only using 6-7; I've seen people saturate 24 threads on some of the heavier games when using high-resolution and unlocked-FPS settings. It's almost like video encoding or running Blender. And this is an older emulator now, not even looking at the future requirements more modern emulators will no doubt bring with them.

Quote from: Toortle on May 11, 2025, 18:39:20do you ever do anything else except "running AI"? Because that's pretty much all you ever say

I think it's closely related to his line of work so he kind of has to?

Quote from: indy on May 11, 2025, 18:53:14Adding it to any consumer device is marketing gimmick.  If you need it locally you are probably running local models and getting a notebook with it would be silly.

I do think there will eventually be something useful from AI, but it's been largely overhyped. Probably two decades from now, people will look at this era the way we look at the 90s Terminator 2 hype, thinking the second coming of Skynet was near because of the internet and Pentium revolution.

Also, every time AI actually does something somewhat novel, it usually takes something like 200,000 GPUs to do it. One has to ask: after spending billions of dollars and wasting all that energy, wouldn't it have been better to just hire an actual person instead? It would support local communities and give people jobs in an already struggling world economy.

Whenever I see people saying AI is the solution to everything, it reminds me of those people trying to send themselves to Mars to colonize it. Wouldn't it just be cheaper to fix this planet instead, if you've got that kind of money?

RobertJasiek

@Toortle, everything except running AI and tablet usage I can do on my simple office mini-PC with a 7100U just as well as on my AI desktop: book writing, PDF writing, Go diagram editing, Go playing against humans, Go teaching, text reading, media editing (simple video cutting or basic image effects), media viewing, file management, browsing etc. Only for AI do I need a dGPU in the fast PC.

RobertJasiek

@N1x, different AI applications require different speeds from NPU via 1 GPU to many GPUs, different amounts of RAM from very little to very much, VRAM as before. Many applications are still a dream, such as automatic Go book writing or mathematical Go theory creation (which I do myself) but some are already useful, such as Go next move suggestion stronger than all human players and therefore accelerating decision-making when better knowledge is not available yet.

Some consider AI text or image creation already useful. I am still sceptical. A million GPUs used for creating LLMs are interesting to watch but not everybody already finds LLMs useful.

We live in a transition phase. AI will become more and more useful. Not to replace doctors but to enhance their tools. We hope. Not a solution to everything - no ethics!

Sharath

Quote from: RobertJasiek on May 11, 2025, 18:02:27On my desktop with dGPU and 8C/16T AMD CPU, running AI inferencing always uses all threads typically at 17% but infrequently at 100%. The AI would profit from the following in order:

1. faster / more dGPU(s) (if the CPU does not create a bottleneck)

2. higher single thread CPU speed

3. more CPU threads (of ordinary cores, do not know about the slowdown if some are E cores)

There is no limit to more / faster hardware being more useful because the neural net scales. Notebooks work like downsized desktops.

Actually, you've got this all wrong. These iGPUs do not need to transfer data over PCIe to the GPU the way a dGPU does. The only reason you see high CPU usage is its need to move data.

On this Lunar Lake, the CPU is almost idle when the iGPU is running DL or AI work, and its ML performance is around 30% of the Nvidia L4 GPU you can rent on Colab. And it can do half precision too, boosting performance 3x. Not to mention it all works on Windows, Linux or WSL.

So are these slower than a dGPU for AI? Yes. But you would be surprised that it is not by the margin you are thinking.

My AI workload at FP32 precision takes 24 seconds on a Colab Nvidia T4 GPU, while on this it takes 44 seconds. But turn on half precision and this does it in 15 seconds.
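Taking those quoted timings at face value, the relative speedups work out as follows (a quick back-of-the-envelope check in Python; the times are the ones from the post above, not independent measurements):

```python
# Timings quoted above (seconds for the same workload).
t4_fp32 = 24.0    # Colab Nvidia T4, FP32
igpu_fp32 = 44.0  # Lunar Lake iGPU, FP32
igpu_fp16 = 15.0  # Lunar Lake iGPU, half precision

print(round(igpu_fp32 / igpu_fp16, 1))  # -> 2.9: roughly the claimed 3x from half precision
print(round(t4_fp32 / igpu_fp16, 1))    # -> 1.6: the FP16 iGPU run beats the FP32 T4 run
```

In frameworks like PyTorch, switching to half precision is typically done with `torch.autocast`, though whether that path is available depends on the backend in use.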

RobertJasiek

Sharath,

1) when running AI, I do not use the iGPU at all. I might also use it for a small increase in speed but prefer to keep the CPU fans rather quiet.

2) You make the false assumption that every AI would have similar load or bandwidth behaviours.

3) For the AI I use, bandwidths and VRAM size are almost immaterial because each object is small.

4) My previous statements about the order of priority (faster dGPU(s), then single-thread speed, then more threads) are based on my monitoring during the previous two years of running my AI. Most of the time, single-thread speed and more threads are also unimportant; these two aspects only become important when the AI hits exception handling for object repetitions reached by different data paths. Usually such repetitions do not occur, and mostly the dGPU speed matters; this has also been confirmed by many other users of Go AI.

5) Floating point is not a bottleneck for Go AI because the objects comprise integers, and probabilities can be represented with a few decimal places, so effectively like integers too. The complexity of Go does not lie in the objects themselves but in the decision-making among them.
