
Topic summary

Posted by vertigo
 - October 08, 2020, 16:50:20
Quote from: _MT_ on October 08, 2020, 11:28:32
Quote from: vertigo on October 07, 2020, 23:15:25
Well, true, you shouldn't entirely blame Intel, as a lot of the blame lies with the OEMs, but Intel could, and should, apply more leverage on the OEMs to ensure they're using their chips properly...
Yes, it reflects badly on Intel. So, they might want to do something about it. However, do they really perceive it as a problem? ...

Yeah, I know all about the exponential increase in power usage combined with the diminishing returns of increased frequency when overclocking, as I've overclocked a couple of my computers over the years. Aside from getting lazy about it, CPUs becoming so powerful it wasn't really needed as much, and doing less encoding (the main thing I needed a lot of CPU power for), that's also why I quit bothering with it after doing it a couple of times. It just wasn't worth spending hours tweaking, only to have the machine run a lot hotter and risk instability, all for what ended up being decent, but by no means huge, gains in performance. Though I always thought that was more about running the chip faster than it was designed for, not something inherent with a hard limit, which I've since learned it is, as chips struggle to pass 5 GHz even today.

I've also been saying for years that I'd prefer mobile CPUs to focus more on efficiency than on just getting faster. In fact, I remember years ago being upset about the opposite: for a while they were more concerned with making desktop CPUs more efficient, and their performance was improving quite slowly as a result. Though I can certainly understand that, as even desktops have thermal limitations, not to mention most people would prefer not to spend half their electric bill powering their computer. But I'd be much happier with a new generation that kept performance the same and cut power use by 25-30% than one that increased performance, which we have plenty of at this point, but did little for battery life.

And that's why boost/base, base/throttling, whatever, doesn't really matter; what matters is the overall picture of performance vs battery life. But then, boost and throttling do matter, because, generally speaking, a laptop that is able to boost longer has better, more efficient cooling, which means it will likely need to run the fans less, which means better battery life, since fans use a lot of power. And computers that actually throttle, which happens sometimes, are going to be very bad, both because of the lousy cooling and because of the crappy performance relative to others, not to mention I don't care to support a company that feels it's OK to put out junk like that.

Unfortunately, just knowing the CPU tells the consumer nothing about actual performance and battery life, and therein lies the problem. That's why Intel should tighten things up, as they're starting to do with Evo (which I have doubts about, but that's a separate issue); they have the power to do so, but as you and I have both said, they just don't care. So different OEMs take the same chip, and one makes a laptop with good performance and battery life while another makes one with lousy performance and battery life, but all the general consumer knows is that it has an i3/5/7, and now that means even less than it already did, which is definitely on Intel.

And I'm the same way as you, in that I generally look at the mid-tier (i5/R5) for laptops, since it's plenty of power, typically not much less than the 7's, has potentially better battery life due to fewer cores, lower frequency, and a lower TDP, and costs less. Especially when the cooling can't even keep up and you end up with performance similar to or even worse than the lower-tier chip, which really pisses me off: OEMs think it's OK to charge people a premium for such lousy performance. Which is why it's so frustrating that so many OEMs require you to go with the top-end chip to get >8GB RAM. So it's not always up to the consumer, except to avoid the product entirely, which I've done multiple times.
Posted by Hifihedgehog
 - October 08, 2020, 15:53:17
Quote from: Mark S. on October 05, 2020, 12:55:15
Quote from: Veyron on October 05, 2020, 11:33:52
Quote: "While Intel's Tiger Lake lineup has certainly lived up to its performance promises"

Lost me there.
The first graph contradicts the first sentence of the article :)

Agreed. Tiger Lake shows about half the all-core performance of Renoir at the same or higher power draw. This article is littered with misinformation, including the part where the author states that the Ryzen 7 4700U is the same chip as the Ryzen 9 4900U. They may all be binned from the same die, but SMT is disabled on the 4700U, meaning 8 threads compared to the 4900's 16. Besides, if we want to play that fast and loose with the definition of "the same chip," then by that token a Ryzen 3 4300U has the same chip as the 4900U, though they are worlds apart in performance given that the 4300U has just 4 cores and 4 threads compared to the 4900's 8 cores and 16 threads! Therefore, it only makes sense to say processors are the same chip when they have the same core and thread counts and the same microarchitecture, differing only in clock speed.
Posted by Hifihedgehog
 - October 08, 2020, 15:39:31
 "Ryzen 7 4700U, Ryzen 7 4800U, and Ryzen 9 4900U are all the same chip."

No, they are not. The 4700U has no SMT, or is 8 cores and 8 threads. Meanwhile, the 4800U and 4900U are the same chip with SMT left enabled, having 8 cores and 16 threads. I understand the point of the article, but this is incorrect.
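
For reference, here is the disputed lineup as data, a minimal sketch using the core/thread counts cited in this thread (note that, as gx points out below, a "4900U" was never an actual part; the real top Renoir parts are the 4900H/HS):

# Renoir core/thread counts as cited in this thread. The "4900U" from the
# article is kept only to mirror the discussion; per gx below, it was
# never a real SKU (the actual parts are the 4900H/HS).
renoir = {
    "Ryzen 3 4300U": (4, 4),     # SMT disabled
    "Ryzen 7 4700U": (8, 8),     # SMT disabled
    "Ryzen 7 4800U": (8, 16),    # SMT enabled
    "Ryzen 9 4900H/HS": (8, 16), # SMT enabled
}

for name, (cores, threads) in renoir.items():
    smt = "SMT on" if threads > cores else "SMT off"
    print(f"{name}: {cores}C/{threads}T ({smt})")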
Posted by _MT_
 - October 08, 2020, 11:28:32
Quote from: vertigo on October 07, 2020, 23:15:25
Well, true, you shouldn't entirely blame Intel, as a lot of the blame lies with the OEMs, but Intel could, and should, apply more leverage on the OEMs to ensure they're using their chips properly...
Yes, it reflects badly on Intel. So, they might want to do something about it. However, do they really perceive it as a problem? Yes, we complain. But does the broader target audience complain about higher boost not being sustainable, and do they demand high frequencies? Even though those are not a good idea in a mobile device and they put Intel's platform at a bigger disadvantage compared to ARM? I'm not convinced. I think an even bigger problem is desktop motherboard manufacturers screwing with power limits and tricking CPUs into drawing way more than they should, which ends up looking bad for Intel in reviews. They haven't really stepped on their necks yet. And restricting access and giving them less freedom might have a negative impact on enthusiasts. So, how much do we really want them to protect their reputation? And in this case, what can they do? Refuse to sell them chips? Intel does offer a lot of engineering support, much more than AMD. They put a lot of money into laptop development. I don't think this (sustainable higher frequencies) is their target. As I said, it's not a good strategy in mobile devices. You can always do more. Intel could do more. But as I wrote, do they really care about this in particular? Remember what the intention of turbo boost was, what Intel desired. If anything, I can more easily imagine Intel being unhappy about companies that push boost too far and end up making really hot and/or noisy ultrabooks.

The big difference between base and boost is that base has to be sustainable. It carries a significantly harder guarantee, while boost allows you to go into wonderland. It's like automatic factory overclocking, squeezing the silicon for almost all it's worth. Old processors couldn't do anything like this; their top frequencies really were base frequencies. Also, base is used for sizing cooling; that's why TDP is specified at base frequency. They probably don't want to make their mainline desktop processors 125 W TDP instead of 65, because that would have impacts on manufacturing. And who knows whether the silicon can even take it at the more extreme end, or what kind of longevity you'd be looking at. For laptops, the argument is even simpler. Desktop frequencies and performance come at desktop power. Laptops have to observe the same laws of physics, and sustained desktop-like power draw is just not feasible in a typical laptop (we have been reducing volume for many years), nor desirable.

The problem isn't the higher switching rate. That's a linear term. Hypothetically, the energy required for a given task should remain the same (in reality, you have to deal with things like memory latency that can increase the number of wasted cycles). The processor doesn't save energy this way, but higher frequency does increase the required power and therefore cooling. The only way you save energy (beyond what I will write about in a moment) is on ancillaries like the display that draw power regardless of frequency. The problem is the voltage. Higher frequencies require higher voltages for stable operation, and required power rises with the square of voltage (so 10% more voltage is 21% more power; 20 => 44; 30 => 69). Each transistor switch becomes 21, 44, 69, ... percent more expensive. That's the efficiency killer. That's why mobile chips are low voltage or ultra low voltage, and part of the reason why they have such low base frequencies (higher frequencies are not sustainable at those voltages). Desktop frequencies require desktop voltages, which means a big drop in efficiency for a ULV processor. That said, you can't go too slow either, because there is leakage current; transistors leak even when they're switched off. So it's better to alternate between running at an efficient frequency and being off than to run constantly at a very low frequency. AFAIK, on modern nodes, leakage is a serious consideration (it really restricts how low it makes sense to go). But just because it's a bad idea to run at 500 MHz (it would be better to run at 1.5 GHz for 1/3 of the time and switch off for 2/3) doesn't mean it's a good idea to run at 3 GHz.
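
To put numbers on the voltage argument, here is a minimal sketch of the usual dynamic-power model (P ~ C * V^2 * f); the capacitance constant, the V/f pairs, and the leakage figure are illustrative assumptions, not data for any real chip:

# Dynamic power: P = C * V^2 * f. Time for a fixed amount of work scales
# as 1/f, so energy per task ~ C * V^2 * work: voltage is what matters.
# All constants below are made-up illustrative values.
C = 1.0  # lumped switched capacitance (arbitrary units)

def power(v, f):
    return C * v**2 * f

# The rule quoted above: +10% voltage => 1.1^2 - 1 = +21% power; 20 => 44; 30 => 69.
for bump in (0.10, 0.20, 0.30):
    print(f"+{bump:.0%} V -> +{(1 + bump)**2 - 1:.0%} power per switch")

# Race-to-idle: leakage makes very low frequencies inefficient. Compare
# running at 0.5 GHz continuously vs. 1.5 GHz for 1/3 of the time.
leak = 0.2                                             # assumed leakage power while active
e_slow = (power(0.60, 0.5) + leak) * 3.0               # 3 time units at 0.5 GHz
e_race = (power(0.70, 1.5) + leak) * 1.0 + 0.0 * 2.0   # 1 unit at 1.5 GHz, then power-gated
print(f"energy at 0.5 GHz: {e_slow:.2f}, race-to-idle at 1.5 GHz: {e_race:.2f}")

With these assumed numbers the race-to-idle option wins (0.94 vs 1.14), matching the 500 MHz example above, while the V^2 term shows why pushing toward desktop frequencies wrecks efficiency.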

It's well known that many consumers are clueless. It's a free world: you're free to make stupid decisions, or to educate yourself. For me, boost means I don't necessarily need to buy a high-end model with the same number of cores to get performance. My default would be to buy the cheaper version and only buy the more expensive one if I can find evidence that the laptop can utilize it (especially when undervolting was an option). What the higher model is really about is a higher bin. That's what you're paying for. Those processors should be more efficient (they should require less voltage for any given frequency), which means they should sustain higher frequencies in the same chassis. That's also why they have higher base and boost frequencies; it's a higher-quality piece of silicon. It's up to me whether I'm willing to pay the premium or not. And with typical consumer-focused manufacturers, you might not really have a choice because of the limited configurations available. The flip side is that I can return it if I don't like it, unlike a made-to-order configuration from the likes of Dell or Lenovo (where I don't get the same consumer protections because of the nature of custom work). There, I really want to know what I'm getting. Annoyingly, the high-powered components are also the expensive components. You're at the highest risk when you want to splurge on something really nice.
Posted by vertigo
 - October 07, 2020, 23:15:25
Well, true, you shouldn't entirely blame Intel, as a lot of the blame lies with the OEMs, but Intel could, and should, apply more leverage on the OEMs to ensure they're using their chips properly. Essentially, like they're starting to do with Evo, but extended, or separately, to ensure they meet certain minimum performance requirements, e.g. not having them perform like some laptops NBR has reviewed where they actually do throttle the chip.

And it does make perfect sense to consider boost as the base, as that's exactly how CPUs have worked for years with SpeedStep and Speed Shift. They have a max frequency, which is what they're rated at, and they can, and do, frequently change frequency to lower speeds (so lots of switching is going on, and yet battery is saved, because even if the switching itself uses a hair more, energy is saved by running in the lower power states). This is why I say base/boost sounds more like a marketing team at work: it's simply the inverse situation. Instead of running at, e.g., 4 GHz all the time and stepping down to 3/2/1 GHz, it runs at, say, 3.8, boosts to 4.5, and steps down to 3/2/1. So in this example it can go faster, but only very briefly; otherwise it runs at a similar or even lower speed, and still steps down.

My point was that no matter how you look at it, the frequency is jumping all over the place. Either way, the chip spends most of its time at lower frequencies to conserve battery when idling or under a light workload, ramps up when stressed (whether to its normal/rated speed or, briefly, to a boost speed), then drops back down. It's doing essentially the same thing either way, but calling it a boost does two things: it makes the chip sound better, and it allows the use of a higher number, because the chip doesn't have to be able to sustain it, even for a minimum amount of time. Most consumers are going to prefer a 4/4.5 base/boost CPU to a 4 or even 4.2 GHz chip, even though in many cases the former wouldn't be able to sustain the boost long enough to make much difference. Under load they're both going to perform roughly the same, with the former being slightly faster for a few seconds initially (and yes, I realize this can sometimes make a difference, but it's almost always going to be very minor), and when the work is done they're going to throttle back. Except the former is going to scale up (using more power), do some work for a few seconds before getting hot, scale back down (using more power), and then run at roughly the same speed as the latter.
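
As a toy illustration of that "same behavior, two labels" point, using the example numbers above (3.8/4.5 GHz); the trace itself is hypothetical:

# One hypothetical frequency trace over a bursty load, described two ways:
# "3.8 base / 4.5 boost" vs. "4.5 rated, throttles to 3.8".
trace = [1.0, 4.5, 4.5, 3.8, 3.8, 3.8, 2.0, 1.0]  # GHz, made-up samples

for f in trace:
    boost_view = "boost" if f > 3.8 else ("base" if f == 3.8 else "power save")
    throttle_view = "rated" if f == 4.5 else ("throttled" if f == 3.8 else "power save")
    print(f"{f:.1f} GHz -> base/boost framing: {boost_view:10} | throttle framing: {throttle_view}")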

And as far as I'm concerned, this is no different from calling it a 4.5 GHz CPU that throttles down to 4 GHz due to heat, only as marketing that would go over terribly, since people would be upset that their 4.5 GHz CPU almost never actually runs that fast. But functionally speaking, it's the same thing. When it comes down to it, it really doesn't matter which way you look at it; what matters is what speeds the CPU is capable of and for how long, how fast that allows it to complete a given workload, and how much it chews through the battery. And both Intel/AMD and the OEMs play a role in this.

That's why I personally would never buy a laptop based solely on the CPU. I look at how fast it performs certain tasks (particularly H.264 encoding, as that's a good, consistent real-world benchmark, though frankly CPUs have gotten so powerful I don't even look much at performance anymore) and at the battery life, which is why I like NBC reviews, since they have the best system I've seen for that. So I can just look at the system as a whole, the CPU and its implementation (i.e. cooling/drivers/OEM throttling/etc.), and see how it performs and how long it lasts. The problem is that today I can at least shortlist certain systems (i5/i7/R5/R7, AMD > Intel and Ice Lake > Comet Lake) and ignore others (i3/R3), but with Intel muddying the waters (even more), the concern is that even doing that will become problematic, as there will essentially be hidden steps within each tier: instead of an i5 at 15 W and an i5 at 25 W, it will just be i5 and i5, with no discernible difference without doing some digging, which most people can't or won't do. Of course, as I said, you can still just look at reviews to see how the performance and battery life are and compare on that basis, which is, IMO, the best way to do it anyway, but that doesn't make the mess they're creating any less bad.
Posted by _MT_
 - October 07, 2020, 20:14:39
Quote from: vertigo on October 06, 2020, 19:49:09
All true, but this statement isn't entirely fair. Yes, boost isn't meant to run 100% of the time; otherwise it would just be the base frequency...
Yes, as I said, I also want to see what a laptop can do. The thing is, what the author is complaining about was intentional. And it's a good thing. Boost really does two things: it takes advantage of thermal capacity (it takes time for a "cold" cooler to heat up) and of any cooling headroom. So if you happen to have a better cooler, it makes use of it. Another way of looking at it is that a manufacturer is rewarded for putting in a better cooling solution. The fact that they often skimp on cooling says more about what they think customers want. Yes, it means that less capable designs won't perform as well. And we want to know who does a good job and who doesn't. But we shouldn't blame Intel for this.

No, the boost really is a boost. Mobile CPUs have historically had relatively low frequencies. Frequency has a big impact on power consumption. It's a double whammy: one, more switching is going on (each switch takes energy), and two, you need higher voltage for stable operation (which wrecks efficiency). As a rule of thumb, as you go above 2 GHz you start losing efficiency. And it's not linear; it should start getting pretty bad above 3 GHz. This is a big generalization, but it's really not a good idea to run a mobile chip above 3 GHz. The same is true for servers, where efficiency can also matter a lot.

And if you were to declare turbo the base, suddenly you don't have a 45 W processor. Right. It's a 65 W processor, or an 80 W processor, or whatever. And you end up draining the battery in an hour at base frequency. For a mobile device, energy efficiency is a top priority; it makes no sense to consider the boost as the base. Unlike binning, where the highest bin is the top-quality product and we can debate what is a product and what is a byproduct (are you getting a discount, or are you paying extra? it doesn't really matter). It's the nature of manufacturing that you're going to see a distribution in properties, and you need to deal with it somehow. Selling several grades is one option. Another is scrapping the lower grades. Another is not monetizing the higher grades and letting people play a game of chance with some really nice prizes.
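
A quick back-of-the-envelope check on that "battery in an hour" point; the battery capacity is an assumed round number, and the figures ignore the display and the rest of the system:

# Hypothetical battery runtime if the CPU package alone sustained a given
# power. 60 Wh is an assumed capacity, not any specific laptop.
battery_wh = 60.0

for label, watts in [("15 W ULV base", 15.0),
                     ("45 W H-series base", 45.0),
                     ("65 W 'boost declared as base'", 65.0)]:
    print(f"{label}: ~{battery_wh / watts:.1f} h")
# -> 4.0 h, 1.3 h, 0.9 h: treating boost power as the sustained base
#    really would empty the battery in about an hour.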
Posted by vertigo
 - October 06, 2020, 19:49:09
Quote from: _MT_ on October 06, 2020, 15:52:20
You see, you're part of the problem. You're not testing at base frequencies, are you? I get it. I also want to see what a laptop can do. But then you shouldn't complain about things being messy when you contribute to the mess.

All true, but this statement isn't entirely fair. Yes, boost isn't meant to run 100% of the time; otherwise it would just be the base frequency. However, the better the cooling (OEM) and the better the efficiency of the CPU (manufacturer), the longer the chip will be able to sustain the boost and the more work it will do within a given amount of time, therefore making more efficient use of the boost (i.e. needing less boost time to finish a job, thereby allowing it to throttle back sooner). So boost testing should absolutely be done and weighed accordingly, because even though it is just a boost, it contributes fairly significantly to overall performance, as well as giving an indication of just how good or bad a job the OEM did designing the laptop. It certainly wouldn't make sense to disable boost for testing, as that's not real-world and not how these chips are designed to work.

Similar to your explanation of binning, another way to look at this is not as base with turbo, but as the turbo being the base and the base being a throttle, which is exactly what it is (and it wouldn't surprise me if the base/turbo terminology came about by marketing deciding that sounded better than throttling). So instead of looking at it as a chip cranking up temporarily, you can look at it as a chip throttling down a lot, either because it can't sustain its speed constantly or for power savings and reduced heat when full speed isn't needed.

Sustained performance, whether 98% of the time at base clock and only 2% boost, or with a better design that achieves 10% boost time, is important. All else being equal, I'd take the latter laptop, since it's going to be faster and cooler, and the fans will likely run less thanks to its superior cooling (and in the flipped scenario, it can maintain its speed longer before throttling). Of course, if all things appear equal but aren't (the "same" CPU based on SKU but actually different), then it's more difficult to make sense of it all. Granted, not overly so, and simply reading some reviews would let a consumer know just what they're getting, but this article is dealing with the confusion it will cause in the retail world, among uneducated consumers.
Posted by _MT_
 - October 06, 2020, 15:52:20
What on Earth are you talking about? Configurable TDP has been around for ages. Kaby Lake had it, for example: the 7600U was 15 W as standard and could be configured down to 7.5 W or up to 25 W.

Binning has been around for even longer. When they offer three processors with the same core count and the same TDP, just different clocks, you can bet they're different bins of the same silicon. The higher bin is what actually allows them to push higher clocks at the same voltage. Another way of looking at it is that the cheaper versions are defective and don't make the cut for the top-of-the-line model. Rather than throwing them away, they sell them for less as a lower model. Lower yields and diminishing returns in performance (= few people willing to pay) are the reasons why such models are so expensive.
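
As a toy sketch of how such binning might work, dies graded by the minimum voltage each needs to hold a target clock; the thresholds, spread, and model labels are entirely invented:

import random

# Hypothetical grading: the lower the voltage a die needs at the target
# frequency, the higher the bin it can be sold as.
BINS = [(0.80, "top model (highest clocks)"),
        (0.90, "mid model"),
        (1.00, "base model")]

def grade(vmin):
    for threshold, sku in BINS:
        if vmin <= threshold:
            return sku
    return "reject (scrap, or a future cut-down part)"

random.seed(1)
dies = [random.gauss(0.90, 0.07) for _ in range(8)]  # manufacturing spread
for vmin in dies:
    print(f"die needs {vmin:.3f} V -> {grade(vmin)}")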

The problem isn't configurable TDP. The problem lies in throttling. In the long-gone past, if you failed to cool a processor, it would go up in flames. It would get destroyed, which is inconvenient and expensive. So came thermal protection: now, if you failed to cool your processor (perhaps you neglected maintenance for years and a very warm summer came), your computer would simply shut off to protect itself from destruction. But you still lost any unsaved work, which sucks. In the meantime, processors learned to tweak their frequency as a power-saving measure, which led to thermal throttling. Now, when you fail to cool your processor, it lowers its frequency in an attempt to arrest the rise in temperature. And this is what allows manufacturers to skimp on cooling: the laptop continues to work, and everything appears fine on the surface. They wouldn't be able to do this if the laptops were shutting down or catching fire. As long as processors can throttle, manufacturers can play it loose with cooling.

Also, performance testing these days is all about boost clocks rather than base clocks. Boost clocks are not necessarily meant to be sustainable. They're meant to take advantage of free cooling capacity: it takes time for a cooler to heat up, so for a short period (or a long period in the case of water coolers) you can crank it up. And the reality is that many consumer loads are bursty in nature, so you get a meaningful speed-up. It's a smart use of (cooling) resources (it's not efficient, which is why a manufacturer might want to curtail it; they may also prefer quieter operation). You just shouldn't forget what it is: a boost. That's why TDP is specified at base frequency.
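
And a minimal simulation of that "cooler takes time to heat up" effect; every constant (thermal mass, conductance, power levels, temperature limit) is an invented illustration, not real hardware data:

# Toy thermal model: the cooler's thermal mass soaks up heat at first, so
# the chip can hold boost power until it hits the temperature limit, then
# must drop to what the cooler can shed at steady state.
t_amb, t_max = 25.0, 95.0          # ambient / throttle point (deg C)
mass, k = 40.0, 0.5                # thermal mass (J/C), conductance (W/C)
boost_w = 50.0
sustained_w = k * (t_max - t_amb)  # 35 W: all this cooler can shed

temp, t, dt = t_amb, 0.0, 0.1
while temp < t_max:
    temp += (boost_w - k * (temp - t_amb)) * dt / mass
    t += dt
print(f"boost held ~{t:.0f} s before {t_max:.0f} C, then throttles to {sustained_w:.0f} W")

In this sketch, doubling the thermal mass roughly doubles the boost window, which is exactly the "reward for a better cooling solution" described above.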

You see, you're part of the problem. You're not testing at base frequencies, are you? I get it. I also want to see what a laptop can do. But then you shouldn't complain about things being messy when you contribute to the mess.
Posted by DGG
 - October 05, 2020, 19:51:15
Yeah, it seems the new boost in base clock is a trick, based on the 28 W TDP. At 15 W it's still 1800 MHz. I guess Intel is fighting for its life and the marketing department has been working overtime. AnandTech did an article on the new Tiger Lake, and at 15 W the base frequency is the same as Ice Lake; SuperFin seems to be a hoax. Xe looks good at 28 W, but at 15 W it's no better than AMD... only so much you can do at 10 nm with 4 cores. I am sure Intel fans will eat this up though. I have a 4700U and can rest easy that its 7 nm is ahead of its time.
Posted by vertigo
 - October 05, 2020, 19:38:59
Quote from: Theo on October 05, 2020, 19:28:13
but I'm really hoping Evo will fix both: only laptops with higher-TDP CPUs, excellent cooling, and solid power delivery should get that branding.

That seems unlikely, since Evo is reserved for laptops with "good" (really just above-average) battery life and fast recharging, which means it will probably end up attached primarily to lower-power CPUs. I also have my doubts that it will accomplish much at all, especially since they seem to have set the bar fairly low. It seems to me more like something they created so they could throw another sticker on laptops to make them look cooler and more advanced, just another thing that AMD systems don't have.

And Evo, by the way, is also the name of Samsung's SSD line, which I only mention because ever since I saw Intel using Evo for their new certification, I've thought it was pretty ridiculous of them to use the same nomenclature. At best, it's a bit of a rip-off and unimaginative; at worst, it may cause confusion among people who are familiar with Samsung drives but not Intel's usage.
Posted by Theo
 - October 05, 2020, 19:28:13
Not fixed at maximum! It just puts an arbitrary limit; you can still overclock or undervolt, etc., on a properly designed laptop (in terms of power delivery and cooling).
The TDP / UP3-UP4 designation should be part of the CPU model name. But also, cooling capacity should be listed in the laptop specs!
I doubt that will happen, but I'm really hoping Evo will fix both: only laptops with higher-TDP CPUs, excellent cooling, and solid power delivery should get that branding.
Posted by vertigo
 - October 05, 2020, 19:21:36
Quote from: Veyron on October 05, 2020, 11:33:52
Quote: "While Intel's Tiger Lake lineup has certainly lived up to its performance promises"

Lost me there.

Haha, my thoughts exactly. So far Tiger Lake hasn't been very impressive, and NBR just posted an article shortly before this one discussing the fact that, despite its benchmark performance, its real-life performance is severely lacking.

Quote from: Anonymous on October 05, 2020, 11:46:01
The fact is, most casual customers won't bother about it, or are too ignorant of it. They just come in, see what's the latest, and check out. They never compare or bother to read lengthy reviews. Do they regret the purchase afterwards? Perhaps, but most likely they won't do anything about it; most of the rest might just resell it and get a new one, or switch to something more comfortable that most of their colleagues know, or buy an Apple.

Yup. Sadly, most people do little to no research before a purchase like this. Hell, many, if not most, people don't even do much research before buying a car or even a house. And companies know and rely on that.

Quote from: Mark S. on October 05, 2020, 12:55:15
Quote from: Veyron on October 05, 2020, 11:33:52
Quote: "While Intel's Tiger Lake lineup has certainly lived up to its performance promises"

Lost me there.
The first graph contradicts the first sentence of the article :)


To be fair, the reference designs in the graph use a different CPU than the OEM laptops, so it doesn't necessarily contradict the first sentence (other articles and benchmarks do that). But it does raise the questions of why a graph was made comparing one CPU in reference computers to another CPU in OEM computers, and why whoever made it thought it was a good idea to use the same color for three different lines.

Quote from: Spunjji on October 05, 2020, 15:41:43
I'm fairly glad to finally see this overdue article, but I have some criticisms / things I wish you'd mentioned:

1) You really could have hammered home how Intel's preview device doesn't represent performance in real devices...

2) You didn't mention anything about the performance differences between Intel's iGPU in different devices - or how it mostly fails to catch up with Renoir's Vega 8 in actual games...

3) You do a bit of "both sides" equivocation which isn't entirely warranted...

4) Maybe a shout-out for Nvidia somewhere?...

1) Yup. They even posted an article getting all excited about the performance when the first benchmarks were released, completely ignoring that fact; then, after I pointed it out (probably not because of it, but a review site shouldn't be beaten to the punch on something so important by a reader), they posted another article mentioning the issue.

2) This is especially odd since they very recently published an article discussing exactly this, so they are clearly aware of it, yet they choose to ignore it and continue proclaiming TL a great step forward.

3) To be fair, AMD's mobile naming scheme is pretty screwed up too, being out of step with desktop and therefore causing confusion.

And it must be kept in mind that, as your first point shows, it's not just the manufacturers but also the review sites that are complicit in misleading consumers. Review sites are supposed to be, you know, reviewing products: providing unbiased, objective, analytical critiques to help consumers make purchase decisions, and that is not what they're doing when they act the way NBR and others have regarding TL. I do very much appreciate articles like this one, and the earlier one explaining why the reference-design benchmarks are pretty much meaningless, and I wish they would do more articles calling out the industry on these ridiculous practices. I also wish they would consider things more carefully so they aren't going back and forth all the time (TL is amazing > well, not necessarily > TL is amazing > well, actually not > TL is amazing...).
Posted by xpclient
 - October 05, 2020, 16:06:39
What I am worried more about is cooling. Some laptop OEMs don't really seem to understand the importance of proper cooling: they used to put 45 W CPUs in thin laptops, capped at 35 W but with just a single fan.

Now this problem may happen a lot more: with the variable TDP, many OEMs will be lazy about redesigning their chassis but still set the Tiger Lake CPU to a 28 W TDP. I suspect a lot more laptops are going to run hot and throttled.
Posted by Spunjji
 - October 05, 2020, 15:41:43
I'm fairly glad to finally see this overdue article, but I have some criticisms / things I wish you'd mentioned:

1) You really could have hammered home how Intel's preview device doesn't represent performance in real devices. That's an epic bait-and-switch that they pulled on the public, and by getting your team (among others) to review it, they made you complicit.

2) You didn't mention anything about the performance differences between Intel's iGPU in different devices - or how it mostly fails to catch up with Renoir's Vega 8 in actual games.

3) You do a bit of "both sides" equivocation which isn't entirely warranted. Not only are AMD's model names pretty straightforward, there's no way to buy a "Vega 8" or "Vega 7" outside the context of the chips they're attached to - and nobody's going to have much trouble figuring out that the 4000 series are faster than the 3000 series. It's also nonsense to condemn their products for being based on the same chip when that's true for all of Intel's products, too.

4) Maybe a shout-out for Nvidia somewhere? They're literally the worst when it comes to this kind of behaviour.

All that aside, it's good to see this problem getting more attention. It's absurd that AMD, Nvidia and Intel are allowing OEMs to get away with selling high-rated products at performance levels beneath that of their own cheaper products.
Posted by gx
 - October 05, 2020, 14:43:42
Quote: "At the higher end, the Renoir Ryzen 7 4700U, Ryzen 7 4800U, and Ryzen 9 4900U are all the exact same chip, just binned differently: would consumers willingly buy the Ryzen 9 4900H knowing it was just a higher-clocked variant of the 4700H?"

Man, nice article. Please correct those part numbers; they are confusing enough as it is, don't add to the confusion.

The 4900U and the 4700H aren't actual parts.

Plus, the 4700U is 8C/8T while the 4800U is 8C/16T, so they are not the same chip binned differently.

The article should probably read:

Quote: "At the higher end, the Renoir Ryzen 7 4800U, Ryzen 7 4800H(S), and Ryzen 9 4900H(S) are all the exact same chip, just binned differently: would consumers willingly buy the Ryzen 9 4900H(S) knowing it was just a higher-clocked variant of the 4800H(S)?"