Quote from: _MT_ on June 24, 2020, 08:02:31
Where are you getting 150W for the RX 5600M from? NBC says configurable from 60W and upwards.

Quote from: Valantar on June 23, 2020, 19:16:22
And I'm saying that it's probably no more efficient than other Navi 10 cards save for HBM2 (not sure how it compares to GDDR6) and perhaps binning. It's more efficient because it's slow (as in low frequency, not performance). And that's normal. Problem is that most people, I would say, would pick higher frequency with higher consumption and lower efficiency for the same cost silicon. It's like throttling. Except you're not led to believe you're going to get the full performance. As far as engineering, I don't see a revolution. I see Apple willing to pay for a relatively expensive piece of silicon - because of the memory and underutilization. This chip has 1 GHz base clock and practically no boost. The option was always there. And there is a reason you don't see it taken often. Who knows, maybe others will get inspired and follow Apple but I wouldn't hold my breath.
I never said it was cheap, just that it's a dramatic reversal. After all, AMD used HBM2 on the Vega cards (including the MacBook versions), and never came close to competing with Nvidia on efficiency. Now they're suddenly miles ahead. The node advantage of course makes a difference, but RDNA is also showing just how efficient it can be here.
The only thing baffling me is the vanilla 5600M. It has a fairly low frequency and yet a pretty high TDP. The RX 5600M has 36 CUs clocked at 1000/1265 MHz with a 150 W TDP. The Pro 5500M has 24 CUs clocked at 1000/1300 MHz with a 50 W TDP. What? Very similar frequencies, 50% more CUs and three times the power? That TDP surely has to be wrong. Or that silicon must be garbage. It would make an interesting comparison with the 5700M for efficiency.
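For what it's worth, a back-of-envelope check bears out that suspicion. A minimal sketch, assuming Navi's 64 shaders per CU and 2 FP32 ops per clock (FMA), and taking the quoted boost clocks and TDPs at face value (the 150 W figure being the disputed one):

```python
# Rough perf/W comparison from the figures quoted above.
# Assumes Navi's 64 shaders per CU and 2 FP32 ops per clock (FMA).
SHADERS_PER_CU = 64
OPS_PER_CLOCK = 2

def peak_tflops(cus, boost_mhz):
    """Peak FP32 throughput in TFLOPS at the boost clock."""
    return cus * SHADERS_PER_CU * OPS_PER_CLOCK * boost_mhz / 1e6

rx_5600m  = peak_tflops(36, 1265) / 150   # ~0.039 TFLOPS/W at the quoted 150 W
pro_5500m = peak_tflops(24, 1300) / 50    # ~0.080 TFLOPS/W at 50 W

print(f"RX 5600M:  {rx_5600m:.3f} TFLOPS/W")
print(f"Pro 5500M: {pro_5500m:.3f} TFLOPS/W")
```

At those numbers the Pro 5500M would come out roughly twice as efficient per watt from the same architecture, which is exactly why the 150 W TDP looks suspect.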
Quote from: _MT_ on June 23, 2020, 15:14:07
I never said it was cheap, just that it's a dramatic reversal. After all, AMD used HBM2 on the Vega cards (including the MacBook versions), and never came close to competing with Nvidia on efficiency. Now they're suddenly miles ahead. The node advantage of course makes a difference, but RDNA is also showing just how efficient it can be here.

Quote from: Valantar on June 23, 2020, 11:59:01
Problem is that it's not a cheap way of improving efficiency. It looks more like a 5700 XT with reduced frequency (= better efficiency; boost was cut almost in half) rather than a 5600M. Apple probably had little choice as the thermal envelope was what it was. It's just a question of whether you can convince enough people to pay enough money.
Got a link or two? Also, the point here isn't the value proposition (it never is with Apple), but rather the fact that AMD suddenly seems to have a significant perf/W advantage, which is a dramatic reversal of the state of affairs for the past decade. Realistically, this could mean similarly efficient GPUs in much more affordable laptops (though that would likely not mean HBM2 memory due to its price). It is definitely promising with regards to competition in the gaming/high performance laptop space in the future.
They might have chosen HBM2 for packaging reasons, to get the bandwidth they wanted. It has more than double the bandwidth of the Pro 5500M (which is not the same as the RX 5500M). Gaming probably wasn't the target market; more like video editing, where GPU acceleration can make a lot of difference. Given how good macOS and FCP are at video editing, it might be a pretty good mobile editing rig, and to those people, worth the money. According to the specifications, it should offer about 32% more performance, for $600 more (8 GB vs. 8 GB). If the difference is significantly larger, then my guess would be that the workload is bandwidth limited on the 5500M.
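That ~32% spec-sheet figure can be reproduced from raw shader throughput. A quick sketch, assuming the published boost clocks (1035 MHz for the Pro 5600M's 40 CUs, 1300 MHz for the Pro 5500M's 24 CUs) and Navi's 64 shaders per CU at 2 FP32 ops per clock:

```python
# Spec-sheet performance delta: Pro 5600M vs. Pro 5500M.
# Clocks are the published boost figures; memory bandwidth is
# deliberately ignored, so a bandwidth-limited workload could
# show a bigger gap than this.
SHADERS_PER_CU = 64
OPS_PER_CLOCK = 2  # an FP32 FMA counts as two ops

def peak_tflops(cus, boost_mhz):
    return cus * SHADERS_PER_CU * OPS_PER_CLOCK * boost_mhz / 1e6

pro_5600m = peak_tflops(40, 1035)  # ~5.3 TFLOPS
pro_5500m = peak_tflops(24, 1300)  # ~4.0 TFLOPS
uplift = pro_5600m / pro_5500m - 1
print(f"Uplift on paper: {uplift:.1%}")
```

On paper that lands at roughly a third more throughput, in line with the figure above; anything well beyond it in real workloads would point at the HBM2 bandwidth, as noted.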
Quote from: Padmakara on June 23, 2020, 12:25:27
Yeah, that's true, if that is what the poster above was talking about. It's important not to confuse the previously released Radeon RX 5600M with the (for now, perhaps perpetually) Apple-only Radeon Pro 5600M. One has 36 CUs and GDDR6, the other has 40 CUs and HBM2. They are built from different silicon, and coupled with the different memory technology, that means the performance of one can't be assumed to be equal to the performance of the other.

Quote from: RSS on June 23, 2020, 11:42:16
This GPU is specially designed for Apple; it has 40 CUs, not the 36 CUs you see in the Dell G5 SE laptop. It is more powerful, more energy efficient, and has HBM2 memory.
There are other credible channels showing the 5600M is slower than the 2060 Max-Q and on par with the 1660 Ti, sometimes a little faster. We can say the MacBook Pro 16 can be an entry-level gaming laptop for 3,500 bucks, if you want...
Quote from: Denniss on June 23, 2020, 10:00:21
Question: does it throttle due to actually consuming more power than its 50 W budget up until then, or does it throttle due to the MBP's cooling system not being able to handle sustained CPU + 50 W GPU loads? If it is the former, that makes these results less impressive, but if it is the latter, all that does is show that Apple's design fails to make the most out of this chip.
That's BS. Actually, the 5600M throttles in Boot Camp after 20-30 min, so it's not worth getting (actually, all cards in the new MacBook Pro 16 throttle, no matter what). And yes, I do have the new MacBook Pro 16.