Topic summary

Posted by Ishraqiyun
 - June 24, 2020, 23:06:40
I have an early-2019 Razer Blade Advanced (i7-8750H w/ RTX 2080 MQ, with 64GB and 4TB NVMe upgrades) that was about $3000 with the upgrades I put in it. I dual boot Pop!_OS for work and Windows 10 for DAWs and gaming (WSL2 is unusable for me, otherwise I wouldn't dual boot). There are numerous things about Linux I don't like that I won't get into, so I always consider getting a Mac because I know it would be flawless for work and a DAW. Gaming, of course, not so much. It's a Mac.

When they announced they were shipping with the 5600M, and that it was going to be a significant gain over the 5500M, I bought a MacBook Pro with the i9, 32GB RAM, 2TB, and the Pro 5600M upgrades last week just to see how it handled gaming, knowing that Apple has a great return policy, so if I didn't like it I could return it, no questions asked, for a full refund. $4499.

Got my work environment set up. Works great. Got my DAW environment set up. Works great. Boot Camp into Windows 10. Install 3DMark and run some Fire Strike 1080p tests, because that is the benchmark I'm most familiar with scoring-wise.

I know I'm going to take a hit in gaming performance going from an RTX 2080 MQ to a Pro 5600M with HBM2. That is a given. I'm willing to make that sacrifice if my work and DAW environments don't have the headaches that Windows and Linux have. My Razer averages around 17,500 in Fire Strike. The 5600M consistently hits right around 14,100, which does put it around an RTX 2060 MQ.
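A quick back-of-the-envelope check on those averages (a rough Python sketch using just the two figures above, so the ratio is only as good as my runs):

# Rough comparison of the Fire Strike averages quoted above
rtx_2080_mq = 17_500   # Razer Blade average
pro_5600m = 14_100     # MacBook Pro 16 average
deficit = 1 - pro_5600m / rtx_2080_mq
print(f"Pro 5600M scores about {deficit:.0%} lower")   # ~19% lower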

I'm totally fine with that. That is a compromise I can live with. Again, it isn't going to compete with the RTX 2080 MQ I'm used to, but it will still be a decent gaming experience, with the benefit of a better work and DAW environment.

Install the few games I play regularly: GTA V, FFXIV, and PUBG.

Just awful, and the thing sounds like it is going to explode. Frame rates are only maybe 35% of what I'm used to. The actual in-game performance I would put at or under a GTX 1050 Ti MQ, like my wife's HP Spectre x360 (i7-8705G) has, which I used to game on before giving it to her. Putting everything on the lowest settings doesn't even seem to improve the FPS, especially in PUBG. There is significant thermal throttling at play, so it's probably not completely the fault of the 5600M. It's just a Mac.

Returned it. I'll deal with the headaches and annoyances of Linux and Windows. But if I can't game on it at all, I definitely can't justify spending $4499 on it. I'd get a MacBook Pro 13 or an Air if all I cared about was the work and DAW environments, and I'm not carrying around two laptops. Thus, I'll just stick with the Linux and Windows dual boot, and hope someday Microsoft actually gets WSL2 working properly so I can just use one OS.
Posted by Valantar
 - June 24, 2020, 22:32:47
Quote from: _MT_ on June 24, 2020, 08:02:31
Quote from: Valantar on June 23, 2020, 19:16:22
I never said it was cheap, just that it's a dramatic reversal. After all, AMD used HBM2 on the Vega cards (including the MacBook versions), and never came close to competing with Nvidia on efficiency. Now they're suddenly miles ahead. The node advantage of course makes a difference, but RDNA is also showing just how efficient it can be here.
And I'm saying that it's probably no more efficient than other Navi 10 cards, save for the HBM2 (not sure how it compares to GDDR6) and perhaps binning. It's more efficient because it's slow (as in low frequency, not performance). And that's normal. The problem is that most people, I would say, would pick higher frequency with higher consumption and lower efficiency for silicon of the same cost. It's like throttling, except you're not led to believe you're going to get the full performance. As far as engineering goes, I don't see a revolution. I see Apple willing to pay for a relatively expensive piece of silicon - because of the memory and the underutilization. This chip has a 1 GHz base clock and practically no boost. The option was always there. And there is a reason you don't see it taken often. Who knows, maybe others will get inspired and follow Apple, but I wouldn't hold my breath.

The only thing baffling me is the vanilla 5600M. It has a fairly low frequency and yet a pretty high TDP. The RX 5600M has 36 CUs clocked at 1000/1265 with a 150 W TDP. The Pro 5500M has 24 CUs clocked at 1000/1300 with a 50 W TDP. What? Very similar frequencies, 50 % more CUs, and three times the power? That TDP surely has to be wrong. Or that silicon must be garbage. It would make an interesting comparison with the 5700M for efficiency.
Where are you getting 150W for the RX 5600M from? NBC says it's configurable from 60W upwards.

As for the rest: I know this. The point here is exactly that it proves that, with a good implementation, AMD can now have a significant efficiency advantage. Vega cards were reasonably efficient if downclocked too, but still didn't even match the perf/W of the GTX 1000 series. It's obvious that this is partly due to Apple being willing to pay for a wide and slow GPU, and partly due to HBM, but it still goes to show that AMD's current perceived efficiency disadvantage (which isn't actually real; the 5600 XT matches any mid-range and up Nvidia GPU on efficiency, the 5700 is even better, and it's just the 5700 XT that's pushed a bit too far) is mainly due to them pushing clocks higher than what is ideal for their chips. If AMD budgeted for 10% larger die area and 10% lower clocks, they would be beating Nvidia's performance at significantly lower power - but they're making some poor strategic moves and insisting on pushing the clocks of their GPUs too far. It maximizes the performance of the chip, especially in reviews at stock clocks, but it also maintains the image of AMD cards using too much power and having no OC headroom, which hurts them long term.
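To make the "wider and slower" argument concrete, here is a rough normalized sketch. It leans on the usual approximations that dynamic power scales with f*V^2 and that voltage scales roughly linearly with frequency (so power goes roughly with f^3), while throughput scales with CUs * f. The 10% figures mirror the hypothetical above and are illustrative, not measured:

# Hypothetical "wide and slow" trade-off: +10% CUs, -10% clocks
# Assumptions: performance ~ CUs * f, dynamic power ~ CUs * f**3
# (voltage treated as roughly proportional to frequency)
base_cus, base_freq = 1.00, 1.00      # normalized baseline
wide_cus, wide_freq = 1.10, 0.90      # 10% wider, 10% slower
perf = (wide_cus * wide_freq) / (base_cus * base_freq)
power = (wide_cus * wide_freq**3) / (base_cus * base_freq**3)
print(f"performance: {perf:.2f}x, power: {power:.2f}x")
# -> performance: 0.99x, power: 0.80x - near-identical speed for ~20% less power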

Which is why I like this, as it unequivocally demonstrates that if AMD made better implementation choices, they could beat Nvidia in efficiency.
Posted by _MT_
 - June 24, 2020, 08:02:31
Quote from: Valantar on June 23, 2020, 19:16:22
I never said it was cheap, just that it's a dramatic reversal. After all, AMD used HBM2 on the Vega cards (including the MacBook versions), and never came close to competing with Nvidia on efficiency. Now they're suddenly miles ahead. The node advantage of course makes a difference, but RDNA is also showing just how efficient it can be here.
And I'm saying that it's probably no more efficient than other Navi 10 cards, save for the HBM2 (not sure how it compares to GDDR6) and perhaps binning. It's more efficient because it's slow (as in low frequency, not performance). And that's normal. The problem is that most people, I would say, would pick higher frequency with higher consumption and lower efficiency for silicon of the same cost. It's like throttling, except you're not led to believe you're going to get the full performance. As far as engineering goes, I don't see a revolution. I see Apple willing to pay for a relatively expensive piece of silicon - because of the memory and the underutilization. This chip has a 1 GHz base clock and practically no boost. The option was always there. And there is a reason you don't see it taken often. Who knows, maybe others will get inspired and follow Apple, but I wouldn't hold my breath.

The only thing baffling me is the vanilla 5600M. It has a fairly low frequency and yet a pretty high TDP. The RX 5600M has 36 CUs clocked at 1000/1265 with a 150 W TDP. The Pro 5500M has 24 CUs clocked at 1000/1300 with a 50 W TDP. What? Very similar frequencies, 50 % more CUs, and three times the power? That TDP surely has to be wrong. Or that silicon must be garbage. It would make an interesting comparison with the 5700M for efficiency.
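For what it's worth, a naive scaling check of those quoted figures (assuming power scales roughly linearly with CU count at matched clocks - a simplification that ignores voltage, binning, and memory, but enough to show why 150 W looks suspect next to the Pro 5500M's 50 W):

# Naive linear CU-scaling estimate from the Pro 5500M figures above
pro_5500m_cus, pro_5500m_tdp = 24, 50              # 50 W at ~1000/1300 MHz
rx_5600m_cus = 36
estimated_tdp = pro_5500m_tdp * rx_5600m_cus / pro_5500m_cus
print(f"expected ~{estimated_tdp:.0f} W, quoted 150 W")   # expected ~75 W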
Posted by Valantar
 - June 23, 2020, 19:16:22
Quote from: _MT_ on June 23, 2020, 15:14:07
Quote from: Valantar on June 23, 2020, 11:59:01
Got a link or two? Also, the point here isn't the value proposition (it never is with Apple), but rather the fact that AMD suddenly seems to have a significant perf/W advantage, which is a dramatic reversal of the state of affairs for the past decade. Realistically, this could mean similarly efficient GPUs in much more affordable laptops (though that would likely not mean HBM2 memory due to its price). It is definitely promising with regards to competition in the gaming/high performance laptop space in the future.
The problem is that it's not a cheap way of improving efficiency. It looks more like a 5700 XT with reduced frequency (= better efficiency; the boost was cut almost in half) than a 5600M. Apple probably had little choice, as the thermal envelope was what it was. It's just a question of whether you can convince enough people to pay enough money.

They might have chosen HBM2 for packaging reasons, to get the bandwidth they wanted. It has more than double the bandwidth of the Pro 5500M (which is not the same as the RX 5500M). Gaming probably wasn't the target market. More like video editing, where GPU acceleration can make a lot of difference. Given how good macOS and FCP are at video editing, it might be a pretty good mobile editing rig. And to those people, worth the money. According to the specifications, it should offer about 32 % more performance. For $600 (8 GB vs. 8 GB). If the difference is significantly larger, then my guess would be that the workload is bandwidth-limited on the 5500M.
I never said it was cheap, just that it's a dramatic reversal. After all, AMD used HBM2 on the Vega cards (including the MacBook versions), and never came close to competing with Nvidia on efficiency. Now they're suddenly miles ahead. The node advantage of course makes a difference, but RDNA is also showing just how efficient it can be here.
Posted by _MT_
 - June 23, 2020, 15:14:07
Quote from: Valantar on June 23, 2020, 11:59:01
Got a link or two? Also, the point here isn't the value proposition (it never is with Apple), but rather the fact that AMD suddenly seems to have a significant perf/W advantage, which is a dramatic reversal of the state of affairs for the past decade. Realistically, this could mean similarly efficient GPUs in much more affordable laptops (though that would likely not mean HBM2 memory due to its price). It is definitely promising with regards to competition in the gaming/high performance laptop space in the future.
The problem is that it's not a cheap way of improving efficiency. It looks more like a 5700 XT with reduced frequency (= better efficiency; the boost was cut almost in half) than a 5600M. Apple probably had little choice, as the thermal envelope was what it was. It's just a question of whether you can convince enough people to pay enough money.

They might have chosen HBM2 for packaging reasons, to get the bandwidth they wanted. It has more than double the bandwidth of the Pro 5500M (which is not the same as the RX 5500M). Gaming probably wasn't the target market. More like video editing, where GPU acceleration can make a lot of difference. Given how good macOS and FCP are at video editing, it might be a pretty good mobile editing rig. And to those people, worth the money. According to the specifications, it should offer about 32 % more performance. For $600 (8 GB vs. 8 GB). If the difference is significantly larger, then my guess would be that the workload is bandwidth-limited on the 5500M.
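That ~32 % figure can be checked against the paper specs. A rough sketch, assuming the commonly quoted boost clocks (~1035 MHz for the Pro 5600M, ~1300 MHz for the Pro 5500M - the clocks are an assumption, not from this thread) and RDNA's 64 shaders per CU at 2 FLOPs per clock:

# FP32 throughput from paper specs (boost clocks are assumed values)
def tflops(cus, mhz):
    return cus * 64 * 2 * mhz / 1e6    # shaders/CU * FLOPs/clock * clock

pro_5600m = tflops(40, 1035)   # ~5.3 TFLOPS
pro_5500m = tflops(24, 1300)   # ~4.0 TFLOPS
print(f"{pro_5600m / pro_5500m - 1:.0%} more on paper")   # ~33%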
Posted by Valantar
 - June 23, 2020, 12:38:30
Quote from: Padmakara on June 23, 2020, 12:25:27
Quote from: RSS on June 23, 2020, 11:42:16
There are other credible channels showing the 5600M is slower than the 2060 Max-Q and on par with the 1660 Ti, sometimes a little faster. We can say the MacBook Pro 16 can be an entry-level gaming laptop for 3500 bucks, if you want...
This GPU is specially designed for Apple; it has 40 CUs, not the 36 CUs you see in the Dell G5 SE laptop. It is more powerful, more energy efficient, and has HBM2 memory.
Yeah, that's true, if that is what the poster above was talking about. It's important not to confuse the previously released Radeon RX 5600M with the (for now, perhaps perpetually) Apple-only Radeon Pro 5600M. One has 36 CUs and GDDR6, the other has 40 CUs and HBM2. They are built from different silicon, and coupled with the different memory technology, that means the performance of one can't be assumed to equal that of the other.
Posted by Alexander Sommer
 - June 23, 2020, 12:28:13
Who cares? Who will buy an Intel-based Mac nowadays, knowing that in 1-2 years ARM will be dominant on the Mac?
Posted by Padmakara
 - June 23, 2020, 12:25:27
Quote from: RSS on June 23, 2020, 11:42:16
There are other credible channels showing the 5600M is slower than the 2060 Max-Q and on par with the 1660 Ti, sometimes a little faster. We can say the MacBook Pro 16 can be an entry-level gaming laptop for 3500 bucks, if you want...
This GPU is specially designed for Apple; it has 40 CUs, not the 36 CUs you see in the Dell G5 SE laptop. It is more powerful, more energy efficient, and has HBM2 memory.
Posted by Valantar
 - June 23, 2020, 11:59:01
Quote from: Denniss on June 23, 2020, 10:00:21
That's BS. Actually, the 5600M throttles in Boot Camp after 20-30 minutes, so it's not worth getting. (Actually, all cards in the new MacBook Pro 16 throttle, no matter what.) And yes, I do have the new MacBook Pro 16.
Question: does it throttle due to actually consuming more power than its 50W budget up until then, or does it throttle due to the MBP's cooling system not being able to handle sustained CPU + 50W GPU loads? If it is the former, that makes these results less impressive, but if it is the latter, all that does is show that Apple's design fails to make the most out of this chip.

Quote from: RSS on June 23, 2020, 11:42:16
There are other credible channels showing the 5600M is slower than the 2060 Max-Q and on par with the 1660 Ti, sometimes a little faster. We can say the MacBook Pro 16 can be an entry-level gaming laptop for 3500 bucks, if you want...
Got a link or two? Also, the point here isn't the value proposition (it never is with Apple), but rather the fact that AMD suddenly seems to have a significant perf/W advantage, which is a dramatic reversal of the state of affairs for the past decade. Realistically, this could mean similarly efficient GPUs in much more affordable laptops (though that would likely not mean HBM2 memory due to its price). It is definitely promising with regards to competition in the gaming/high performance laptop space in the future.
Posted by RSS
 - June 23, 2020, 11:42:16
There are other credible channels showing the 5600M is slower than the 2060 Max-Q and on par with the 1660 Ti, sometimes a little faster. We can say the MacBook Pro 16 can be an entry-level gaming laptop for 3500 bucks, if you want...
Posted by Tov
 - June 23, 2020, 11:14:13
We need 4900HS+RDNA2 laptop.
Posted by Denniss
 - June 23, 2020, 10:00:21
That's BS. Actually, the 5600M throttles in Boot Camp after 20-30 minutes, so it's not worth getting. (Actually, all cards in the new MacBook Pro 16 throttle, no matter what.) And yes, I do have the new MacBook Pro 16.
Posted by Ariliquin
 - June 23, 2020, 09:50:31
If only they had paired this with an AMD Renoir CPU, that would have been awesome.
Posted by Redaktion
 - June 23, 2020, 09:18:16
The AMD Radeon Pro 5600M looks to be a good choice for MacBook Pro owners who like to game, according to benchmarks performed by Max Tech. The GPU yielded performance similar to that of an RTX 2060 non-Max-Q laptop GPU in Fortnite and Call of Duty: Warzone, which makes it ideal for those who need power for both productivity and gaming, even if it means spending an additional US$700.

https://www.notebookcheck.net/50-W-AMD-Radeon-Pro-5600M-in-the-MacBook-Pro-16-offers-gaming-performance-equivalent-to-that-of-an-80-W-RTX-2060-non-Max-Q-laptop-GPU.476935.0.html