
Topic summary

Posted by A
 - February 27, 2020, 22:28:31
@Scott - It isn't irrelevant, since we are comparing the integrated GPU of a mid-range processor against a last-generation dedicated GPU. And it beats it by a solid 20-50%!

And there are still the 4800U and 4900U, which are even faster.
Posted by Scott
 - February 27, 2020, 17:03:14
The comparison between the Vega 7 iGPU and the MX250 is irrelevant due to the sole fact that the MX250 is based on older technology.
Posted by CrumpledSteelShin
 - February 26, 2020, 23:07:36
I just picked up an Asus TUF "Gaming" laptop for $499 with a Ryzen 3550H (Zen+ on 12 nm with integrated Vega graphics) that came with a discrete mobile GPU (RX 560X with 4 GB of GDDR5 VRAM), so that's a bit more performance for such a low-dollar deal.

So at least I can make use of Blender 3D 2.8/later and run Blender's Cycles rendering on the integrated Vega graphics and the RX 560X's discrete Polaris mobile GPU (used mostly for Cycles rendering).

But hopefully that Ryzen 7 4700U will have some 4000H-series brethren with 35+ W TDPs, because I'm never interested in thermal throttling on a laptop. Getting an H-series AMD APU laptop, though, mostly requires buying a gaming laptop, even if the laptop is not used that much for gaming.

I'm more interested in DDR5 memory support and PCIe 4.0 support on laptops, where there is no external motherboard chipset anyway and the laptop uses the PCIe lanes provided by the APU's/CPU's SoC/MCM. I also want USB4 support that includes a TB3 protocol/controller, since the USB-IF's USB4 standard also covers backward compatibility with all the USB standards and the USB Type-C electrical standard.

But laptops, with their already limited PCIe lane counts, need PCIe 4.0 more than desktops do, just to drive TB3 connectivity or USB 3.2 Gen 2 (10 Gbps) and Gen 2x2 (20 Gbps) bandwidth. I really want my next laptop to support external TB3 (USB4) GPU enclosures and beefier external GPUs for Blender 3D rendering or even light gaming workloads.
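The lane-count arithmetic behind that point can be sketched in a few lines. This is a rough illustration using nominal PCIe line rates and encoding overhead; real-world throughput is lower, and the four-lane comparison is an assumption for the example, not a statement about any specific laptop.

```python
# Rough per-lane PCIe throughput, illustrating why a lane-starved laptop
# benefits from PCIe 4.0 when feeding a Thunderbolt 3 or USB 3.2 controller.
PCIE_GEN = {
    # generation: (transfer rate in GT/s, encoding efficiency)
    3: (8.0, 128 / 130),   # Gen3 uses 128b/130b encoding
    4: (16.0, 128 / 130),  # Gen4 doubles the rate, same encoding
}

def lane_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Nominal usable bandwidth in Gbit/s for `lanes` lanes of a PCIe generation."""
    rate, efficiency = PCIE_GEN[gen]
    return rate * efficiency * lanes

TB3_GBPS = 40.0  # Thunderbolt 3 link rate

# Four Gen3 lanes fall short of a full TB3 link (~31.5 Gbit/s)...
print(lane_bandwidth_gbps(3, 4))
# ...while four Gen4 lanes cover it with headroom (~63 Gbit/s).
print(lane_bandwidth_gbps(4, 4))
```

With PCIe 4.0, the same TB3 bandwidth can be fed from half the lanes, which is exactly why lane-limited laptop boards benefit more than desktops.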

Even open-source software packages are starting to require newer hardware, and I'm having to retire my older Sandy Bridge and earlier Intel Core i-series laptops for any current-generation graphics software. Blender 3D's Cycles rendering really requires GCN 2nd-generation and later GPUs; I'm not sure about Nvidia's offerings, but that legacy GPU hardware is mostly cut off for lack of any new driver/graphics API feature-level support, and even Intel's Ivy Bridge integrated graphics may not work reliably under Blender 3D 2.8/later editions.

I'd also like to know more about Ryzen 4000's actual DX12/Vulkan (and even OpenGL/OpenCL) feature-level support, even if MS (DX12 and DXR) and Khronos (Vulkan) have to provide a non-dedicated hardware/software code path (shader kernel code) solution for any DXR/Vulkan ray-tracing acceleration via API extensions.

I know that Nvidia has added Vulkan API extensions for ray-tracing acceleration on Pascal/earlier GPUs, giving shader-core-based ray-tracing functionality, and that's probably the same for DX12/DXR as managed by MS. But I really don't need "real-time" ray tracing under Blender 3D, as that's mostly ray tracing calculated in software (OpenCL or non-RTX CUDA) over non-dedicated hardware/software code paths. AMD/MS/Khronos are doing the same with DX12/DXR and Vulkan API extensions for the GPUs without hardware ray tracing that AMD currently makes, and I'm waiting to read about RDNA 2 and hardware ray tracing on AMD's Navi 2/whatever offerings.
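The hardware-versus-shader-core split discussed above can be illustrated with a toy selection routine. The extension names below are real Vulkan extensions, but the decision logic is a simplified sketch, not Blender's or any driver's actual policy.

```python
# Toy sketch: choosing a ray-tracing path from a Vulkan device's advertised
# extension list. Devices exposing a ray-tracing extension can use dedicated
# or driver-accelerated paths; others fall back to ordinary compute shaders.
HW_RT_EXTENSIONS = {
    "VK_KHR_ray_tracing_pipeline",  # cross-vendor Khronos extension
    "VK_NV_ray_tracing",            # Nvidia's earlier vendor extension
}

def pick_rt_path(device_extensions: set) -> str:
    """Return 'hardware' if a ray-tracing extension is exposed,
    else 'compute-shader fallback' (rays traced on regular shader cores)."""
    if device_extensions & HW_RT_EXTENSIONS:
        return "hardware"
    return "compute-shader fallback"

# A GPU advertising the KHR ray-tracing pipeline extension...
print(pick_rt_path({"VK_KHR_swapchain", "VK_KHR_ray_tracing_pipeline"}))
# ...versus one without any ray-tracing extension.
print(pick_rt_path({"VK_KHR_swapchain"}))
```

In a real application the extension list would come from `vkEnumerateDeviceExtensionProperties`; the point here is only that the API exposes the same ray-tracing interface whether the driver backs it with dedicated units or with shader-core code.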
Posted by william blake
 - February 26, 2020, 20:39:30
Quote from: DF on February 26, 2020, 19:58:52
I'll reiterate.  Nvidia delaying Ampere mobile until the end of the year or even just 6 months (rumors vary) may well be handing a golden opportunity to AMD in the mobile space.  Just as AMD is doing in the high margin server space to Intel, this may well be the same to Nvidia as Intel missing the 10nm targets that would have given them better products to compete against AMD.  I do think this is good - the market will benefit from competitors being effective again.
amd cpus are superior to intel cpus. longer lineups, much faster at the top, better efficiency.
nvidia gpus are superior to amd gpus. longer lineups, much faster at the top, better efficiency.
totally opposite scenarios.
Posted by DF
 - February 26, 2020, 19:58:52
I'll reiterate.  Nvidia delaying Ampere mobile until the end of the year or even just 6 months (rumors vary) may well be handing a golden opportunity to AMD in the mobile space.  Just as AMD is doing in the high margin server space to Intel, this may well be the same to Nvidia as Intel missing the 10nm targets that would have given them better products to compete against AMD.  I do think this is good - the market will benefit from competitors being effective again.
Posted by william blake
 - February 26, 2020, 18:56:25
how about power consumption during the test run? 1 W cpu / 29 W gpu or something?
Posted by Name
 - February 26, 2020, 17:20:58
Now loop 3DMark for one hour and see the results. At 15 W, such results are only possible with short bursts.
Posted by Aastra
 - February 26, 2020, 16:54:07
This is clear proof that AMD can make efficient, powerful graphics cards. But they still refuse to do it in desktop or laptop graphics cards, limiting it to consoles and APUs. It seems they don't consider consumer graphics a top priority.
Posted by Andre668
 - February 26, 2020, 16:15:32
Quote from: Vladislav on February 26, 2020, 14:12:23
4700u is not quad-core.
🙃

yes, but 8 threads... the 4800u is a "real" 8-core with 16 threads, and also an 8-CU GPU. this is promising
Posted by A
 - February 26, 2020, 16:09:34
Now one can imagine the 4900u
Posted by onechannelbiach
 - February 26, 2020, 16:05:41
Real world results may be affected by RAM.
Posted by Vladislav
 - February 26, 2020, 14:12:23
4700u is not quad-core.
🙃
Posted by deksman2
 - February 26, 2020, 13:27:07
Quote from: Theo on February 26, 2020, 12:30:08
You mean "Thanks to its octa-core..." , right?

It depends on how much influence the CPU part has on the GPU score.
Usually the graphics portion of 3DMark stresses mostly the GPU, not so much the CPU... though at, say, 1080p the CPU would have more of an impact than at, say, 2K.

But one has to keep in mind that this setup goes up against Intel's mobile part with its own iGPU plus a dGPU from NV.

The only way to know for sure would be to also test the MX250 paired with the 4700U, but since that isn't available, we can only go by this data set (for now) and by independent testing once the hardware is out in the open.
Posted by Theo
 - February 26, 2020, 12:30:08
You mean "Thanks to its octa-core..." , right?
Posted by Redaktion
 - February 26, 2020, 12:10:17
3DMark 11 scores were recently leaked for the Ryzen 7 4700U. The 15W mobile part posted 5446 on the graphics test, substantially higher than Nvidia's discrete GeForce MX250. It also posted a physics score on par with a desktop system equipped with the 65W i7-6700.

https://www.notebookcheck.net/AMD-Ryzen-7-4700U-scores-leak-on-3DMark-15W-Renoir-with-Vega-7-iGPU-wrecks-25W-Nvidia-Geforce-MX350.454693.0.html