Notebookcheck

Author Topic: Comparing the A14 Bionic to a 15-inch MacBook Pro's 6 core CPU is meaningless, and here's why  (Read 2302 times)

Redaktion

  • Editor
  • High End NB
  • ****
  • Posts: 85218
  • Karma: +46/-6
The move to 5 nm for the A14 Bionic could offer the next iPhones a huge step up in performance compared to their predecessors and their Android competitors. However, there is little comparability between it and a 15-inch MacBook Pro, as we shall discuss below.

https://www.notebookcheck.net/Comparing-the-A14-Bionic-to-a-15-inch-MacBook-Pro-s-6-core-CPU-is-meaningless-and-here-s-why.450436.0.html

S.Yu

  • High End NB
  • *****
  • Posts: 2371
  • Karma: +6/-0
In terms of absolute CPU performance, Apple really is surpassing most competing x86 models, even while working within the clock-speed restrictions of the latest processes. In terms of GPU performance, among integrated solutions they're also very competitive. Sustained performance simply depends on heat dissipation, so while the A14 could reach 10 W+ in bursts, that's not even enough to sustain base clocks on any decent laptop CPU; with laptop cooling on the A14, this becomes a non-issue.
The only remaining issue they need to crack before truly adopting ARM for laptops is compatibility. You can see that the Adobe suite currently performs well on the latest iPad Pro; it would only perform better on the A14.

CmdrEvil

  • Guest
All this simply shows that Intel and their archaic 14nm process are absolutely ancient. Compare Apple's CPUs to something like the Ryzen 3950X, which can achieve high performance and maintain it indefinitely, and you will see a different picture. AMD is only getting started; wait till you see properly optimised Renoir chips and next year's 5nm tech. Moore's law will once again get going.

Hiktaka

  • Guest
It's not that I'm going to argue against this type of article (it's useless), but people dissing Geekbench as 'incomparable between iOS and Windows' or 'it only tests very specific scenarios', yada yada, simply don't get that Apple's A-series chips really are faster than x86.

Helium007

  • Guest
Agreed, these articles make me insane!

One point not mentioned here or anywhere else is TDP, or thermals in general. I work a lot with simple ARM microcontrollers (only several MHz) and I know that most of them throttle their maximum speed if they don't have proper cooling (read: a big heatsink). A typical smartphone can dissipate about 2 watts of energy, and if the load is sustained, the phone will either get really hot or throttle to maintain an acceptable temperature (the second is more common).

So while this new Apple SoC might deliver the same peak performance (and I really doubt it), you would have to add at least a 10x10 cm copper heatsink to get this performance for more than a few microseconds. This is why a phone/tablet SoC will never reach the performance of a notebook with a good heatsink and fan...

LOL

  • Guest
We'll reach sub-1nm before the end of the decade. Can't wait!  ;)

Combined with solid state batteries, I predict the battery life of phones and wearables will reach legendary 30-day standby times of Nokia 2G candybar phones of the early 2000s.

Apple (TSMC) is always one step ahead of whatever Qualcomm puts out, so for the Android phones, 5nm Snapdragons should be in 2021-22.

S.Yu

  • High End NB
  • *****
  • Posts: 2371
  • Karma: +6/-0
A typical smartphone can dissipate about 2 watts of energy
Without going into any physics, that sounds too low from experience; 5-8 W is more like it, even by NBC's sustained-load numbers alone. It also scales with the temperature difference.

Solandri

  • Guest
With the 3000 mAh battery found in typical phones, @ 3.7 Volts, a power draw of 2 Watts would completely deplete it in 5.5 hours.  5 Watts would deplete it in 2.2 hours.  8 Watts would deplete it in 1.4 hours.  So I'd say the 2 Watt figure is pretty accurate.
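The arithmetic above is easy to sanity-check: battery energy in watt-hours divided by a constant power draw gives the runtime. A minimal sketch, using the capacity and voltage figures from the post:

```python
# Time to drain a battery at a constant power draw.
# Capacity (3000 mAh) and voltage (3.7 V) are the figures quoted above.
def hours_to_deplete(capacity_mah, voltage_v, draw_w):
    """Energy (Wh) divided by power (W) gives runtime in hours."""
    energy_wh = capacity_mah / 1000 * voltage_v
    return energy_wh / draw_w

for watts in (2, 5, 8):
    print(f"{watts} W -> {hours_to_deplete(3000, 3.7, watts):.2f} h")
# roughly 5.5 h, 2.2 h, and 1.4 h, matching the post
```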

The nm figures used by different manufacturers are not comparable to each other.  They are quite literally measuring different things when they deem a process to be x nm.
  • Intel's 14nm process yields 37.5M transistors per mm^2.
  • TSMC's 10nm process yields 52.5M transistors/mm^2.
  • Intel's 10nm process yields 101M transistors/mm^2.
  • TSMC's 7nm process yields 114M transistors/mm^2.
  • Samsung's 5nm process is 126.5M transistors/mm^2 (although this is memory, so somewhat more compact).
  • TSMC's 5nm process is projected to yield 171M transistors/mm^2.
So Intel's 10nm process is actually more comparable to TSMC's 7nm than to TSMC's 10nm.
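The density figures in that list make the mismatch easy to quantify; a quick sketch of the ratios (same numbers as above) shows why node names alone mislead:

```python
# Transistor density in millions of transistors per mm^2,
# as quoted in the list above.
density_mtr_mm2 = {
    "Intel 14nm": 37.5,
    "TSMC 10nm": 52.5,
    "Intel 10nm": 101.0,
    "TSMC 7nm": 114.0,
    "Samsung 5nm": 126.5,
    "TSMC 5nm (projected)": 171.0,
}

ratio_vs_tsmc7 = density_mtr_mm2["Intel 10nm"] / density_mtr_mm2["TSMC 7nm"]
ratio_vs_tsmc10 = density_mtr_mm2["Intel 10nm"] / density_mtr_mm2["TSMC 10nm"]
print(f"Intel 10nm vs TSMC 7nm:  {ratio_vs_tsmc7:.2f}x")   # ~0.89x, nearly equal
print(f"Intel 10nm vs TSMC 10nm: {ratio_vs_tsmc10:.2f}x")  # ~1.92x, almost double
```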

The problem with comparing Geekbench scores across different CPU platforms is that several of the things which comprise the benchmark have hardware assist on some CPUs, not on others.  This results in substantially higher scores on the CPUs with hardware acceleration.  But the reason general purpose CPUs are missing hardware assist for these functions is because the task is extremely rare (like AES encryption/decryption), and the CPU is very fast.  So it's not worth devoting the silicon to it, because in the rare instances where it's needed the CPU can complete it in a reasonable amount of time.  But in something like a SoC, the general purpose part of the CPU is so slow that AES would completely cripple performance.  So they have to add hardware AES encryption/decryption to make the SoC usable in the rare instances where it will be needed.

The benchmark does not weight these according to frequency of need as is done in the CPU design process.  It weights these rarely-used functions the same as commonly-used functions, resulting in the hardware acceleration in SoCs exaggerating their benchmark score compared to general purpose CPUs.  Whereas in real-world tasks, the general purpose CPUs dominate.
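The weighting argument above can be sketched numerically. All the subscores and weights below are invented purely for illustration (they are not Geekbench's actual subtests or weights): one hardware-accelerated but rarely-used subtest inflates an unweighted mean far more than a mean weighted by real-world frequency of use.

```python
# Illustration of the weighting argument with made-up numbers:
# hardware AES inflates one rarely-needed subscore, and an unweighted
# geometric mean lets that outlier lift the overall score.
from math import prod

# (subscore, hypothetical real-world weight) -- all values invented
subtests = {
    "integer": (1000, 0.45),
    "float":   (900,  0.35),
    "memory":  (950,  0.19),
    "AES":     (4000, 0.01),  # hardware-accelerated, but rarely used
}

def geomean(scores):
    """Unweighted geometric mean: every subtest counts equally."""
    return prod(scores) ** (1 / len(scores))

def weighted_geomean(pairs):
    """Geometric mean weighted by frequency of real-world use."""
    return prod(s ** w for s, w in pairs)

unweighted = geomean([s for s, _ in subtests.values()])
weighted = weighted_geomean(subtests.values())
print(f"unweighted: {unweighted:.0f}")  # the AES outlier pulls this up
print(f"weighted:   {weighted:.0f}")    # closer to everyday performance
```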

Helium007

  • Guest
A typical smartphone can dissipate about 2 watts of energy
Without going into any physics, that sounds too low from experience; 5-8 W is more like it, even by NBC's sustained-load numbers alone. It also scales with the temperature difference.

Well, I wasn't being pessimistic.  You can check it here (one of the few free online heatsink calculators):
celsiainc.com/resources/calculators/heat-sink-size-calculator/
For 5 W of thermal dissipation, a finned(!) aluminium heatsink at 25 °C measuring 15x8x2 mm comes to only about 50% of the required size/area. And I allowed a max case temperature of 95 °C, which is the usual maximum for commercial parts.

Also important is that phones use their midframe as a heatsink, and it is actually not a good heatsink material: typical AZ91D magnesium alloy has less than 50% of the thermal conductivity of the traditional heatsink aluminium used in laptop coolers.
Phones do not have any airflow, which makes any heatsinking a worst-case scenario. Another thing is that x86 CPUs have direct thermal contact from silicon to heatsink, so it's more effective. Phone SoCs have standard plastic epoxy packages, not optimized for heat sinking at all. This means that even if the chip could dissipate e.g. 5 W, the thermal resistance of the package will never allow it to shed that heat fast enough for longer than a few milliseconds, because the die will heat up past its thermal limit.

All this combined will never allow any chip in a mobile/tablet to compete with its desktop counterpart. Maybe the silicon in them could, but not without a special "desktop/laptop"-type package and a completely different active cooling solution.
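The package-resistance point can be sketched with a minimal lumped thermal model: steady-state die temperature is ambient plus power times total thermal resistance (junction-to-case plus case-to-ambient). All resistance values below are assumed round numbers for illustration, not measured figures for any real phone or laptop.

```python
# Minimal lumped thermal model: steady-state die temperature.
# All thermal resistances (in °C/W) are assumed illustrative values.
def die_temp_c(power_w, r_junction_case, r_case_ambient, ambient_c=25.0):
    """Ambient plus power times total junction-to-ambient resistance."""
    return ambient_c + power_w * (r_junction_case + r_case_ambient)

# Hypothetical phone: epoxy package, passive midframe -> high resistance
phone = die_temp_c(5.0, r_junction_case=8.0, r_case_ambient=12.0)
# Hypothetical laptop: bare die, heat pipe and fan -> low resistance
laptop = die_temp_c(5.0, r_junction_case=1.0, r_case_ambient=3.0)
print(f"phone die:  {phone:.0f} C")   # 125 C: well past a ~95 C limit
print(f"laptop die: {laptop:.0f} C")  # 45 C: sustainable indefinitely
```

Under these assumed numbers the same 5 W load is sustainable with laptop cooling but not through a phone package, which is the crux of the argument above.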

Daironhorse

  • Guest
We'll reach sub-1nm before the end of the decade. Can't wait!  ;)

Combined with solid state batteries, I predict the battery life of phones and wearables will reach legendary 30-day standby times of Nokia 2G candybar phones of the early 2000s.

Apple (TSMC) is always one step ahead of whatever Qualcomm puts out, so for the Android phones, 5nm Snapdragons should be in 2021-22.
It's not like Apple just happens to make their chips at the time of year when TSMC is done with their new node, and then Qualcomm uses it when they make their next chip. If anything, Qualcomm would get priority over Apple, because their flagship chips sell more than Apple's, not counting all their other mid- and low-end chips that Apple doesn't compete with.

S.Yu

  • High End NB
  • *****
  • Posts: 2371
  • Karma: +6/-0
With the 3000 mAh battery found in typical phones, @ 3.7 Volts, a power draw of 2 Watts would completely deplete it in 5.5 hours.  5 Watts would deplete it in 2.2 hours.  8 Watts would deplete it in 1.4 hours.  So I'd say the 2 Watt figure is pretty accurate.
No, instead it would make the 5 W figure fairly accurate. For example, the Mate 30 Pro, by NBC's numbers: 4500 mAh, 3.65 hrs under load, which works out to a sustained load of about 4.56 W. Only sustained load measures the heat dissipation between room temperature and the maximum temperature allowed for the SoC.

S.Yu

  • High End NB
  • *****
  • Posts: 2371
  • Karma: +6/-0
A typical smartphone can dissipate about 2 watts of energy
Without going into any physics, that sounds too low from experience; 5-8 W is more like it, even by NBC's sustained-load numbers alone. It also scales with the temperature difference.

Well, I wasn't being pessimistic.  You can check it here (one of the few free online heatsink calculators):
celsiainc.com/resources/calculators/heat-sink-size-calculator/
For 5 W of thermal dissipation, a finned(!) aluminium heatsink at 25 °C measuring 15x8x2 mm comes to only about 50% of the required size/area. And I allowed a max case temperature of 95 °C, which is the usual maximum for commercial parts.

Also important is that phones use their midframe as a heatsink, and it is actually not a good heatsink material: typical AZ91D magnesium alloy has less than 50% of the thermal conductivity of the traditional heatsink aluminium used in laptop coolers.
Phones do not have any airflow, which makes any heatsinking a worst-case scenario. Another thing is that x86 CPUs have direct thermal contact from silicon to heatsink, so it's more effective. Phone SoCs have standard plastic epoxy packages, not optimized for heat sinking at all. This means that even if the chip could dissipate e.g. 5 W, the thermal resistance of the package will never allow it to shed that heat fast enough for longer than a few milliseconds, because the die will heat up past its thermal limit.

All this combined will never allow any chip in a mobile/tablet to compete with its desktop counterpart. Maybe the silicon in them could, but not without a special "desktop/laptop"-type package and a completely different active cooling solution.
There's really no need to dive into the theory, since phones are regularly tested for sustained load at NBC, and the results are far higher than 2 W. Also, most phones in recent years have heat pipes, which are far more efficient than aluminium fins.

Spunjji

  • Guest
It's not that I'm going to argue against this type of article (it's useless), but people dissing Geekbench as 'incomparable between iOS and Windows' or 'it only tests very specific scenarios', yada yada, simply don't get that Apple's A-series chips really are faster than x86.

Not that there's any point in arguing against somebody making a statement that's plainly factually untrue, but no, they're not "really faster than x86". If they were, they'd already be in MacBooks.  ::)

Helium007

  • Guest
With the 3000 mAh battery found in typical phones, @ 3.7 Volts, a power draw of 2 Watts would completely deplete it in 5.5 hours.  5 Watts would deplete it in 2.2 hours.  8 Watts would deplete it in 1.4 hours.  So I'd say the 2 Watt figure is pretty accurate.
No, instead it would make the 5 W figure fairly accurate. For example, the Mate 30 Pro, by NBC's numbers: 4500 mAh, 3.65 hrs under load, which works out to a sustained load of about 4.56 W. Only sustained load measures the heat dissipation between room temperature and the maximum temperature allowed for the SoC.

Well, how do they measure that? That's not SoC LOAD but total POWER input, I guess. It is probably total power input, and even if it were total power dissipation (which you cannot measure easily), it would be the dissipation of ALL components.  For example, RAM and PMIC chips dissipate quite a large amount of power. The only way to get some sort of approximate SoC dissipation value would be cutting all the SoC power lines and connecting them to a bench supply...

So I still stand by values of about 2 or 2.5 W peak.

Sorry, but my job is embedded electronics design and manufacturing, so I really would like to hear some reasonable information about this.

ChrisGX

  • Guest
Yes, indeed, comparisons of smartphone SoCs and desktop CPUs are of limited value, but the benchmarks (when gathered in the correct way: under the right conditions, using defined procedures) are not therefore unreliable or deceptive.

You can actually use benchmark scores gained from mobile device processors to make reasonable conjectures about the likely performance of a desktop processor composed of similar cores. It is to be expected that any processor put inside a Mac will not be an A14 but rather a variant of the A14, with support for desktop-capable IO added, with rather similar synthetic benchmark performance to the A14 (or more likely the A14X), and with rather higher performance than the A14/A14X in sustained benchmark scores and real-world benchmarking. Yes, it is a working assumption that Apple will correctly tailor a variant of the A14 for the desktop environment it is meant for. As it happens, that assumption is very reasonable.

Admittedly, making projections based on past rates of performance improvement is a speculative process that offers nothing more than educated guesses about the performance of unreleased processors. We won't know what the performance of A14 desktop processors is like until we start seeing examples of them in products. It wouldn't surprise me if Apple coaxes Ice Lake U-series-like performance out of its new chips. Lifting GPU performance without causing unwanted increases in the TDP rating will probably be the most challenging piece of the puzzle.
