Quote from: william blake on February 01, 2020, 00:07:12
thanks a lot but i am not a spec user. my ipc consists of games, browsers and some video editing.
"SPEC user" - there's no such thing, lol. SPEC is an industry standard benchmark suite consisting of sample workloads from a wide range of real-world applications. Maybe look into what you're commenting on before dismissing it?
Quote from: S.Yu on February 01, 2020, 09:05:24
Quote from: Valantar on January 31, 2020, 23:30:48
Comet Lake is still Skylake, right? If so, Zen 2 is ahead on IPC, not slightly behind. Ahead by about 6.5% according to Anandtech's testing in SPEC2017.
IIRC they squeezed a few more percentages each year...or every couple of years or so since Skylake; bottom line, the IPC hasn't been at a total standstill since Skylake.
Actually it has. Beyond hardware mitigations replacing software fixes for security vulnerabilities, there have been zero relevant architectural changes from Skylake to Coffee Lake, and IPC is identical; the only performance increases have come from clock speed increases. The testing I referred to used a 9900K, btw, so the ~6.5% advantage is up-to-date.
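To illustrate the point about clocks vs. IPC, here's a quick sketch (the clock numbers are just illustrative placeholders, not measured results): with identical IPC, all of the generational performance gain falls out of the clock speed term alone.

```python
# Hypothetical sketch: single-thread performance scales as IPC * clock,
# so identical IPC with higher clocks still means higher performance.
# The IPC and clock values below are illustrative, not measured data.

def perf(ipc, clock_ghz):
    """Relative single-thread performance as IPC * clock."""
    return ipc * clock_ghz

skylake = perf(ipc=1.00, clock_ghz=4.2)      # placeholder Skylake-era boost clock
coffee_lake = perf(ipc=1.00, clock_ghz=5.0)  # placeholder 9900K-era boost, same IPC

gain = coffee_lake / skylake - 1
print(f"Clock-only performance gain: {gain:.1%}")
```

With the same IPC plugged in for both, the ratio of the two results is exactly the ratio of the clocks - which is the whole argument in miniature.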
Quote from: william blake on February 02, 2020, 02:22:34
Quote from: S.Yu on February 01, 2020, 21:12:55
GB is not specific enough about what it measures
is this https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf not enough?
and why is it worse than this https://www.spec.org/cpu2017/Docs/overview.html ?
anyway, iirc, spec (average from many tests) is pretty close to cinebench (single test) in terms of ipc comparison across architectures, but ipc measured in games is different from spec and cinebench.
and here is my main problem with using spec results as a reference for ipc. for a non-gamer it's fine, but we should use something like (spec+games)/2 for an average pc user; it should provide a more accurate picture.
Talking about IPC in a gaming context is sadly almost impossible (or at least irrelevant), as adding a GPU inherently introduces too many uncontrollable variables to be able to identify something that can reliably be pointed out as CPU IPC. Different CPU architectures can treat various parts of the GPU driver differently, load data differently, etc. - and this will likely vary across GPU vendors too. Drivers will also have different levels of optimization for different architectures.

So for any type of normalized test you'd need not only a repeatable workload (which can be done) and a selection of CPUs to test at a common clock speed, but also a normalized GPU at a fixed performance level - and that becomes problematic, as performance parity across CPU vendors with GPUs from different vendors can't be guaranteed. I.e. you'd end up with at least four classes: AMD GPU and AMD CPU, AMD GPU and Intel CPU, Nvidia GPU and AMD CPU, and Nvidia GPU and Intel CPU. Each would in all likelihood give different results.

Simplifying this into a number you can call "IPC" becomes impossible, as external uncontrollable factors like GPU drivers and their optimizations for specific CPU architectures would inherently skew the numbers, invalidating the benchmark - you'd no longer be testing CPU IPC, but GPU driver optimization instead. This is demonstrated rather beautifully by Intel having lower CPU IPC but still winning slightly in gaming performance, thanks to a combination of higher clock speeds and better optimizations for their architecture (not to mention, of course, that most games have relatively few high-performance threads, which somewhat nullifies the advantage of having more fast cores vs. fewer slightly faster ones).
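The four classes I mean can be enumerated as a simple test matrix - the vendor names here are just labels, and the point is only that any controlled gaming "IPC" test would need a separate normalized run per combination:

```python
# Enumerate the GPU/CPU vendor combinations as a test matrix. Each pairing
# would need its own normalized benchmark run, since driver optimizations
# differ per combination. Vendor lists are just illustrative labels.
from itertools import product

gpu_vendors = ["AMD", "Nvidia"]
cpu_vendors = ["AMD", "Intel"]

test_matrix = list(product(gpu_vendors, cpu_vendors))
for gpu, cpu in test_matrix:
    print(f"{gpu} GPU + {cpu} CPU")  # each combination needs its own run

print(len(test_matrix), "combinations")
```

And that's before you account for different GPU models, driver versions, and per-game optimizations multiplying the matrix further.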
Talking about overall gaming performance across architectures, on the other hand, is of course possible, as it isn't dependent on normalizing anything beyond the workload, and simply asks which hardware configuration performs best in closer-to-real-world scenarios. This is where Intel currently has the upper hand even in the desktop segment, though it remains to be seen whether this also holds true in the more power-limited mobile segment, given Zen 2's superior efficiency and clock scaling at low power.