Lenovo IdeaPad Y580-20994BU Laptop Review

Started by Redaktion, July 29, 2012, 09:45:48


Redaktion

A Good Idea? The $1000 you'll put down for a Y580 will net you high-end performance from some of the top-end Ivy Bridge and Kepler processors. Is this Lenovo best-seller a must-have for gamers and multimedia users on-the-go?

http://www.notebookcheck.net/Lenovo-IdeaPad-Y580-20994BU-Laptop-Review.78974.0.html

Viktor Ko

This computer definitely shows what manufacturers can fit into a 15 inch, sub 3kg chassis.

It is heavier than the MSI GE60 (2.6kg + GTX 660M option), but the Lenovo has it beat when it comes to build quality.

Matty

I've been waiting for a good in-depth review of this model.  It's available for £999 here in the UK with a 64GB mSATA drive and all the other trimmings which is not a bad deal for a high spec semi-portable gaming machine.  It's a shame the display is so glossy, as it appears to have pretty good visual quality.  It also seems to be running much hotter (and in the wrong places this time) than the Y570 - another step backwards.  These are important issues for me and so that crosses this machine off my list now.

DOH!

Ok, this is the 2nd article I've read from you guys in 2 days that has inaccuracies when it comes to describing the graphics cards. In the first paragraph of the Gaming Performance section of your article you say: "The 660M has faster core and shader clock rates than the latter two [670M, 675M], but Nvidia has scaled the performance of the 660M accordingly largely by halving the width of the memory bus: 128-bit versus 256-bit of the 675M." This quote is just wrong: the 660M has higher core clock rates but LOWER shader clock rates, which is why the performance of the 660M is hampered in comparison to the previous-generation 675M, as both have the same number of cores. Basically, assuming core clock speeds are the same, 1 Fermi core = 2 Kepler cores. This is why the Kepler 660M is slower than the Fermi 675M, and NOT because of the difference in memory bus bandwidth between the two.

Come on, if you're going to write tech articles on PCs/laptops, then at least have a current understanding of graphics card technologies. After all, all the information about these graphics cards is already listed in the Graphics Cards section of this website; that's how I learnt about it.
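A back-of-the-envelope sketch of that point (the spec numbers below are from public listings and are approximate; actual clocks vary between notebook models):

```python
# Fermi parts run their shaders at twice the core clock (the "hot clock");
# Kepler shaders run at the core clock. With equal core counts, that alone
# explains most of the gap between the 660M and the 675M.

CORES_675M = 384          # GTX 675M (Fermi), same core count as the 660M
SHADER_CLOCK_675M = 1240  # MHz: ~620 MHz core clock x2 hot clock

CORES_660M = 384          # GTX 660M (Kepler)
SHADER_CLOCK_660M = 835   # MHz: shader clock == core clock

# Relative raw shader throughput ~ cores x shader clock
t675 = CORES_675M * SHADER_CLOCK_675M
t660 = CORES_660M * SHADER_CLOCK_660M

print(t675 / t660)  # ~1.49: the 675M has roughly 50% more raw shader throughput
```

So even before memory bandwidth enters the picture, the shader-clock difference already puts the 660M well behind on paper.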

Allen.Ngo

It should have said memory clock, not shader clock!
Thanks for the catch.


Hræsvelgr

Quote from: dude on July 29, 2012, 18:34:51
8192 MB, DDR3 SDRAM 800 MHz??????? WHY

SDRAM, that's why ;)
It needs less power -> longer battery runtime

Acer TravelMate 5720 | T9300 | HD2600 | 3GB DDR2 600MHz | 320GB HDD | BD player
Built 04/2008
Runs like on day one, ergonomic keyboard - thanks to Acer!

Schenker XMG P722 | i7-3820QM | GTX680M | 16GB DDR3 1600MHz | 256GB SSD | 1TB HDD | BD player
Built 10/2012
Nicely packaged endless power

nissangtr786

Quote from: DOH!
This quote is just wrong, the 660M has higher core clock rates but LOWER shader clock rates, this is why the performance of the 660M is hampered in comparison to the previous generation 675M as both have the same number of cores.  Basically, assuming core clock speeds are the same, then 1 Fermi core = 2 Kepler cores.

DOH, you do realise that if the 660M were a 256-bit card with the same clocks it would embarrass a 675M? A 680M takes less power to run than a 675M and performs 80% better.

Unfortunately, Nvidia's rebranding hampered the mid-range cards. Kepler in essence needs twice as many shaders to match one Fermi shader, but the power-consumption saving means it's great, and Kepler is on 28nm as well.

I reckon Nvidia's revision with the 760M should have a 256-bit bus and perform similar to the current 680M, the 780M should be just over 2x faster than the 580M/675M junk card, and the 770M should take around 30W less power.

nissangtr786

In essence, we should really OC the 680M by 10% and then compare it to a 670M, and OC the 680M by 50% and then compare it to the 675M, as the 680M consumes around 12W less electricity than a 670M.

One more thing look at this 680m info:

http://www.geforce.co.uk/whats-new/articles/introducing-the-geforce-gtx-680m-mobile-gpu/

As you can see in the table below, the GTX 680M eclipses the performance of the GTX 580M by such a margin thanks to its advanced chip design that increases the number of CUDA Cores by 3.5x, memory bandwidth by 19.2GB/s, memory speed by 300MHz, and available memory by 500MB. Most importantly, the GTX 680M achieves such performance even with a GPU clock speed that's 520MHz lower than its predecessor. Lower clock speed means lower power consumption. That the GTX 680M is able to outperform its predecessor while consuming less power is perhaps its greatest merit.
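For what it's worth, those article figures can be sanity-checked against the published specs (the numbers below are from public spec listings and are approximate):

```python
# GTX 580M (Fermi):  384 cores, 1240 MHz shader clock, 1.5 GB GDDR5, 96 GB/s
# GTX 680M (Kepler): 1344 cores, 720 MHz clock, 2 GB GDDR5, 115.2 GB/s

assert 1344 / 384 == 3.5                  # "CUDA Cores by 3.5x"
assert abs((115.2 - 96) - 19.2) < 1e-6    # "memory bandwidth by 19.2GB/s"
assert 1800 - 1500 == 300                 # "memory speed by 300MHz"
assert 1240 - 720 == 520                  # "GPU clock speed that's 520MHz lower"
assert 2048 - 1536 == 512                 # "available memory by 500MB" (rounded)

print("all figures check out")
```

Every claim in the quoted paragraph lines up with those spec numbers.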

DOH!

Quote from: nissangtr786 on July 30, 2012, 01:01:45
DOH you do realise if the 660m was a 256bit card with same clocks it would embarrass a 675m.

I think you're definitely wrong about this. If you were to do the research with Google, you would definitely find references that 1 Fermi core = 2 Kepler cores when it comes to comparing performance. It's because the Fermi products have the shaders hot-clocked at TWICE the frequency of the core clock, whereas the Kepler cores have the shaders at a lower clock equal to the core clock. THIS is why the performance of the 660M is lower than that of the 675M even though it has the same number of cores; it's nothing to do with memory bandwidth at all! Check the guru3d website for more information, where they review the first Kepler-based products; they make reference to the 1 Fermi core = 2 Kepler cores thing I was talking about.

nissangtr786

Quote from: DOH! on July 30, 2012, 12:08:46
Quote from: nissangtr786 on July 30, 2012, 01:01:45
DOH you do realise if the 660m was a 256bit card with same clocks it would embarrass a 675m.

I think you're definitely wrong about this.  If you were to do the research with the google, you would definately find reference that 1 Fermi core = 2 Kepler cores when it comes to comparing performance.  It's because the Fermi products have the shaders hot clocked at TWICE the frequency of the core clock, whereas the Kepler cores have the shaders at a lower clock equal to the clock rate of the core clock.  THIS is why the performance of the 660M is lower than that of the 675M, even when it has the same number of cores, it's nothing to do with memory bandwidth at all!  Check the guru3d website for more information where they review the first Kepler based products, they make reference to the 1 Fermi core = 2 Kepler cores thing I was talking about.

I am amazed that you don't know how big an advantage it is to be a 256-bit card over a 128-bit card.

Here is a noob-guide comment for people like you to understand, found with a quick Google search at http://forums.afterdawn.com/t.cfm/f-216/128_bit_vs_256_bit_video_card-902807/ :

Generally speaking a 256-bit option is better. It refers to the width of the memory bus on the card, so a higher memory bus width means more data can be transferred between the memory and the graphics processor, running at the same speed as the lower bus width. So a 256-bit memory bus can deliver twice as much graphics data to the processor as the 128-bit at the same speed.
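Plugging illustrative numbers into that rule of thumb (the transfer rates below are approximate and vary by card model):

```python
def bandwidth_gb_s(bus_width_bits, transfer_rate_mt_s):
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * transfer_rate_mt_s / 1000

gtx_660m = bandwidth_gb_s(128, 4000)  # 128-bit GDDR5 at ~4000 MT/s effective
gtx_675m = bandwidth_gb_s(256, 3000)  # 256-bit GDDR5 at ~3000 MT/s effective

print(gtx_660m, gtx_675m)  # 64.0 vs 96.0 GB/s
```

Even with the 660M's faster memory chips, the 675M's wider bus gives it roughly 50% more peak bandwidth.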

I know 1 Fermi core is 2 for Kepler, but this is the great thing about Kepler: using this technique they can afford to lower the clock speed and still annihilate the 675M, taking around 65W less electricity and performing around 80% faster, and you can OC the card by 50% and still take less power than the 675M while performing basically 120% faster.

Here is a power consumption reading here:
http://www.hardwareluxx.de/images/stories/galleries/reviews/gtx_680m/bench_strom_load.jpg
thread here:
http://forum.notebookreview.com/gaming-software-graphics-cards/675454-new-gtx-680m-review-truth.html

I think the 660M, if 256-bit, would just about edge the 675M.
Look at this business Kepler-based card, which beats the 675M with its 256-bit bus by a bit. I know it has more unified shaders than the 660M, but the K3000M is clocked a lot slower.

http://www.notebookcheck.net/NVIDIA-Quadro-K3000M.76896.0.html

IMO buying a 670M over a 660M, or a 675M over a 680M/7970M, is like buying a Pentium 4 3.8GHz HT over an Intel Atom: the slowest, most power-hungry choice in terms of performance per watt.

DOH!

Quote from: nissangtr786 on July 31, 2012, 04:12:02
I am amazed that you don't know how big an advantage it is to be a 256bit card over a 128bit card. [...] IMO buying a 670m over a 660m or 675m over a 680m/7970m is like buying a pentium 4 3.8ghz ht over an intel atom.

For all the research you seem to have done, you are surprisingly clueless and lacking in basic understanding of this topic. All you seem to be doing is coming out with random snippets of facts that you know about graphics cards, which don't bear any relation to our discussion; it's almost like you're cutting and pasting stuff without understanding what you're referring to. I'm not going to bother trying to convince you anymore; after all, it's not that important really! No one really cares whether you or I are right about this petty argument! I just hope you don't work in the tech industry in any shape or form, because that would be painful!

Jan Andersen

Lenovo and Intel seem to be among the flop manufacturers of the year 2012.

As to Lenovo, their 2012 notebooks (except some ThinkPads) continue with glossy screens, where many others have finally gone back to matte screens. Perhaps news travels slowly in China. Apparently users have to pay extra for screen quality (a matte screen) at Lenovo. As to this notebook, it is a mystery why they insist on stuffing in a dedicated graphics card when the processor comes with an HD 4000, adding further heat and noise to the unit. Why can't manufacturers split their product lines into gaming and normal/light-gaming? Instead, users are prisoners, having to buy a dedicated graphics card we do not need - and which, in two years and a few months, will melt down the unit. Funny that we have to pay extra for NOT having a dedicated graphics card - one really wonders what the real purpose is of stuffing in an extra component that isn't needed?!

As to Intel, they are probably the main flop of 2012, with the new Ivy Bridge processors being a flop, which is apparent in this test as well - running too hot. Ivy Bridge was supposed to give us cooler, quieter and thinner devices - but instead we get devices running even hotter than last year, and devices still well above 2.4 kg for a normal 15".


Andy7118

Great review! Waited so long for the review of Y580 since its release. I have some questions regarding Y580:

A) The temperature test showed an average of 30°C at idle, reaching an average of 38°C under load. It even reaches 50°C in the middle of the keyboard under load. Considering all the energy-efficient parts such as Ivy Bridge/Kepler/copper heat sink, it gets quite hot, to the point where it might break the components in the short run. Therefore, is it advisable to get the 7200rpm upgrade?

B) Also, could you review the 2012 HP Pavilion dv6t-7000 quad edition with the GTX 660M? As both the 2012 dv6t and the Y580 have somewhat similar ports/specs/price, I believe it is a tight competitor to the Y580. Thank you for your time, and I hope to hear from you guys. Keep up the good work!
