Microsoft debuts liquid cooling for datacenters at a US development facility

Started by Redaktion, April 10, 2021, 17:45:15


Redaktion

Microsoft has implemented what it calls the world's first liquid-cooling system for high-performance computing (HPC) at a datacenter in Washington, US. The prototype rig uses a custom-designed, electronics-safe dielectric fluid rather than water, which means the servers can be immersed in it directly. Microsoft asserts that this confers various advantages over alternatives such as air cooling.

https://www.notebookcheck.net/Microsoft-debuts-liquid-cooling-for-datacenters-at-a-US-development-facility.531599.0.html

_MT_

The only thing new about this might be the liquid used, if they used something new. Others have played with immersion before, and 3M has had a product for that application for some time (I remember someone from 3M giving a talk on this very subject in Paris, 2012 I think). Direct liquid cooling might be fairly rare in datacentres, but it exists. For example, Supermicro makes a liquid-cooled GPU server. Mellanox (now Nvidia) makes a liquid-cooled switch (a million-dollar, 800-port beauty). Liquid cooling has been employed in computing since something like the '60s, although typically in mainframes and supercomputers, not your run-of-the-mill HPC (the traditional methods can be complex and expensive, since datacentres and leaks don't mix). There were liquid-cooled blade servers about a decade ago, made by IBM and HP IIRC.

S.Yu

Drain the liquid before opening the lid, even I could think of that.
Robothales

2-phase immersion sux, get good

And it's not even the first immersion cooling in HPC.

_MT_

Quote from: S.Yu on April 12, 2021, 00:27:59
Drain the liquid before opening the lid, even I could think of that.
A tank is going to contain several servers; in this case, I believe there are about 40 blades. Draining the tank would mean shutting down all of them first. That's not what you want — it's like shutting down a whole rack to work on a single server. Besides, simply switching the servers off, or just removing load from them, should rapidly reduce evaporation by itself. A server can take a very long time to start up (I have seen servers that take over half an hour). Frankly, it might be better to just leave a failed blade be and let problems accumulate before servicing.

I recall that the 3M demonstrator from 2012 reportedly had very small losses from hot-swapping, though I don't think they quantified it. They also claimed servers could be pulled out dry. In their case, the heat exchanger was mounted well below the lid, off to the side, and the whole thing operated at atmospheric pressure, so there should have been a layer of air beneath the lid. They also mentioned an extra heat exchanger just for hot-swapping, mounted above the primary one, close to the lid I believe.

I did design an immersion cooling solution over 10 years ago, but I was primarily concerned with creating a front-loading system rather than a top-loading bath. It seemed like an interesting challenge, given that liquids pool at the bottom under gravity. Horizontal baths struggle to utilize vertical space, which is typically abundant in datacentres. Not that lifting heavy servers up is fun.
