Google’s Chiller-less Data Center

Google (GOOG) has begun operating a data center in Belgium that has no chillers to support its cooling systems. Google’s Chiller-less Data Center is an article describing how Google eliminated chillers at its data center in Belgium. The facility relies entirely on free air cooling, because the maximum summer temperature in Brussels is lower than the temperature at which Google maintains its data centers. If, for some strange reason, it does get too hot, Google shuts down the center and shifts the workload elsewhere. This approach makes local weather a factor in network management. The ability to seamlessly shift workloads between data centers also creates intriguing energy management possibilities.
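As a thought experiment, the decision described above could be sketched roughly as below. This is a minimal illustration only; the temperature threshold, site names, and data structures are assumptions for the sketch, not anything Google has published.

```python
# Purely illustrative sketch: drain a free-cooled site when the outside
# temperature approaches the allowed limit and run the work somewhere cooler.
# The threshold and site data below are assumed values, not Google's figures.

FREE_COOLING_LIMIT_C = 27.0  # assumed maximum acceptable outside-air temperature

def sites_fit_for_free_cooling(sites):
    """Return the sites whose current outside air is cool enough to use directly."""
    return [s for s in sites if s["outside_temp_c"] < FREE_COOLING_LIMIT_C]

sites = [
    {"name": "brussels",  "outside_temp_c": 31.0},  # an unusually hot day
    {"name": "elsewhere", "outside_temp_c": 22.0},
]

eligible = sites_fit_for_free_cooling(sites)
if not any(s["name"] == "brussels" for s in eligible):
    print("Brussels too warm for free cooling; shift workload to:",
          [s["name"] for s in eligible])
```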

[Image: google-cooling1]

4 Comments

  1. Burton Haynes says:

    Very informative article… Looking forward to more articles on your blog

  2. Facebook datacenter “secrets” « Tomi Engdahl’s ePanorama blog says:

    [...] love for all things open, details of its massive data centers have always been a closely guarded secret. Google usually talks about its servers once they’re [...]

  3. Tomi Engdahl says:

    Using warm water for data center cooling
    http://www.csemag.com/single-article/using-warm-water-for-data-center-cooling/c7ec23598a9d71d8731f5b92c4749678.html

    There are many ways to cool a data center. Engineers should explore the various cooling options and apply the solution that’s appropriate for the application.

    Moving toward today’s technology

    One can glean from this information that it wasn’t until the late 20th/early 21st century that computing technology really took off. New processor, memory, storage, and interconnection technologies resulted in more powerful computers that use less energy on a per-instruction basis. But one thing remained constant: All of this computationally intensive technology, enclosed in ever-smaller packages, produced heat—a lot of heat.

    As computer designers and engineers honed their craft and continued to develop unbelievably powerful computers, the thermal engineering teams responsible for keeping the processors, memory modules, graphics cards, and other internal computer components at an optimal temperature had to develop innovative and reliable cooling solutions to keep pace with this immense computing power. For example, modern-day computational science may require close to 3,000 cores, roughly the equivalent of 375 servers, in a single rack. This equates to an electrical demand (and corresponding cooling load) of 90 kW per rack, which yields a data center with an electrical density of considerably more than 1,000 W/sq ft, depending on the data center layout and the amount of other equipment in the room (a back-of-the-envelope check of these figures is sketched after the comments). With numbers like these, one thing is clear: conventional air cooling will not work in this type of environment.

    Current state of data center cooling

    Data center cooling systems, employing the most current and common industry methodologies, range from split-system, refrigerant-based components to more complex (and sometimes exotic) arrangements such as liquid immersion, where modified servers are submerged in a mineral oil-like solution, eliminating all heat transfer to the ambient air because the circulating oil solution becomes the conduit for heat rejection. Other complex systems, such as pumped or thermosyphon carbon-dioxide cooling, also offer very high efficiencies in terms of the volume of heat-rejection media needed; 1 kg of carbon dioxide absorbs the same amount of heat as 7 kg of water (a flow-rate comparison based on this ratio is sketched after the comments). This can potentially reduce piping and equipment sizing, and also reduce energy costs.

    Water-based cooling in data centers falls somewhere between the basic (although tried-and-true) air-cooled direct expansion (DX) systems and complex methods with high degrees of sophistication. And because water-based data center cooling systems have been in use in some form or another for more than 60 yr, there is a lot of analytical and historical data on how these systems perform and where their strengths and weaknesses lie. The most common water-based approaches today can be aggregated anecdotally into three primary classifications: near-coupled, close-coupled, and direct-cooled.

    Most of these complications stem from proximity and physical containment; if the hot air escapes into the room before the cold air can mix with it and reduce the temperature, the hot air now becomes a fugitive and the cold air becomes an inefficiency in the system. In all air-cooled data centers, a highly effective method for reducing these difficulties is to use a partition system as part of an overall containment system that physically separates the hot air from the cold air, allowing for a fairly precise cooling solution.

  4. Tomi Engdahl says:

    Think Big-Picture ‘Hyperscale’ Cooling
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1328719&

    When you have megawatts to dissipate, cooling is about a lot more than heat sinks and other localized heat-removal approaches.

    Most electronic, mechanical, and thermal engineers are concerned with keeping the temperature of their IC or printed circuit board below some maximum allowable value. Others are more worried about the overall enclosure, which can range from a self-contained package such as a DVR to a standard rack of boards and power supplies.

    Basic techniques for getting heat out of an IC, board, or enclosure involve one or more of heat sinks, heat spreaders (PC-board copper), heat pipes, cold plates, and fans; designs can sometimes move up to more-active cooling approaches, including air conditioning or embedded pipes with liquid flow. That’s all well and good, but obviously not good enough for the megawatts of a “hyperscale” data center. (If you are not sure what a hyperscale data center is, there’s a good explanation here.) While there is no apparent formal standard on the minimum power dissipation to be considered hyperscale, you can be sure it’s in the hundreds-of-kilowatts to megawatts range.

    But where does all that heat go? Where is the “away” to which the heat is sent? If you’re cooling a large data center, that “away” is hard to get to, and it doesn’t necessarily want to take all the heat you are dissipating.

    A recent market study from BSRIA offered some insight into hyperscale data-center cooling options and trends. I saw a story on the report in the November issue of Cabling Installation & Maintenance, a publication that gives a great real-world perspective on the nasty details of actually running all those network cables, building codes, cabling standards, and more. (After looking through this magazine, you’ll never casually say “no big deal, it’s just an RJ-45 connector” again.)

    BSRIA summarized their report and used a four-quadrant graph (below) of techniques versus data-center temperatures to clarify what is feasible and what is coming on strong. Among the options are reducing dissipation via variable-speed drives and modular DC supplies, cooling techniques from liquid cooling to adiabatic evaporative cooling, or allowing a rise in server-inlet temperature. The graph also shows the growth potential versus investment level required for each approach; apparently, adiabatic/evaporative cooling is the “rising star.”

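For the rack-density figures quoted in the warm-water cooling comment above (3,000 cores, roughly 375 servers, and 90 kW per rack), a quick back-of-the-envelope check looks like this. The floor area allotted per rack is an assumed value, since the quoted article only notes that the result depends on the data center layout.

```python
# Back-of-the-envelope check of the rack-density figures quoted above.
# The floor area per rack (rack footprint plus its share of aisles and
# support space) is an assumption used only to illustrate the W/sq ft claim.

cores_per_rack = 3000
servers_per_rack = 375
rack_power_kw = 90.0
floor_area_per_rack_sqft = 60.0  # assumed: rack plus aisle and support space

cores_per_server = cores_per_rack / servers_per_rack              # 8 cores per server
density_w_per_sqft = rack_power_kw * 1000 / floor_area_per_rack_sqft

print(f"{cores_per_server:.0f} cores per server")
print(f"{density_w_per_sqft:.0f} W/sq ft")                        # 1500 W/sq ft
```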
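The 1 kg of CO2 to 7 kg of water comparison from the same comment translates into required coolant mass flow as follows. The heat load and the water supply/return temperature difference are assumed values chosen only to make the arithmetic concrete.

```python
# Rough coolant mass-flow comparison using the 7:1 figure quoted above
# (1 kg of CO2 absorbs about as much heat as 7 kg of water). The heat load
# and the water temperature rise are assumed values for illustration.

heat_load_kw = 90.0          # e.g. one high-density rack from the earlier example
water_cp_kj_per_kg_k = 4.19  # specific heat of water
water_delta_t_k = 10.0       # assumed supply/return temperature difference

water_flow_kg_s = heat_load_kw / (water_cp_kj_per_kg_k * water_delta_t_k)
co2_flow_kg_s = water_flow_kg_s / 7.0    # per the 7:1 ratio in the article

print(f"water: {water_flow_kg_s:.2f} kg/s, CO2: {co2_flow_kg_s:.2f} kg/s")
```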
