How Clean is Your Cloud and Telecom?

The Greenpeace report How Clean is Your Cloud?, which I saw mentioned in 3T magazine news, is actually quite interesting reading. This year’s report provides a look at the energy choices of some of the largest and fastest-growing IT companies. The report analyzes 14 IT companies and the electricity supply chains of more than 80 data centers.


The report also contains lots of interesting background information on both IT and telecom energy consumption. I recommend checking it out. Here are some points picked from the How Clean is Your Cloud? report:

Facebook, Amazon, Apple, Microsoft, Google, and Yahoo – these global brands and a host of other IT companies are rapidly and fundamentally transforming the way in which we work, communicate, watch movies or TV, listen to music, and share pictures through “the cloud.”

The growth and scale of investment in the cloud is truly mind-blowing, with estimates of a 50-fold increase in the amount of digital information by 2020 and nearly half a trillion dollars in investment in the coming year, all to create and feed our desire for ubiquitous access to infinite information from our computers, phones and other mobile devices, instantly.

The engine that drives the cloud is the data center. Data centers are the factories of the 21st century information age, containing thousands of computers that store and manage our rapidly growing collection of data for consumption at a moment’s notice. Given the energy-intensive nature of maintaining the cloud, access to significant amounts of electricity is a key factor in decisions about where to build these data centers. Industry leaders estimate nearly $450bn is being spent annually on new data center space.

Since electricity plays a critical role in the cost structure of companies that use the cloud, there have been dramatic strides made in improving the energy efficiency design of the facilities and the thousands of computers that go inside. However, despite significant improvements in efficiency, the exponential growth in cloud computing far outstrips these energy savings.

How much energy is required to power the ever-expanding online world? What percentage of global greenhouse gas (GHG) emissions is attributable to the IT sector? Answers to these questions are very difficult to obtain with any degree of precision, partially due to the sector’s explosive growth, a wide range of devices and energy sources, and rapidly changing technology and business models. The estimates of the IT sector’s carbon footprint performed to date have varied widely in their methodology and scope. One of the most recognized estimates of the IT sector’s footprint was conducted as part of the 2008 SMART 2020 study, which established that the sector is responsible for 2% of global GHG emissions.

The combined electricity demand of the internet/cloud (data centers and telecommunications network) globally in 2007 was approximately 623bn kWh (if the cloud were a country, it would have the fifth largest electricity demand in the world). Based on current projections, the demand for electricity will more than triple to 1,973bn kWh (an amount greater than the combined total demand of France, Germany, Canada and Brazil).

The report indicates that, due to the economic downturn and continued energy-efficiency and performance improvements, global energy demand from data centers increased by only 56% from 2005 to 2010. Estimates of data center electricity demand come in at 31GW globally, with an increase of 19% expected in 2012 alone. At a time when global electricity consumption is otherwise essentially flat due to the global recession, this is still a staggering rate of growth.

Given the scale of predicted growth, the source of electricity must be factored into a meaningful definition of “green IT”. Energy efficiency alone will, at best, slow the growth of the sector’s footprint. The replacement of dirty sources of electricity with clean renewable sources is still the crucial missing link in the sector’s sustainability efforts according to the report.


The global telecoms sector is also growing rapidly. Rapid growth in the use of smartphones and broadband mobile connections means mobile data traffic in 2011 was eight times the size of the entire internet in 2000. It is estimated that global mobile data traffic grew 133% in 2011, with 597 petabytes of data sent by mobiles every month. In 2011, there were an estimated 6 billion mobile telephone subscriptions, a penetration rate of 86.7% of the global population. By the end of 2012, the number of mobile-connected devices is expected to exceed the global population. Electronic devices and the rapidly growing cloud that supports our demand for greater online access are clearly a significant force in driving global energy demand.

What about telecoms in the developing and newly industrialized countries? The report has some details from India (by the way, it is expected that India will pass China to become the world’s largest mobile market in terms of subscriptions in 2012). Much of the growth in the Indian telecom sector is from India’s rural and semi-urban areas. By 2012, India is likely to have 200 million rural telecom connections at a penetration rate of 25%. Out of the existing 400,000 mobile towers, over 70% are in rural and semi-urban areas where grid-connected electricity is either not available or the supply is irregular. As a result, off-grid mobile towers and, increasingly, grid-connected towers in these areas rely on diesel generators to power their network operations. The consumption of diesel by the telecoms sector currently stands at a staggering 3bn liters annually, second only to the railways in India.

What is the case in other developing and newly industrialized countries? I don’t actually know.

NOTE: Please note that many figures given in the report are just estimates based on quite little actual data, so they might be somewhat off from the actual figures. Given the source of the report, I would guess that if the figures are off, they are most probably off in the direction that makes the environmental effect look bigger than it actually is.


  1. Tomi Engdahl says:

    48V direct-conversion dramatically improves data-center energy efficiency

    It’s easy to summarize the power needs and costs of data centers and servers in a single word: enormous. Of course, there’s much more to the story than this. These critical network hubs – which are now woven deeply into society’s infrastructure – require megawatts to function, resulting in very high power-related direct-operating costs. Those costs are further extended by the costs associated with dissipating all the associated heat the equipment generates.

    Consider a representative 5000 ft2 (465 m2) server/data center. It uses about 1 MW, with a power usage effectiveness (PUE) rating between 1.2 and 2.

    These PUE numbers mean that non-core losses range from about 20% to 100% above the basic operating requirements. The higher the PUE, the higher the total cost of ownership (TCO); depending on how it is defined, PUE may not even directly account for the cost of getting rid of all the power that is wasted, transformed into heat, and must somehow be removed. A lower PUE also directly reduces associated CO2 emissions and carbon footprint, and so has regulatory implications.
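Since PUE is defined as total facility power divided by IT equipment power, the 20%–100% overhead figure follows directly. A minimal sketch (the 1 MW facility is from the text above; the helper names are mine):

```python
def overhead_fraction(pue: float) -> float:
    """Non-core losses as a fraction of the basic IT load (PUE - 1)."""
    return pue - 1.0

def it_power_kw(total_facility_kw: float, pue: float) -> float:
    """Power actually reaching the IT equipment for a given facility draw."""
    return total_facility_kw / pue

# The representative 1 MW facility, at both ends of the quoted PUE range:
for pue in (1.2, 2.0):
    print(f"PUE {pue}: overhead {overhead_fraction(pue):.0%}, "
          f"IT load {it_power_kw(1000, pue):.0f} kW")
```

At PUE 2.0, only half of the megawatt reaches the servers; the rest goes to cooling, conversion losses, and other overhead.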

    The challenge in reducing PUE is that there is no dominant source of loss in the server or data center. Instead, the losses are spread along the entire power-distribution chain, starting with the primary AC supply and going down to the low-voltage DC rails supplied to individual ICs.

    There are cumulative sources of inefficiency as power passes from the line mains through 48VDC/12VDC converters and multiple 12V-to-single-digit-voltage rail supplies.

    Losses add up quickly

    Simple math shows the impact of cumulative losses along the power path. Assume there are four stages between the 480 VAC/DC mains and the ultimate low-voltage rails, each with an efficiency of 90% (actual numbers will vary for each stage, of course). The end-to-end efficiency is the product of these individual efficiencies, and drops down to just 65.6% – a substantial loss.

    What can be done? The “obvious” answer is to improve the efficiency of each stage, and that has been the dominant strategy. If each of those four 90% ratings can be boosted to 92%, the overall efficiency will increase to about 71.6%.
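The cascade arithmetic is easy to reproduce; a minimal sketch (the chain_efficiency helper is my name, not from the article):

```python
from math import prod

def chain_efficiency(stage_effs):
    """End-to-end efficiency of a cascade of converter stages."""
    return prod(stage_effs)

baseline = chain_efficiency([0.90] * 4)  # four stages at 90% -> ~65.6%
improved = chain_efficiency([0.92] * 4)  # each boosted to 92% -> ~71.6%
print(f"{baseline:.1%} -> {improved:.1%}")
```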

    A system which is 90% efficient is clearly 10% inefficient. Even a 1% improvement is a huge gain:

    From AC mains/400VDC → 48VDC → 12VDC PoL → single-volt rails

    Historically, the power path has used an intermediate voltage of 48VDC, which then feeds numerous 12V point of load (PoL) DC/DC converters that produce the specific end-use rail voltages, such as 12V, 5V, 3.3V, 1.2V, and even sub-1V. This topology worked well, and improvements in efficiency in the intermediate converter stages and the PoL units made it a successful approach which has lasted for many years.

    Direct conversion offers a better approach

    Fortunately, a new approach called direct conversion offers a path out of the dilemma. If you completely eliminate one of the power-conversion stages, such as the 48V/12V intermediate stage, and instead go directly from 48V DC to the low-voltage rails, the impact is significant. Looking at the four-stage 90% example again, going to just three 90% stages improves efficiency from 65.6% to 72.9%.
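As a quick check of the figures in this paragraph (a sketch of the arithmetic, not vendor data), removing one 90% stage multiplies out as follows:

```python
four_stage = 0.90 ** 4   # AC -> 48V -> 12V -> PoL chain: ~65.6%
three_stage = 0.90 ** 3  # direct 48V-to-PoL conversion:  ~72.9%
print(f"{four_stage:.1%} -> {three_stage:.1%}")
```

Dropping a stage gains more than polishing every remaining stage by two points, which is the core argument for direct conversion.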

    There’s another very good reason to skip the 12V intermediate stage: the bus bar behind the rack brings hundreds of amperes to the server boards at 12V. The associated losses, which are already high, are becoming even more significant as these current levels continue to increase. Increasing the distribution voltage to 48V greatly reduces these bus-bar distribution losses. Using 48V as the distribution voltage is a reasonable compromise between the need to decrease the losses and the safety regulations which begin at 60V. Also, 48V distribution is compatible with distributed uninterruptible power supplies (UPS) where the energy storage unit (typically a 42-to-48V battery) is located close to the rack, rather than at a centralized UPS sited far from the equipment.
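The bus-bar argument is just Ohm’s law: for a fixed power, conduction loss scales as I²R, so quadrupling the distribution voltage cuts the distribution loss sixteen-fold. A sketch with illustrative numbers (the 12 kW rack load and 1 mΩ bus resistance are my assumptions, not from the article):

```python
def busbar_loss_w(power_w: float, bus_voltage_v: float, resistance_ohm: float) -> float:
    """I^2 * R loss in a distribution bus delivering power_w at bus_voltage_v."""
    current_a = power_w / bus_voltage_v
    return current_a ** 2 * resistance_ohm

loss_12v = busbar_loss_w(12_000, 12.0, 0.001)  # 1000 A -> 1000 W lost
loss_48v = busbar_loss_w(12_000, 48.0, 0.001)  # 250 A  ->  62.5 W lost
print(loss_12v / loss_48v)  # ratio is (48/12)^2 = 16
```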

    Of course, it is easy to propose direct conversion; it is actually hard to execute. Several manufacturers have devised “partial” solutions.

    ST’s three-IC solution embodies advanced concepts

    To allow power-system architects to realize the benefits of direct conversion, ST developed a multi-IC solution with what is called Isolated Resonant Direct Conversion technology, along with the critical infrastructure which supports it.

    There’s no question that the existing multistage power-conversion chain has worked well, but its time has come to a close. It’s no longer sufficient for the task of meeting the efficiency needs and growing server/data center power demands. Further, it cannot meet the VR13 specification, lacks scalability and flexibility, and is not highly efficient across all load ranges.

    That’s why the multichip direct-conversion solution developed by STMicroelectronics, featuring power conversion from 48VDC directly down to the individual IC rail voltages, is a better solution.

  2. Tomi Engdahl says:

    Got Energy?

    Why everyone needs to start taking power more seriously, and what you can do about it.

    Energy is a finite resource, which means it’s not someone else’s problem. It’s everyone’s problem.

    This isn’t just another doom and gloom prediction. Energy consumption has been rising steadily for decades. Unfortunately, it has been increasing at a faster rate than energy production. A Semiconductor Industry Association report entitled, “Rebooting the IT Revolution: A Call to Action,” says we could run out of energy to power computers by 2040.

    So what can we do about it? There are ways to save significant amounts of energy at the system, component, and sub-system level.

    Keeping computers running takes the equivalent of 30 large power plants. The real problem, according to the report, is that power is wasted when computers sit idle—particularly the ones that are plugged into the wall. The group argues that implementing new standards could save U.S. consumers $3 billion a year.

    It’s not just computers, though. All electronics can benefit from better energy management. Just as cars idling in traffic burn fuel, so do electronics. And as more devices are added, particularly those that are always on, the more energy will be wasted.

    So where do you stand on power?

  3. Tomi Engdahl says:

    Power/Performance Bits: Oct. 11

    Data center on chip

    Researchers from Washington State University and Carnegie Mellon University presented a preliminary design for a wireless data-center-on-a-chip at the Embedded Systems Week conference in Pittsburgh.

    Data centers are well known as energy hogs, and they consumed about 91 billion kilowatt-hours of electricity in the U.S. in 2013, which is equivalent to the output of 34 large, coal-fired power plants, according to the Natural Resources Defense Council. One of their major performance limitations stems from the multi-hop nature of data exchange.

    In recent years, the group designed a wireless network on a computer chip

    The new work expands these capabilities for a wireless data-center-on-a-chip. In particular, the researchers are moving from two-dimensional chips to a highly integrated, three-dimensional, wireless chip at the nano- and microscales that can move data more quickly and efficiently.

    The team believes they will be able to run big data applications on their wireless system three times more efficiently than the best data center servers.

    Wireless data-center-on-a-chip aims to cut energy use

    Personal cloud computing possibilities

    As part of their grant, the researchers will evaluate the wireless data center to increase energy efficiency while also maintaining fast, on-chip communications. The tiny chips, consisting of thousands of cores, could run data-intensive applications orders of magnitude more efficiently compared to existing platforms. Their design has the potential to achieve a comparable level of performance as a conventional data center using much less space and power.

    It could someday enable personal cloud computing possibilities, said Pande, adding that the effort would require massive integration and significant innovation at multiple levels.

    “This is a new direction in networked system design,’’ he said. “This project is redefining the foundation of on-chip communication.”

  4. Tomi Engdahl says:

    California Computer Efficiency Standard Nears Finish Line

    Is California ready to approve the nation’s first mandatory efficiency standard for computers, monitors and signage displays?

    It appears so. Earlier this year, I wrote about the California Energy Commission’s (CEC) initial proposed requirements (California Continues Drive for Computer and Display Efficiency). After a good deal of discussion with industry stakeholders, resulting in some specification and timing modifications, the CEC has now published their Efficiency Rulemaking Express Terms, which they hope will be the final regulatory language approved by year end. The Commission believes that consumers and businesses will save over $370 million in energy costs from these standards.

    Covered in this computer regulation are desktops, notebooks (including mobile gaming systems), thin-clients, small-scale servers, and workstations. Excluded are tablets, smartphones, game consoles, handheld gaming devices, servers other than small-scale units, and industrial computers. The CEC believes that the core opportunity for computer energy savings is in limiting the unit’s energy consumption during non-productive idle, standby, and off modes.

    A computer’s maximum allowable annual energy consumption is determined partly by its expandability score (ES). ES is used to correlate the power-supply sizing necessary for a computer to provide the required power to the core system plus any potential expansions.

  5. Tomi Engdahl says:

    Hyperscale data centers make commitments to renewable energy

    An often-cited nine-year-old report from the United States Environmental Protection Agency (EPA) estimated that data centers accounted for approximately 1.5 percent of the country’s total electricity consumption. The EPA’s “Report to Congress on Server and Data Center Efficiency,” published in August 2007, was written in response to the U.S. Congress’s Public Law 109-431, which requested such a report. The EPA said the 133-page document “assesses current trends in energy use and energy costs of data centers and servers in the U.S. and outlines existing and emerging opportunities for improved energy efficiency.”

    Based on data gathered through 2006, the report states, “The energy used by the nation’s servers and data centers is significant. It is estimated that this sector consumed about 61 billion kilowatt-hours (kWh) in 2006 (1.5 percent of total U.S. electricity consumption) for a total electricity cost of about $4.5 billion. This estimated level of electricity consumption is more than the electricity consumed by approximately 5.8 million average U.S. households (or about five percent of the total U.S. housing stock).”
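The EPA figures are mutually consistent, which is a useful sanity check; a small sketch (variable names are mine) backing out the implied electricity price and per-household usage:

```python
consumption_kwh = 61e9   # quoted 2006 US server/data center consumption
cost_usd = 4.5e9         # quoted total electricity cost
households = 5.8e6       # quoted household equivalent

price_usd_per_kwh = cost_usd / consumption_kwh    # ~0.074, a plausible mid-2000s US rate
kwh_per_household = consumption_kwh / households  # ~10,500 kWh/year per household
print(price_usd_per_kwh, kwh_per_household)
```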

    The report detailed practices that could be adopted to avoid a continued escalation of electricity consumption by data centers. In a practical sense, the report also served to kick off the EPA’s EnergyStar program for data center facilities and, later, for data center equipment. From a public relations standpoint, the report cast data centers as energy hogs. In doing so, it branded the largest data centers, frequently referred to as hyperscale data centers, as hyperconsumers of electricity.

    “When it comes to sustainability, we’ve made important progress as a company since the start of this decade, but even more important work lies ahead … We need to keep working on a sustained basis to build and operate greener data centers that will serve the world well. For Microsoft, this means moving beyond data centers that are already 100-percent carbon neutral to also having those data centers rely on a larger percentage of wind, solar and hydropower electricity over time. Today roughly 44 percent of the electricity used by our data centers comes from these sources. Our goal is to pass the 50-percent milestone by the end of 2018, top 60 percent early in the next decade, and then to keep improving from there.”

    Another hyperscale data center owner, Google, similarly addressed its use of renewable energy and contractual agreements that help it drive down its carbon footprint. On its website the company explains, “Across Google, we’re currently using renewable energy to power over 35 percent of our operations. We’re committed to using renewable energy like wind and solar as much as possible. So why don’t we build clean energy sources right on our data centers? Unfortunately, the places with the best renewable power potential are generally not the same places where a data center can most efficiently and reliably serve its users. While our data centers operate 24/7, most renewable energy sources don’t, yet. So we need to plug into the electricity grid, and the grid isn’t currently very green. That’s why we’re working to green the electricity supply as a whole, not just for us, but for everyone.”

    Facebook-another often-cited owner/operator of hyperscale data centers-recently announced some renewable-energy initiatives associated with its under-construction facility in Clonee, County Meath, Ireland.

  6. Tomi Engdahl says:

    Modular data center design helps Oracle achieve sustainability goals
    The project’s design-collaboration approach was highly successful.

    With 2014 revenue at $38 billion and more than 130,000 employees companywide, Oracle has plans. Big plans. These included reaching their internal information technology (IT) growth targets for 2014 with a forward-looking plan to support IT growth needs for the next several years. Accordingly, Oracle moved forward with the build-out of the UCF Phase 2 Data Center space in West Jordan, Utah. The Glumac-led design team master-planned the UCF Phase 2 space to provide for 30,000 sq ft of highly reliable and highly available Tier III space.

    With more than 2 decades of experience implementing sustainable energy-, water-, and waste-management practices, Oracle has set some high sustainability goals for themselves. By 2016, Oracle targets a 10% reduction in energy use per employee and 6% improvement in power-usage effectiveness (PUE) in production data centers. The Oracle Utah Compute Facility Cell 2.1 in West Jordan is one part of this pursuit.

    The results were clear. The Oracle UCF Cell 2.1 Phase 2 project was completed on time and on budget, meeting the user’s requirements. The PUE of the space was calculated to be 1.22, which is a 24% reduction as compared with Utah industry averages of 1.60.
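The quoted 24% figure checks out against the two PUE values:

```python
utah_average_pue = 1.60
oracle_pue = 1.22
reduction = (utah_average_pue - oracle_pue) / utah_average_pue
print(f"{reduction:.0%}")  # 24% (exactly 23.75%, rounded)
```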

    Further, the low PUE will help Oracle’s original sustainability goals of achieving 6% reduction in data center PUE and will save $250,000/year as compared with a similar facility built to Utah energy code standards.

    The Oracle Utah Compute Facility Cell 2.1 demonstrates that data centers can be both sustainable and reliable.

  7. Tomi Engdahl says:

    Utilizing GaN transistors in 48V communications DC-DC converter design

    As the world’s demand for data increases seemingly out of control, a real problem occurs in the data communications systems that have to handle this traffic. Datacenters and base stations, filled with communications processing and storage handling, have already stretched their power infrastructure, cooling, and energy storage to their limits. However, as the data traffic continues to grow, higher density communications and data processing boards are installed, drawing even more power. In 2012, communications power consumption of networks and datacenters added up to 35% of the overall electricity use in the ICT sector (Figure 1). By 2017, networks and datacenters will use 50% of the ICT sector’s electricity, and consumption will continue to grow.

  8. Tomi Engdahl says:

    Urs Hölzle / The Keyword:
    Google says it will run its global operations entirely on renewable energy in 2017, claims it is the world’s largest corporate buyer of renewable power — Every year people search on Google trillions of times; every minute people upload more than 400 hours of YouTube videos.

  9. Tomi Engdahl says:

    Microsoft is going to run a data center entirely on wind power

    The company just announced that it has inked deals with two wind farms, with the aim of entirely powering its Cheyenne, Wyoming data center from renewable sources. Microsoft has reportedly contracted Bloom Wind farm in Kansas to provide 178 megawatts, and the Silver Sage and Happy Jack farms in Wyoming to provide an additional 59 megawatts.

    As noted at TheNextWeb, “Microsoft has also revealed that the site’s backup generators will be used as a ‘secondary resource’ for the local grid. This means they will actually provide energy to the local community during periods of high demand. These backup generators will burn natural gas, which, despite being a fossil fuel, is far less ecologically damaging than diesel.”

  10. Tomi Engdahl says:

    EPA begins process to improve computer server efficiency

    The U.S. Environmental Protection Agency (EPA) is aiming to improve the energy efficiency of future computer servers. A few months ago, the agency published Draft 1, Version 3 of its ENERGY STAR Computer Server Specification.

    In order to be eligible for the program, a server must meet all of the following criteria:

    Marketed and sold as a computer server
    Packaged and sold with at least one AC-DC or DC-DC power supply
    Designed for and listed as supporting one or more computer server operating systems and/or hypervisors
    Targeted to run user-installed enterprise applications
    Provide support for ECC and/or buffered memory
    Designed so all processors have access to shared system memory and are visible to a single OS or hypervisor

    Excluded products include fully fault tolerant servers, server appliances, high performance computing systems, large servers, storage products including blade storage, and network equipment.

  11. Tomi Engdahl says:

    Jacob Kastrenakes / The Verge:
    California adopts energy standards requiring idle computers to draw less power; energy commission estimates 6% of desktops, 73% of laptops meet standards — California became the first state in the US to approve energy efficiency requirements for laptops, desktops, and monitors today …

    California approves first US energy efficiency standards for computers

    California became the first state in the US to approve energy efficiency requirements for laptops, desktops, and monitors today, in a change that could ultimately impact computers’ energy efficiency across the country.

    The new standards, approved by California’s Energy Commission, require most computers to draw less power while idle. Laptops are only required to see a slight reduction in power draw, since they’re already designed to be energy efficient; the commission estimates that 73 percent of shipping laptops won’t need any sort of change.

  12. Tomi Engdahl says:

    Japan’s research institution RIKEN once again captured the top spot on the Green500 list with its Shoubu supercomputer, the most energy-efficient system in the world. With a rating of 6673.84 MFLOPS/Watt, Shoubu edged out another RIKEN system, Satsuki, the number 2 system that delivered 6195.22 MFLOPS/Watt.

    Both are “ZettaScaler” supercomputers, employing Intel Xeon processors and PEZY-SCnp manycore accelerators.

    The 3rd most energy-efficient system is China’s Sunway TaihuLight, which currently holds the number 1 spot on the TOP500 list as the world’s fastest supercomputer. It is powered solely by Sunway’s SW26010 processors and represents the first homogeneous supercomputer in the top 10 of the Green500 since a set of IBM Blue Gene/Q systems occupied six of the top 10 spots in June 2013.

    The Satsuki and TaihuLight supercomputers are the only new entries in the top 10. Overall, there are 157 new systems in the June 2016 edition of the Green500, representing nearly a third of the list.

  13. Tomi Engdahl says:

    IEEE says zero hot air in Fujitsu liquid immersion cooling for data centers

    Given the prodigious heat generated by the trillions of transistors switching on and off 24 hours a day in data centers, air conditioning has become a major operating expense. Consequently, engineers have come up with several imaginative ways to ameliorate such costs, which can amount to a third or more of a data center’s operating budget.
    One favored method is to set up hot and cold aisles of moving air through a center to achieve maximum cooling efficiency. Meanwhile, Facebook has chosen to set up a data center in Lulea, northern Sweden on the fringe of the Arctic Circle to take advantage of the natural cold conditions there; and Microsoft engineers have seriously proposed putting server farms under water.

    Fujitsu, on the other hand, is preparing to launch a less exotic solution: a liquid immersion cooling system it says will usher in a “next generation of ultra-dense data centers.”

    Fujitsu Liquid Immersion Not All Hot Air When It Comes to Cooling Data Centers

  14. Tomi Engdahl says:

    GaN Technology: A Lean, Green (Power) Machine

    Sponsored by: Texas Instruments. The devil is in the details when designing a Titanium-grade power supply with gallium-nitride technology, from driver circuits and new power design topologies to digital control schemes and new product qualification tests.

    Electricity is the world’s fastest-growing form of end-use energy consumption. The U.S. Energy Information Administration (EIA) estimates that worldwide electricity generation will grow to 36.5 trillion kilowatt-hours by 2040, a 69% increase from 2012, driven by rising incomes in China, India, and other emerging Asian economies. Electricity generation in the U.S. will grow 24% by 2040—about 1% annually.

    Houston, we have a problem: the EIA also estimates that some 6% of electricity generated in the U.S. goes to waste in supply and disposition—more than 14 million megawatt-hours annually at current rates of consumption. Reducing just a portion of this waste through efficiency improvements could make it possible to slow the growth of demand and accelerate the closing of inefficient and polluting coal-fired power plants.

    As a result, governments and regulatory agencies worldwide are moving to implement standards for energy efficiency.

    The 80 Plus standards, now part of Energy Star in the U.S., cover computer power supplies.

    The latest Titanium standard requires efficiency as high as 96% from AC input to DC output.

    Meeting these new standards requires rethinking every building block in a power supply, and GaN technology is playing an increasing role.

  15. Tomi Engdahl says:

    3 end-user segments drive global green data center market

    A new market study released by Technavio forecasts the global green data center market to reach USD $55 billion by 2021, growing at a CAGR of almost 14 percent. The technology analyst’s new report, “Global Green Data Center Market 2017-2021,” states that the market is witnessing most growth through the construction of data centers by cloud service providers (CSPs), colocation service providers, and telecommunication providers globally.
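For a sense of scale, a CAGR and an end-point pin down the implied starting value. A sketch assuming a 2016 base year and exactly 14 percent growth (both are my assumptions; the summary states neither):

```python
def implied_base(future_value: float, cagr: float, years: int) -> float:
    """Starting value consistent with a future value after compound growth."""
    return future_value / (1.0 + cagr) ** years

# $55B in 2021 at 14% CAGR over five years implies a base of roughly $28.6B
# (illustrative only, given the assumptions above).
base = implied_base(55e9, 0.14, 5)
print(base / 1e9)
```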

    “Currently, there are many enterprises and CSPs that are involved in colocation spaces rather than constructing their own facilities to address demands immediately,” adds the firm. “CSPs and ISPs, such as Facebook, are the major contributors to the green data center market, which will continue throughout the forecast period.”

    In the report, Technavio’s ICT research analysts categorize the global green data center market into the following end-user segments: IT infrastructure; power solutions; general construction; cooling solutions; and monitoring and management. The top three segments are discussed in the report’s summary as follows.

    IT infrastructure: Technavio contends that “digitalization has enabled several organizations to adopt cloud-based services for their businesses. By 2020, it is expected that 90 percent of small and medium enterprises will operate their businesses through cloud storage either by colocating their infrastructure or by adopting cloud offerings by the major CSPs in the market.”

    Power solutions: The firm expects the global green data center market “will witness a significant growth in revenue because of the increasing concerns regarding the costs incurred due to increased power consumption and wastage of power in data center operations. There is more focus on reducing the environmental impact of the data center facilities along with power consumption.”

    General construction: Technavio expects the general construction market will grow along with the growth in brick-and-mortar facility and modular data center construction projects worldwide. “Increased power consumption and carbon emission have resulted in the construction of eco-friendly data centers. Most of these data centers are being constructed in remote areas,” adds Sharma.

  16. Tomi Engdahl says:

    Making Energy-Efficient ICs Energy Efficiently

    Semiconductor manufacturers are able to make an increasingly important contribution to ensuring that end products use the minimal amount of energy and are efficient.

    There is significant global focus on energy efficiency, and we are all encouraged to use less energy and make energy-efficient choices, whether it be a new washing machine or considering the overall energy efficiency of the buildings in which we live and work.

    Semiconductor manufacturers are able to make an increasingly important contribution to ensuring that end products from cars to vacuum cleaners and laptops to factory automation equipment use the minimal amount of energy and are efficient. Indeed, ON Semiconductor’s mantra is “energy-efficient innovations.”

    To put it in perspective, worldwide energy consumption was over 20 petawatt-hours (PWh) in 2015; that’s equivalent to roughly $2.4 trillion worth of electricity. Of the energy consumed, around 50 percent was by electric-motor-driven systems, which are, in turn, controlled, managed and regulated by semiconductor devices.

    Our increasingly “electrified” world means that, despite all of the products we use becoming more efficient, the net requirement for power is still on the increase; in fact, some estimates suggest that demand will have grown by 55 percent between 2005 and 2030.

    Electronics technology in general, and semiconductors most notably, has been a great enabler in recent years for making existing iterations of everything from notebook computers to washing machines more frugal when it comes to their power requirements and, in so doing, placing less demand on the grid and, therefore, power generation itself.

    But it mustn’t be overlooked that the actual process of making semiconductors can be extremely resource-hungry.

  17. Tomi Engdahl says:

    Green data center market: Opportunities and forecast, 2016-2023

    The green data center is a warehouse for storage, administration, and distribution of data in which electrical and computer systems are used to minimize power and carbon footprint. The construction and operation of a green data center include progressive technologies and strategies that help IT organizations to reduce environmental impact by gauging, scheduling, and implementing initiatives around the data center environment.

    Green Data Center Market – Opportunities and Forecast, 2016-2023

  18. Tomi Engdahl says:

    major market players such as Digital Realty Trust, Inc., IBM Corporation, Hitachi Ltd., Cisco System, Inc., Hewlett-Packard Inc., DuPont Fabros Technology, CyrusOne, Eaton Corporation, Dell Inc., and EMC Corporation

  19. Tomi Engdahl says:

    First self-powered data center opens

    Aruba S.p.A. operates its zero-impact data center using a ‘river of energy’ hydroelectric plant, solar panels, and cold underground water for cooling.

    What does it take to open the world’s first self-powered data center? For Aruba S.p.A., it involved three elements:

    Flowing river water
    Photovoltaic solar panels
    Consistently cold underground water, pumped to the surface, as the principal cooling source

    Aruba’s newest data center, named the Global Cloud Data Center (IT3) is located near Milan, Italy, and claims to be 100 percent green. The 49-acre ANSI/TIA-942 Rating 4 standard facility (at 200,000 square meters) opened earlier this month.

    Low-impact credentials at the site come largely because the data center has its own dedicated hydroelectric plant.

    The system, along with the power from solar panels, can produce up to 90 MW of power. The “river of energy” flows “more or less” constantly, Aruba says.

    Geothermal cooling

    The company says cooling at the facility is also zero-impact.

    “Using groundwater as the main cooling energy source enables us to reduce energy waste,” the company explains of its geothermal system on its website.

    That’s in part because underground water at the site remains at 48 degrees Fahrenheit throughout the year. That cold water is pumped up from the ground, used to cool the data halls via heat exchangers, and then returned back into the earth. This closed loop, the firm says, avoids environmental impact.

    Other eco-friendly techniques are in operation, too: Distinct ducts in the server rack design aid efficiency by targeting underground-cooled air onto the parts of the rack that need cooling the most. Also, double insulation with a defrost system is used in the data room construction.

    The largest, state-of-the-art data center campus in Italy

    The Global Cloud Data Center is a data center campus with a surface area of 200,000m2 in Ponte San Pietro (BG), just a few minutes from Milan. All the systems have been designed and built to meet and exceed the highest levels of resilience set by ANSI/TIA 942-A Rating 4 (formerly Tier 4).

    A surface area of 90,000m2 dedicated to the data center in a total area of 200,000m2
    Maximum logical and physical security, with armed guards 24/7 and 7 different security perimeters
    Up to 90MW of power, with self-produced hydroelectric and photovoltaic energy
    Double multi-modular power center with UPS boasting 2N + 1 redundancy
    Made-to-measure power of up to 40kW per rack
    Redundant emergency generators with 48-hour full-load autonomy without refuelling
    Data hall made entirely of firewalls and ceiling with double insulation
    Carrier neutral data center with optional managed connectivity
    Made-to-measure colocation solutions: from rack units to a dedicated data center
    Storage and office space available to customers

  20. Tomi Engdahl says:

    Space-radiated cooling cuts power use 21%

    Radiative sky cooling sends heat from buildings out into space, chilling them without conventional refrigeration. Electricity use ultimately will be slashed compared to traditional air conditioning, scientists say.

    Using the sky as a free heat sink could be a solution to an impending energy crunch caused by increased data use. More data generated in the future will require ever more electricity-intensive cooling as data centers get bigger.

    Researchers at Stanford University think they have a solution to cooling creep. They say the way to reel in the cost of getting buildings cold enough for all the servers is to augment land-based air conditioning by sending excess heat into space and chilling it there.

    The scientists say cost savings will be on the order of 21 percent through a system they’ve been working on, and up to 70 percent, theoretically, by combining the kit with other, newer radiant systems, according to an article in IEEE Spectrum.

    Efficient Air-Conditioning Beams Heat Into Space

    The Stanford team’s passive cooling system chills water by a few degrees with the help of radiative panels that absorb heat and beam it directly into outer space. This requires minimal electricity and no water evaporation, saving both energy and water. The researchers want to use these fluid-cooling panels to cool off AC condensers.

    They first reported their passive radiative cooling idea in 2014. In the new work reported in Nature Energy, they’ve taken the next step with a practical system that chills water. They’ve also established a startup, SkyCool Systems, to commercialize the technology.

    The team tested it on a rooftop on the Stanford campus. Over three days of testing, they found that water temperatures dropped by 3 to 5 °C. The only electricity it requires is what’s needed to pump water through the copper pipes. Water that flowed more slowly was cooled more.

    New radiant cooling systems, which use chilled water running through aluminum panels or pipes, are getting more common in Europe and China and in high-efficiency buildings in the U.S., says Raman. “If we could couple our system with such radiant cooling systems, we could get 70 percent efficiency savings.”
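    As a sanity check on figures like these, the cooling power of such a water loop can be read straight from the temperature drop: Q = m_dot * cp * dT. A minimal sketch; the flow rate below is an assumed illustrative value, not a figure from the Stanford paper.

```python
# Back-of-the-envelope cooling power delivered by water that leaves a
# radiative panel a few degrees colder than it entered.
# The flow rate here is an assumption for illustration only.

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def panel_cooling_power_w(flow_kg_per_s: float, delta_t_c: float) -> float:
    """Heat removed from the water stream: Q = m_dot * cp * dT."""
    return flow_kg_per_s * CP_WATER * delta_t_c

# Example: 0.02 kg/s (~1.2 L/min) of water cooled by 4 C
q = panel_cooling_power_w(0.02, 4.0)
print(f"{q:.0f} W of cooling")
```

    At these assumed values the loop moves a few hundred watts, which is why the researchers propose ganging many panels together to pre-cool AC condenser water.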

  21. Tomi Engdahl says:

    The Environmental Cost of Internet Porn

    So many people watch porn online that the industry’s carbon footprint might be worse now than it was in the days of DVDs and magazines.

    Online streaming is a win for the environment. Streaming music eliminates all that physical material—CDs, jewel cases, cellophane, shipping boxes, fuel—and can reduce carbon-dioxide emissions by 40 percent or more. Video streaming is still being studied, but the carbon footprint should similarly be much lower than that of DVDs.

    Scientists who analyze the environmental impact of the internet tout the benefits of this “dematerialization,” observing that energy use and carbon-dioxide emissions will drop as media increasingly can be delivered over the internet. But this theory might have a major exception: porn.

    Is pornography in the digital era leaving a larger carbon footprint than it did during the days of magazines and videos?

    But if pornography experts’ estimates are accurate, they suggest a rare scenario where digitization might have increased the overall consumption of porn so much that the principle of dematerialization gets flipped on its head. The internet could allow people to spend so much time looking at porn that it’s actually worse for the environment.

  22. Tomi Engdahl says:

    DCD>Energy Smart highlights EU data center heat recovery

    DCD’s event at the Stockholm Brewery Conference Center in March 2018, will help the digital infrastructure industry deal with rising energy demands.

    Recycling generates profits

    “Through the adoption of various energy efficiency measures, the data center industry together with the energy utilities can build scalable, flexible, and green data centers which are dynamic in their infrastructure,” says Jan Sjögren, head of global ICT centers building operations at Ericsson who will be speaking at the event. “There is a great opportunity for the data center to recycle their waste heat, where we can potentially save on energy cost whilst generating profits as producers.”

    Across Europe, cities such as Amsterdam, Paris, Odense, Dresden and Stockholm are betting big on this approach, creating a new business model for the tech industry worldwide.

    “Rising energy consumption is of great concern to the data center industry. The trend towards utilizing clean energy will redefine future data center location strategies. With the rise of edge computing, we will see a network of distributed compute. Edge compute close to dense population areas, and large data centers in close proximity to power plants, with re-use of energy, will increasingly benefit operators,” says Tor Björn Minde, CEO at RISE SICS North AB, who will be speaking on this topic at the event.

    “Data centers seek solutions to increase energy efficiency and lower cost”

  23. Tomi Engdahl says:

    Formal commissioning event lauds utility-scale 179 MW solar power plant delivering 100% renewable energy to Switch data centers in Nevada

    The Switch Station 1 and Switch Station 2 solar power plants, with a combined generation capacity of 179 Megawatts, were formally recognized in a commissioning event held in Apex, Nevada last Dec. 11 as fully commissioned and in commercial operation. The celebration was attended by government officials, project owners and the energy offtaker.

    U.S. Senator Harry Reid, U.S Bureau of Land Management Nevada Director, John Ruhs, Clark County Commissioner Chairman Steve Sisolak, Nevada State Energy Office Director Angela Dykema and other federal, state and local leaders joined executives from Switch, EDF Renewable Energy, NV Energy, J.P. Morgan and First Solar, Inc. (NASDAQ: FSLR) to ceremonially “throw the switch” marking the delivery of solar power to Switch data centers in Las Vegas and Reno.

    Following an expedited permitting process, construction of the project took approximately 12 months, creating about 550,000 workhours, and had a total construction workforce of 1,300. Combined, the power plants cover about 1,797 acres and are comprised of 1,980,840 solar panels, the equivalent of 275 football fields, and 5,450,056 feet of cable, equaling 1,032 miles or the distance from Las Vegas to Seattle. The 179 MW of power generates enough clean solar energy to meet the consumption of 46,000 homes, displacing approximately 265,000 metric tons of carbon dioxide (CO2) annually, equal to taking about 52,000 cars off the road.
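    The “46,000 homes” claim is easy to sanity-check from the plant’s nameplate capacity. A quick sketch; the capacity factor and per-home consumption below are our own assumed round numbers, not figures from the press release.

```python
# Annual energy from a 179 MW solar plant at an assumed capacity
# factor, divided by an assumed average household consumption.
# Both assumptions are illustrative, not from the announcement.

PLANT_MW = 179
CAPACITY_FACTOR = 0.27      # assumed; typical for utility solar in Nevada
HOME_MWH_PER_YEAR = 9.0     # assumed average U.S. household use

annual_mwh = PLANT_MW * CAPACITY_FACTOR * 8760  # 8760 hours per year
homes_served = annual_mwh / HOME_MWH_PER_YEAR
print(f"{annual_mwh:,.0f} MWh/year -> ~{homes_served:,.0f} homes")
```

    The result lands in the mid-40,000s, consistent with the announced figure.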

    “Less than a decade ago, Nevada’s solar energy landscape was nonexistent”

  24. Tomi Engdahl says:

    How ‘wasteful’ data centers can become energy producers

    Turns out recovering “waste” heat from servers in data centers opens the door to innovation and efficiency. Writing in Consulting-Specifying Engineer, Bill Kosik of U.S. Services (Chicago) provides an overview of heat-recovery systems for commercial and institutional buildings, and offers discussion on the issues pertaining to design parameters, efficiency, and opportunities to recycle energy.

    The article discusses heat recovery from data centers, specifically demonstrating how data centers with water-cooled computer equipment will create highly effective energy-recycling processes.

    Can data centers become energy producers?
    Recovering “waste” heat from servers in data centers opens the door to innovation and efficiency.

    Research on heat recovery

    The U.S. Department of Energy (DOE) published a paper on the impacts of using heat recovery in all kinds of applications, primarily in the manufacturing sector. The DOE classified the different processes into three temperature groups: low (below 450°F), medium (450°F to 1,200°F), and high (above 1,200°F). The water temperatures in a data center will fall into the “low” category.

    While it might seem that applications in the “low” category will be less beneficial than those in the other categories, the DOE reports that applications in the 77° to 300°F temperature range represent nearly 80% of the total estimated waste heat. (Cooling water exiting water-cooled computers will generally range from 100° to 150°F.) Based on this, it is possible that the HVAC industry will see greater opportunity and develop products and approaches that make heat recovery in the data center more viable. Also, it is encouraging that the average simple payback for 19 sample heat-recovery projects is 1.5 years.
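    The “simple payback” the DOE cites is just capital cost divided by annual savings, which makes it a convenient first screen for heat-recovery proposals. A one-line helper; the example dollar figures are illustrative, not from the DOE paper.

```python
# Simple payback period: years until cumulative savings repay the
# capital cost. Example figures below are invented for illustration.

def simple_payback_years(capital_cost: float, annual_savings: float) -> float:
    return capital_cost / annual_savings

# e.g. a $150,000 heat-recovery retrofit saving $100,000/year in energy
print(simple_payback_years(150_000, 100_000))  # -> 1.5
```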

    Types of systems

    There are many different methods of designing heat-recovery systems for commercial and institutional buildings. These methods cover a large variety of building types and HVAC system configurations. But as a starting point, the following examples demonstrate the core principles of energy-recovery systems:

    Runaround coil loop: This moves heat from the exhaust airstream to the supply airstream using two heat exchangers (finned coils) and a heat-transfer fluid, usually water or glycol. A pump is required to move the fluid through the system. A runaround coil loop is best used in situations when the airstreams are not adjacent to each other. If the temperature of the fluid heated by the exhaust air is greater than the outdoor air, energy can be recovered. However, to make this system financially feasible, the warm-water temperature must be considerably greater than the supply-air temperature. And long runs of piping add to material, installation, and energy costs (increased pump power). Generally, the economics of this system are more favorable for facilities located in colder climates that have considerable amounts of constant exhaust air.
    Packaged energy-recovery systems: In commercial and institutional buildings, packaged energy-recovery systems are the simplest and most effective heat-reclaim solution. Many manufacturers offer products that are designed specifically to work with typical air conditioning and ventilation loads for office environments, schools, libraries, etc. One common way of implementing this type of energy-recovery system is to use air handling units with integrated heat wheels, thermosyphons, air-to-air heat exchangers, or evaporatively cooled air-to-air heat exchangers. Each one of these configurations will have different heat-transfer and efficiency characteristics and must be analyzed based on the actual loads, climate, space available, maintenance characteristics, etc.
    Total energy wheel: Most energy-recovery devices transfer heat (sensible) energy only. An enthalpy wheel, or total energy wheel, exchanges both heat (sensible) energy and moisture (latent) energy between the supply and return airstreams. This type of wheel can be coated with a desiccant (moisture-absorbing) material and is rotated between the incoming fresh air and the return air. Heat and moisture in the return air are transferred to the wheel. During cold, dry outdoor conditions, the outside air passes over the rotating wheel and is pre-conditioned as the heat and moisture are transferred to the airstream. In the cooling mode, the outside air is precooled and its moisture content is lowered. These processes reduce the amount of energy required by gas-fired heating equipment and compressorized cooling equipment.
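    For a runaround coil loop like the first example above, the recoverable heat can be estimated with a standard effectiveness model: Q = eff * m_dot * cp * (T_exhaust - T_outdoor), with no recovery once outdoor air is warmer than the exhaust. A sketch with assumed values; a real selection needs coil performance data and climate bins.

```python
# Effectiveness-based estimate of runaround-loop heat recovery.
# All numeric inputs in the example are assumed, illustrative values.

CP_AIR = 1006.0  # specific heat of air, J/(kg*K)

def recovered_heat_w(effectiveness: float, airflow_kg_s: float,
                     t_exhaust_c: float, t_outdoor_c: float) -> float:
    """Heat moved from exhaust to supply air; zero when outdoor is warmer."""
    return max(0.0, effectiveness * airflow_kg_s * CP_AIR
               * (t_exhaust_c - t_outdoor_c))

# 5 kg/s of 24 C exhaust air, -5 C outdoor air, 50% loop effectiveness
q = recovered_heat_w(0.5, 5.0, 24.0, -5.0)
print(f"{q / 1000:.1f} kW recovered")
```

    The zero-clamp reflects the text’s point that recovery only pays when the exhaust-heated fluid is warmer than the outdoor air, which is why the economics favor cold climates with constant exhaust.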

    Energy recovery in data centers

    Air-based energy recovery: Data centers that do not use water-cooled computers can still benefit from recovering energy. When outdoor air is below about 75°F, very little (if any) mechanical cooling energy is needed.

    Water-based heat recovery: Many of the energy-recovery techniques discussed so far (both air- and water-based) are applicable to air handling systems using exhaust/return- and supply-air arrangements. However, depending on the facility type, it also is possible to recover energy from other types of processes that use water to cool the internal components of equipment and then transfer the heat to another process. Historically, instead of being recovered, this heat has typically been rejected to the outside, losing recoverable energy.

    In data centers, water-cooled computers enable hydronic heat-recovery options. There are several options for cooling computers with water (some of these solutions also work with refrigerant in lieu of water):

    IT cabinets with integral chilled-water fan coil units (side cars).
    Rear-door chilled-water heat exchangers, with or without fans, mounted on the back of the IT cabinet.
    An IT cabinet and servers that have built-in thermal planes. When the servers are installed into the cabinet, the planes contact one another, enabling heat transfer from the server to the circulating cooling water.
    Internal heat sinks mounted directly to the server’s central processing unit (CPU), dual in-line memory modules (DIMM), and the graphics processing unit (GPU). The heat sinks are cooled directly by the water, without additional intermediate heat exchangers. Since the cooling water contacts the sources of heat directly (via heat sinks) the process is very efficient.
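    Whichever of these options is used, the water loop is sized the same way: required flow is m_dot = Q / (cp * dT). A minimal sketch; the rack power and temperature rise are assumed example values, not vendor specifications.

```python
# Rough sizing of the water loop for one water-cooled IT cabinet.
# Rack power and loop temperature rise are illustrative assumptions.

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def required_flow_l_per_min(rack_kw: float, delta_t_c: float) -> float:
    """Water flow needed to carry rack_kw of heat at a given temperature rise."""
    kg_per_s = rack_kw * 1000.0 / (CP_WATER * delta_t_c)
    return kg_per_s * 60.0  # ~1 kg of water per litre

# A 30 kW rack with a 10 C rise across the rear-door heat exchanger
print(f"{required_flow_l_per_min(30.0, 10.0):.1f} L/min")
```

    Around 43 L/min per 30 kW rack at a 10 °C rise; a warmer return (larger dT) cuts the flow and pump power, which is one attraction of direct-to-chip heat sinks.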

  25. Tomi Engdahl says:

    Renewable energy in data centers on the rise: IHS Markit

    IHS Markit’s Maggie Shillington, cloud and data centers analyst, says that data centers account for between 2% and 3% of developed countries’ electricity consumption, with the electricity required for cooling being the most significant operational cost for most data centers. Many data center operators are turning to renewable energy sources to meet such needs, she says in a new report.

    Shillington notes that although onsite generation in data centers, including wind and solar power, is among the most popular renewable energy methods, offsite renewable energy sources, such as renewable energy suppliers and utility companies, are the most straightforward way to attain renewable energy for data centers. By eliminating the large upfront capital expense of producing onsite renewable energy, offsite generation also removes the geographical limitations of onsite approaches.

  26. Tomi Engdahl says:

    Can the world’s hugest data centers really be made more efficient?

    The gigantic data centers that power the internet consume vast amounts of electricity and emit 3 percent of global CO2 emissions. To change that, data companies need to turn to clean energy sources and dramatically improve energy efficiency.

    We are often told that the world’s economy is dematerializing – that physical analog stuff is being replaced by digital data, and that this data has minimal ecological footprint. But not so fast. If the global IT industry were a country, only China and the United States would contribute more to climate change, according to a Greenpeace report investigating “the race to build a green internet,” published last year.

    Energy Hogs: Can World’s Huge Data Centers Be Made More Efficient?

    The gigantic data centers that power the internet consume vast amounts of electricity and emit as much CO2 as the airline industry. To change that, data companies need to turn to clean energy sources and dramatically improve energy efficiency.

  27. Tomi Engdahl says:

    What is Your Data Center Doing to Protect the Earth?

    Organizations that prioritize a green data center strategy, from the start, will be the real winners long term – in business and sustainability.

    As Earth Day approaches, rising energy costs and consumption demands for computing are top of mind for the data center industry. Based on current estimates, data centers in the U.S. alone are projected to consume approximately 73 billion kWh in 2020. All the while, artificial intelligence implementations are increasing, and these technologies demand higher-powered devices to support their massive workloads. Data center efficiency and sustainability pose a universal challenge that transcends companies, geographies and workloads – and there’s no simple solution.

    While data center performance demands continue to rise exponentially, data center operational cost budgets have remained flat or shrunk. This leads to a need for packing more performance into a fixed power budget. So what can we do to maximize budgets while minimizing the environmental impact?

    Be Conscious Purchasers and Producers

    The United Nations estimates that there will be a global e-waste output of 50 million metric tons in 2018. Imagine all the old server racks, wires and fans that become waste. Doing more for the environment requires planning. I believe organizations that prioritize a green data center strategy, from the start, will be the real winners long term – in business and sustainability.

    Development teams should use recycled plastics in production whenever possible – for example, in the air ducts used on servers – and should have repair, reuse and recycling centers to minimize our industry’s impact on the environment.

    Sustainable Operations Begin with Sustainable Design

    Heat is a major challenge we face in the industry. Improved cooling efficiency is therefore essential for reducing the energy a data center spends removing system heat. High performance does not always mean more power: cooling technology is one example of achieving higher performance with lower power. Water cooling, for instance, can reduce data center energy costs by 40 percent. Direct water cooling removes up to 90 percent of system heat from the rack, keeping processors up to 20°C cooler. This enables processors in those systems to continually run in “turbo” mode, greatly increasing system performance. The workload demand on existing data centers keeps growing, along with a need for more density. This increases cooling costs, so we as an industry need to make denser products that take up less space.

    Additionally, there must be a low power burden on system fans and moderate acoustics. Conventional computing environments are designed to support racks that need ~10-15kW of power. Those racks can generally be cooled with conventional air cooling techniques. What we’re seeing with data intensive machine learning computing, and dense high performance computing, are racks that require ~30kW (and higher). In many instances, this level of power requires some form of liquid assistance to cool.
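    The rack-density thresholds in this passage (roughly 10-15 kW for conventional air cooling, ~30 kW and up needing liquid assistance) can be captured in a small planning helper. The cut-offs are the article’s rough figures, not any formal standard, and the middle tier is our own interpolation.

```python
# Rough rack-cooling triage based on the density figures quoted in
# the passage. Thresholds are approximate, not a standard.

def cooling_approach(rack_kw: float) -> str:
    if rack_kw <= 15:
        return "conventional air cooling"
    if rack_kw < 30:
        return "enhanced air cooling (containment, higher airflow)"
    return "liquid-assisted cooling"

print(cooling_approach(12))   # a typical enterprise rack
print(cooling_approach(35))   # a dense machine-learning rack
```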

  28. Tomi Engdahl says:

    Google endorses Clean Power Plan ahead of expected repeal

    Google has joined Apple in a growing chorus of tech giants coming out in support of the Clean Power Plan. The company filed a statement with the Environmental Protection Agency, which it has since shared with TechCrunch, supporting the Obama-era legislation.

    The legislation, which sought to curb power plant emissions by more than 30 percent by 2030, is expected to be repealed by the Trump administration. As with Apple’s earlier filing, Google cites both environmental and economic fallout, should the policy be repealed.

    “Wind and solar deployment—as well as the associated supply chains—have been among the fastest-growing sectors of the U.S. economy in recent years,”

  29. Tomi Engdahl says:

    Cryptocurrency’s Estimated Draw on World Resources Could Power Bangladesh

    Is the phenomenon of cryptocurrency sustainable? A number of factors come into play here, but as it expands, power consumption and requisite cost become the main issues.

    When it comes to mining cryptocurrency and how much power is consumed during mining operations globally, nobody knows for sure. However, based on varied utility costs alone, it’s estimated at 53.99 TWh, or enough energy to power the country of Bangladesh annually, according to the Digiconomist Bitcoin Energy Consumption Index—a website dedicated to providing in-depth analysis regarding cryptocurrencies.

    Cryptocurrencies are digital assets (currencies) designed for use as a medium of exchange in the same fashion as traditional assets or money. However, they use cryptography to secure transactions, control the creation of additional units, and verify asset transfers. Most, such as Bitcoin and other altcoins, are decentralized, with control distributed through blockchains. Blockchains are continuously growing lists of records that form blocks (i.e., a ledger), each containing a cryptographic hash of the previous block, which makes it tough to alter the data and therefore makes the digital currency secure.
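    A crude lower bound on mining energy can be computed from network hashrate times the energy per hash of efficient hardware. The sketch below uses illustrative 2018-era assumptions (roughly 25 EH/s network hashrate and Antminer S9-class efficiency of ~98 J/TH); it is not Digiconomist’s actual model, which is driven by miner economics rather than hardware specs.

```python
# Lower-bound estimate of Bitcoin network energy use from hashrate
# and per-hash energy. Both input figures are illustrative assumptions.

def network_twh_per_year(hashrate_th_s: float, joules_per_th: float) -> float:
    """Continuous power (W) times seconds/year, converted to TWh/year."""
    watts = hashrate_th_s * joules_per_th
    return watts * 8760 * 3600 / 3.6e15  # 1 TWh = 3.6e15 J

# ~25 EH/s = 25e6 TH/s; ~98 J/TH for efficient 2018-era ASICs
print(f"{network_twh_per_year(25e6, 98):.1f} TWh/year lower bound")
```

    This hardware-only floor comes out near 21 TWh/year, well below the 53.99 TWh economic estimate quoted above, which also accounts for older, less efficient rigs and overhead.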

  30. Tomi Engdahl says:

    Power Stamps Poised to Boost Data-Center Productivity

    Next-generation data centers will employ processors, memory, and supporting circuits that require more power for their servers. Enter the Power Stamp Alliance’s proposed higher-power modules.

    Power Stamps are 48-V direct dc-dc conversion-to-POL modules that can provide the higher power density required for future data centers. The new Power Stamp Alliance (PSA) specifies a standard Power Stamp footprint and functions that deliver multiple-sourced, standard modular board-mounted solutions for data centers (Fig. 1). The Founding Members of the Power Stamp Alliance are Artesyn Embedded Technologies, Bel Power Solutions, Flex, and STMicroelectronics.

    These Power Stamps primarily target high-performance computers and servers being used in large data centers, many of which follow the principles of the Open Compute Project (OCP). OCP’s mission is to design and enable the delivery of the most efficient server, storage, and data-center hardware designs for scalable computing.

    The first processor architectures addressed by the Power Stamp Alliance include the Intel VR13 Skylake CPUs, Intel VR13-HC Ice Lake CPUs, DDR4 memories, IBM POWER9 (P9) processors, and high-current ASIC and/or FPGA chipsets supporting the SVI or AVS protocols.

    Serial VID Interface (SVI) is a two-wire (clock and data) bus that connects a single master (processor) to one or more slaves (voltage regulators). Adaptive voltage scaling (AVS) is a closed-loop, dynamic-power-minimization technique that reduces power based on the actual operating conditions of the chip; i.e., the power consumption is continuously adjusted during the run time of the chip.
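    The AVS behavior described here is a closed control loop: the regulator steps the supply voltage down while the chip reports healthy operating margin and steps it back up when margin is lost. A toy model of that loop; the voltage limits, step size, and margin signal are invented for the sketch, and real SVI/AVS traffic is a register-level bus protocol, not Python.

```python
# Toy adaptive-voltage-scaling loop: seek the lowest supply voltage
# at which the chip still reports healthy margin. All parameters are
# illustrative assumptions, not values from the SVI/AVS specs.

def avs_step(v_now: float, margin_ok: bool,
             v_min: float = 0.60, v_max: float = 1.00,
             step: float = 0.005) -> float:
    """One control iteration: lower V while margin holds, raise it back if not."""
    if margin_ok:
        return max(v_min, v_now - step)
    return min(v_max, v_now + step)

v = 0.90
for healthy in (True, True, True, False):  # simulated margin-sensor readings
    v = avs_step(v, healthy)
print(f"{v:.3f} V")  # settles just above the point where margin failed
```

    The power saving comes from dynamic power scaling roughly with V², so even a few tens of millivolts shaved off the supply matters at data-center scale.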

    The Alliance means that modules combining DOSA (Distributed-power Open Standards Alliance) and POLA (Point of Load Alliance) standards will no longer be single-sourced. The PSA is similar to both DOSA and POLA: DOSA products share common mechanical pinouts and footprints, and POLA products share common silicon.

  31. Tomi Engdahl says:

    Our phones and gadgets are now endangering the planet

    The energy used in our digital consumption is set to have a bigger impact on global warming than the entire aviation industry

    About 70% of the world’s online traffic is reckoned to pass through Loudoun County.

    the county is the home of data centres used by about 3,000 tech companies

    According to a 2017 Greenpeace report, only 1% of Dominion’s total electricity comes from credibly renewable sources: 2% originates in hydroelectric plants, and the rest is split evenly between coal, gas and nuclear power.

    a study in Japan that suggests that by 2030, the power requirements of digital services will outstrip the nation’s entire current generation capacity. He quotes an American report from 2013 – ironically enough, commissioned by coal industry lobbyists – that pointed out that using either a tablet or smartphone to wirelessly watch an hour of video a week used roughly the same amount of electricity (largely consumed at the data-centre end of the process) as two new domestic fridges.

    data centres are set to soon have a bigger carbon footprint than the entire aviation industry.

    online currency Bitcoin – which, at the height of the speculative frenzies earlier this year, was set to produce an annual amount of carbon dioxide equivalent to 1m transatlantic flights.

    And he’s anxious about what will happen next: “In response to vast increases in data storage and computational capacity in the last decade, the amount of energy used by data centres has doubled every four years, and is expected to triple in the next 10 years.”

    These changes are partly being driven by the so-called internet of things: the increasing array of everyday devices – from TVs, through domestic security devices, to lighting systems, and countless modes of transport – that constantly emit and receive data.

    But there is some good news. Whatever its other ethical contortions, Silicon Valley has an environmental conscience. Facebook has pledged to, sooner or later, power its operations using “100% clean and renewable energy”. Google says it has already achieved that goal. So does Apple.

    And among the big tech corporations, there is one big focus of worry: Amazon
    details of AWS’s electricity consumption and its carbon footprint remain under wraps

    “Among emerging Chinese internet giants such as Baidu, Tencent and Alibaba, the silence on energy performance still remains. Neither the public nor customers are able to obtain any information about their electricity use and CO2 target.”

    projections that the entire communication technology industry could account for up to 14% of carbon emissions by 2040, one stark fact remains: the vast majority of electricity used in the world’s data centres comes from non-renewable sources

    when the smartphone in your pocket starts to suddenly heat up: a metaphor for our warming planet

  32. Tomi Engdahl says:

    Data Center Power Poised To Rise

    The shift to the cloud model has kept power consumption in check, but that benefit may have run its course.

    The big power-saving effort that kept U.S. data-center power consumption low for the past decade may not keep the lid on much longer.

    Faced with the possibility that data centers would consume a disastrously large percentage of the world’s power supply, data center owners and players in the computer, semiconductor, power and cooling industries ramped up efforts to improve the efficiency of every aspect of data-center technology. The collective effort was so successful that overall data-center energy consumption rose only from 1.5% of all power used in the U.S. in 2007 to 1.8% in 2016, despite enormous growth in the number of data centers, servers, users and devices involved, according to a 2016 report from the U.S. Dept. of Energy’s Lawrence Berkeley National Laboratory (LBNL).
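    Those LBNL figures imply the data-center share of U.S. electricity grew only modestly over the decade. The implied compound annual growth rate of that share is a quick calculation:

```python
# Compound annual growth of the data-center share of U.S. electricity,
# from the LBNL figures quoted above (1.5% in 2007 to 1.8% in 2016).

share_2007, share_2016, years = 1.5, 1.8, 9
cagr = (share_2016 / share_2007) ** (1 / years) - 1
print(f"{cagr * 100:.1f}% per year")
```

    About 2% a year in share terms, a remarkably flat curve given the growth in servers and traffic over the same period.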

  33. Tomi Engdahl says:

    Tackling data center energy use

    ASHRAE Standard 90.4: Energy Standard for Data Centers guides engineers in designing mechanical and electrical systems in data centers.

    In 1999, a prescient article authored by Forbes columnist Peter Huber titled “Dig More Coal—the PCs Are Coming” focused on energy use attributable to the internet. It was one of the first times the topic of data center energy use appeared in a major mainstream publication.

    Nearly 20 years later, the industry is still on that quest, diligently working on tactics and strategies to curb energy use while maintaining data center performance and reliability. During this time, dozens of cooling and power system designs have been developed to shrink electricity bills through energy efficiency improvements. Building on these advances, manufacturers are now producing equipment purpose-built for use in the data center. Previously, HVAC engineers were mostly limited to designing around standard commercial cooling equipment, which generally could not meet the demands of a critical facility.

    As the industry matured (domestically and globally), efforts ramped up to reduce energy use in data centers and other technology-intensive facilities. These programs, while different in scope and detail, all had a similar goal: develop noncompulsory, consensus-driven best practices and guidelines on optimizing data center energy efficiency and resource use. It was truly a watershed moment as these programs manifested into actual design and reference documentation, providing vital access to data on design and operations; prior to this time, finding consistent, verifiable instruction on how to improve data center energy efficiency was not easy. Today, worldwide, these documents are numerous and come from diverse sources.

    When organizations such as ASHRAE are developing official standards, the current state of the industry is certainly taken into consideration in an attempt to avoid releasing language that is overly stringent (or too lax), possibly resulting in unintended outcomes.

    Tackling energy use: Looking ahead

    As hyperscale data centers and cloud computing flourish, energy efficiency and operating costs for data centers continue to be a fundamental concern for owners and operators. For example, the Cisco Cloud Index provides analysis of cloud and data center platforms and predicts that, by 2020, traffic within hyperscale data centers will quintuple, representing 53% of all data center traffic.

    Hyperscale data centers are designed to be highly scalable, homogeneous, and highly virtualized. These data centers also tend to have elevated computer equipment densities (measured in watts per square foot or kilowatts per server cabinet) and above-average overall power demands (measured in megawatts). This is not just true of new data centers: when existing data centers retire end-of-life servers, storage, and networking equipment, the replacement systems can often end up as a net-positive electrical load, requiring more power than the computer systems drew before the upgrade.

    Part of this has to do with being able to fit more hardware in the place of the displaced equipment. So even if an individual server, for example, has a smaller power demand than its predecessor, the total load increases because of the larger number of servers.

    The phrase “next-generation computing” may evoke a feeling of massively powerful, autonomous computers. That isn’t too far from reality, especially when talking about supercomputers and other high-performance computing systems. These computing platforms are set apart from other systems by the ability to solve extremely complex and data-intensive problems.

    An interesting aspect of these associations is that the membership will typically have varied reasons for wanting to develop energy efficiency goals. The diverse mix of participants encouraged debate and discussion, which is a big reason why much of the material published was well-received and is still relevant many years later. Also, the organizations did not operate under the same rules: some were top-down (like the federal government entities) and some were bottom-up (like engineering and professional societies).

    One of these organizations, The Green Grid (TGG), is at the forefront of promoting data center efficiency. TGG also has a diverse membership base, so the information generated by TGG consists of varied topics, applicable to different disciplines, but all still centered around data center efficiency.

    TGG released its seminal white paper in 2007, “Green Grid Metrics: Describing Datacenter Power Efficiency.” This paper formally introduced power usage effectiveness (PUE), described as a short-term metric to determine data center power efficiency, derived from facility power measurements:

    PUE = (total facility power)/(IT equipment power)
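    As a quick illustration, the ratio above can be computed directly. The helper function is a minimal sketch, and the facility figures in the example are invented for illustration, not taken from any survey:

    ```python
    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power Usage Effectiveness: total facility power divided by IT power.

        A PUE of 1.0 would mean every watt drawn by the facility reaches
        the IT equipment, with zero cooling or distribution overhead.
        """
        if it_equipment_kw <= 0:
            raise ValueError("IT load must be positive")
        return total_facility_kw / it_equipment_kw

    # Hypothetical facility: 1,580 kW total draw for a 1,000 kW IT load.
    print(round(pue(1580.0, 1000.0), 2))  # 1.58
    ```

    Lowering PUE means shrinking the gap between those two measurements, which is why the metric became the industry’s headline efficiency number.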

  34. Tomi Engdahl says:

    Data center power efficiency increases, but so do power outages

    An Uptime Institute survey finds the power usage effectiveness of data centers is better than ever. However, power outages have increased significantly.

    A survey from the Uptime Institute found that while data centers are getting better at managing power than ever before,

    It found that the power usage effectiveness (PUE) of data centers has hit an all-time low of 1.58. By way of contrast, the average PUE in 2007 was 2.5, then dropped to 1.98 in 2011, and to 1.65 in the 2013 survey.

    A PUE of 1.5 means that for every watt delivered to the IT systems, another half watt is consumed by cooling and other facility overhead. So, lowering PUE is something of an obsession among data center operators.

    However, Uptime also found a negative trend: the number of infrastructure outages and “severe service degradation” incidents increased to 31 percent of those surveyed, up 6 percentage points from last year’s 25 percent. Over the past three years, nearly half had experienced an outage at their own site or a service provider’s site.

    This raises the question: Is one causing the other? Is the obsession with lower PUE somehow causing more and bigger outages? Rhonda Ascierto, vice president of research with the Uptime Institute, says no.

    “We can’t determine that,”

    Most downtime incidents lasted one to four hours.

    Half of those who did make an estimate put the cost at less than $100,000, but 3 percent said costs were over $10 million.

    What causes data center outages?

    The leading causes of data center outages are power outages (33 percent), network failures (30 percent), IT staff or software errors (28 percent), on-premises non-power failure (12 percent), and third-party service provider outages (31 percent).

    To err is human, and this survey showed it. Nearly 80 percent said their most recent outage could have been prevented. And that human error extends to management decisions, Ascierto said.

    “Oftentimes, people talk about human error being the cause of outages, but it can include management errors, like poorly maintained or derated equipment that may not match runtime requirements,” she said. “The human error comes down to management responsibility.”

    Uptime found 24 percent of those surveyed said they were impacted by outages across multiple data centers.

    2018 Data Center Industry Survey Results

  35. Tomi Engdahl says:

    Digitalisation already accounts for about 10 percent of the energy consumption of modern countries

    “Digitalisaation osuus modernien valtioiden energiankulutuksesta on jo noin 10 prosenttia”


  36. Tomi Engdahl says:

    Google just put an AI in charge of keeping its data centers cool

    DeepMind’s neural networks will tweak data center conditions to cut power usage.

    Google is putting an artificial intelligence system in charge of its data center cooling after the system proved it could cut energy use.

    Now Google and its AI company DeepMind are taking the project further; instead of recommendations being implemented by human staff, the AI system is directly controlling cooling in the data centers that run services including Google Search, Gmail and YouTube.

    “This first-of-its-kind cloud-based control system is now safely delivering energy savings in multiple Google data centers,” Google said.

    Safety-first AI for autonomous data center cooling and industrial control

  37. Tomi Engdahl says:


    IT Industry’s responsibility towards a Greener Earth.

  38. Tomi Engdahl says:

    Chapter 25: Data-Center Power Management

    Why More-Than-Moore Power Management Is Required to Keep Up With Exponential Growth in ICT Data Consumption
    Significant gains in energy efficiency are required to keep up with the exponential growth in the data consumption of Information and Communications Technology (ICT) systems: end-user devices, networks, and data centers. Moore’s Law scaling (monolithic integration in silicon) is the historical technology driver, but it no longer achieves the required gains.

    This technology can improve voltage regulator power density, response time, and granularity by an order of magnitude, reducing ICT system energy consumption by 30% or more. This paper explains why a Heterogeneously Integrated Power Stage (HIPS) enables power management scaling to keep up with the rising demands on data centers and network systems.

    More-than-Moore scaling by integrating different components and materials to increase functional diversity and parallelism.

    Exponential Growth in Data Consumption
    The digital universe—the data we create and copy annually—will grow 10x, from 4.4 zettabytes to 44ZB, from 2013 to 2020. The forecast for 2020 compared to 2014 expects many more Internet users (3.9 billion versus 2.8 billion), more connected devices (24.4 billion versus 14.2 billion), faster average broadband speeds (42.5Mbps versus 20.3Mbps) and more video streaming (80% versus 67% of traffic). Most people will use a tablet or smartphone for all online activities by 2018. Mobile data traffic is increasing 10x from 3.3 exabytes per month in 2014 to 30.5EB/month in 2020. Many websites (e.g., Facebook) require an order of magnitude more power to build a web page than to deliver it.
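    A back-of-the-envelope check of those growth figures is straightforward. The helper function below is mine; the data points are taken from the excerpt above:

    ```python
    def implied_cagr(start: float, end: float, years: int) -> float:
        """Compound annual growth rate implied by a start value, an end
        value, and the number of years between them."""
        return (end / start) ** (1.0 / years) - 1.0

    # Digital universe: 4.4 ZB in 2013 -> 44 ZB in 2020 (7 years)
    print(f"{implied_cagr(4.4, 44.0, 7):.1%}")   # 38.9% per year
    # Mobile data traffic: 3.3 EB/month in 2014 -> 30.5 EB/month in 2020 (6 years)
    print(f"{implied_cagr(3.3, 30.5, 6):.1%}")   # 44.9% per year
    ```

    In other words, a 10x increase over six or seven years corresponds to data volumes growing by roughly 40 percent every single year.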

    ICT Energy Consumption

    According to one study, ICT systems in 2012 consumed 920 terawatt-hours of electricity, or 4.7% of global electricity consumption. Generating that power requires the equivalent of 300 coal plants and emits 2 trillion pounds of CO2-equivalent greenhouse gases.

    A second study forecasts that improvements in energy efficiency will slow the growth in ICT electricity consumption from the historical 6.7% per year to 3.8% per year.
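    To see what the gap between those two growth rates means in absolute terms, here is a small compound-growth sketch starting from the 920 TWh figure for 2012 quoted above. The projection itself is my arithmetic for illustration, not a number from either study:

    ```python
    def project(base_twh: float, annual_growth: float, years: int) -> float:
        """Compound a baseline consumption figure forward at a fixed annual rate."""
        return base_twh * (1.0 + annual_growth) ** years

    BASE_2012 = 920.0  # TWh, from the first study cited above

    for rate in (0.067, 0.038):  # historical rate vs. forecast rate
        print(f"{rate:.1%}/yr -> {project(BASE_2012, rate, 8):.0f} TWh by 2020")
    ```

    At the historical rate the 2020 figure would be roughly 1,546 TWh; at the forecast rate, roughly 1,240 TWh. The efficiency improvements are worth about 300 TWh in a single year by the end of the projection.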

    Energy efficiency is the new fundamental limiter of processor performance. Increasing it requires:

    • Large-scale parallelism with discrete dynamic voltage and frequency scaling (DVFS)

    • Near-threshold voltage operation

    • Proactive fine-grain power and energy management.

    • Response time: To quickly change the processor’s operating voltage, which typically varies from 1.0–1.5V for peak performance to 0.3–0.6V for low-power idling from a 12V supply used in data-center and network systems.

    • Granularity: To support the massive parallelism of many small energy-efficient elements (e.g., many heterogeneous processor cores, micro-server cards, small-cell base stations, etc.).

    The HIPS module uses the optimum technology for each function:

    • Gallium arsenide (GaAs) for the field-effect transistors (FETs).

    • CMOS for drivers, protection, and control, handling the GaAs FETs’ unique requirements.

    • 3D packaging using embedded die-in-substrate technology to integrate in a 5mm x 5mm x 1mm QFN package the GaAs die, CMOS driver die, and passive components required to minimize parasitics for the high switching frequency.

    An HIPS module is an evolutionary leap over the Driver-MOSFET (DrMOS) integrated power stage module. It replaces the MOSFET dies with a GaAs die, reducing packaging parasitics and integrating performance-critical components in a very small package. GaAs FETs have much lower switching-power loss than MOSFETs.

  39. Tomi Engdahl says:

    Your Downloadable Games May Be Worse for the Environment Than Game Discs
    No wonder those birds are so angry.

    Instead of churning out millions of discs to hold video games, publishers are moving towards digital distribution to lower costs and cut down on waste. If you’re not making millions of physical objects that will eventually be thrown away, that’s good for the environment, right? Wrong, says a new study: It may mean less trash, but digital distribution sometimes means more energy use and air pollution.

    According to a study published in the Journal of Industrial Ecology, the production, sale, and digital distribution of games under 1.3 GB in size produce fewer carbon emissions than disc-based games. But for a standard 8.8 GB Blu-ray game, roughly 20.82 kg of carbon dioxide goes into the life of a disc, while up to 27.53 kg could be generated by a digitally distributed copy.

    On the other hand, if you’re driving to the store just to pick up a video game and drive home again, you’re not spreading that carbon footprint around to other purchases, and the pollution difference of digital games and physical ones becomes “too close to call” according to the study.

    This could all change over the years as Internet technology evolves, but right now, for all their convenience, large digitally downloaded games may not actually cut down on waste but just move it into the air instead.

  40. Tomi Engdahl says:

    Video: Top Operational and Energy Saving Trends for Data Center Cooling

    This presentation will highlight the advances made in critical infrastructure technologies for chillers and cooling plants, AHUs, and modular approaches to achieve significant operating and energy expense savings.

    Top Operational and Energy Saving Trends for Data Center Cooling

    Data center operators historically focused on IT infrastructure and management systems to lower CAPEX and OPEX while meeting SLAs for scalability and time-to-market. Operators are now turning to critical infrastructure technologies to potentially extend these gains further.

  41. Tomi Engdahl says:

    Information and communications technology now consumes about 8 percent of all electricity globally, and that amount doubles every year.

  42. Tomi Engdahl says:

    A Cooler Cloud: A Clever Conduit Cuts Data Centers’ Cooling Needs by 90 Percent

    Data centers are hungry, hot, and thirsty. The approximately 3 million data centers in the United States consume billions of liters of water and about 70 billion kilowatt-hours of electricity per year, or nearly 2 percent of the nation’s total electricity use. About 40 percent of that energy runs air conditioners, chillers, server fans, and other equipment to keep computer chips cool.

    Now, Forced Physics, a company based in Scottsdale, Ariz., has developed a low-power system that it says could slash a data center’s energy requirements for cooling by 90 percent. The company’s JouleForce conductor is a passive system that uses ambient, filtered, nonrefrigerated air to whisk heat away from computer chips.

    The computer equipment in a typical data center runs at about 15 megawatts, devoting 1 MW of that power to server fans. But such a data center would require an additional 7 MW (for a total load of 22 MW) to power other cooling equipment, and it would need 500 million liters of water per year.

    According to Forced Physics’ chief technology officer, David Binger, the company’s conductor can help a typical data center eliminate its need for water or refrigerants and shrink its 22-MW load by 7.72 MW.
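    The arithmetic behind those figures can be sanity-checked in a few lines. The variable names and the breakdown are my reading of the article’s numbers, not Forced Physics’ own model:

    ```python
    # All values in megawatts, as quoted in the article.
    it_load = 15.0        # typical data center IT equipment, including server fans
    fan_load = 1.0        # server fans, part of the 15 MW IT load
    cooling_extra = 7.0   # chillers and other cooling gear on top of the IT load
    claimed_savings = 7.72

    total = it_load + cooling_extra                        # 22 MW facility load
    print(f"facility-level ratio: {total / it_load:.2f}")  # 1.47, akin to a PUE near 1.5

    # The claimed 7.72 MW saving, relative to fans plus dedicated cooling (8 MW):
    print(f"cooling energy removed: {claimed_savings / (fan_load + cooling_extra):.0%}")
    ```

    That works out to eliminating roughly 96 percent of the fan and chiller load, consistent with the headline claim of cutting cooling energy needs by 90 percent.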

    The hotter the air is as it exits the conductor, the better. In dozens of lab tests with ambient air temperatures between 21 °C and 49 °C, the air exiting the JouleForce conductor measured around 65 °C—which is 27 °C hotter than with conventional cooling systems.

    “It’s very efficient,”

  43. Tomi Engdahl says:

    Online Porn Pumps Out As Much Carbon Dioxide As A Small Industrial Country

    Video streaming accounts for around 60 percent of all data flow online, which means it also accounts for over 300 million tons of carbon dioxide emissions per year. Since almost a third of streamed video content is pornography, online porn pumps out around 100 million tons of carbon dioxide each year, more than the annual output of Israel.

