How Clean is Your Cloud and Telecom?

The Greenpeace report How Clean is Your Cloud?, which I saw mentioned in 3T magazine news, is actually quite interesting reading. This year’s report provides a look at the energy choices of some of the largest and fastest-growing IT companies. The report analyzes 14 IT companies and the electricity supply chains of more than 80 data centers.


The report also contains lots of interesting background information on both IT and telecom energy consumption. I recommend checking it out. Here are some points picked from the How Clean is Your Cloud? report:

Facebook, Amazon, Apple, Microsoft, Google, and Yahoo – these global brands and a host of other IT companies are rapidly and fundamentally transforming the way in which we work, communicate, watch movies or TV, listen to music, and share pictures through “the cloud.”

The growth and scale of investment in the cloud is truly mind-blowing, with estimates of a 50-fold increase in the amount of digital information by 2020 and nearly half a trillion dollars in investment in the coming year, all to create and feed our desire for ubiquitous access to infinite information from our computers, phones and other mobile devices, instantly.

The engine that drives the cloud is the data center. Data centers are the factories of the 21st century information age, containing thousands of computers that store and manage our rapidly growing collection of data for consumption at a moment’s notice. Given the energy-intensive nature of maintaining the cloud, access to significant amounts of electricity is a key factor in decisions about where to build these data centers. Industry leaders estimate nearly $450bn is being spent annually on new data center space.

Since electricity plays a critical role in the cost structure of companies that use the cloud, there have been dramatic strides made in improving the energy efficiency design of the facilities and the thousands of computers that go inside. However, despite significant improvements in efficiency, the exponential growth in cloud computing far outstrips these energy savings.

How much energy is required to power the ever-expanding online world? What percentage of global greenhouse gas (GHG) emissions is attributable to the IT sector? Answers to these questions are very difficult to obtain with any degree of precision, partially due to the sector’s explosive growth, a wide range of devices and energy sources, and rapidly changing technology and business models. The estimates of the IT sector’s carbon footprint performed to date have varied widely in their methodology and scope. One of the most recognized estimates of the IT sector’s footprint was conducted as part of the 2008 SMART 2020 study, which established that the sector is responsible for 2% of global GHG emissions.

The combined electricity demand of the internet/cloud (data centers and telecommunications network) globally in 2007 was approximately 623bn kWh (if the cloud were a country, it would have the fifth largest electricity demand in the world). Based on current projections, the demand for electricity will more than triple to 1,973bn kWh (an amount greater than combined total demand of France, Germany, Canada and Brazil).
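
A quick sanity check of the projection is simple arithmetic (a sketch; the 623bn and 1,973bn kWh figures are the values quoted from the report above):

```python
# Growth of global internet/cloud electricity demand, using the
# figures quoted from the Greenpeace report.
demand_2007_bn_kwh = 623      # data centers + telecom networks, 2007
projected_bn_kwh = 1973       # projected future demand

growth_factor = projected_bn_kwh / demand_2007_bn_kwh
print(f"Demand grows {growth_factor:.2f}x")  # ≈ 3.17x, i.e. "more than triple"
```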

The report indicates that, due to the economic downturn and continued energy-efficiency and performance improvements, global energy demand from data centers increased by only 56% from 2005 to 2010. Estimates of data center electricity demand come in at 31GW globally, with an increase of 19% in 2012 alone. This is still a staggering rate of growth at a time when global electricity consumption was otherwise essentially flat due to the global recession.

Given the scale of predicted growth, the source of electricity must be factored into a meaningful definition of “green IT”. Energy efficiency alone will, at best, slow the growth of the sector’s footprint. The replacement of dirty sources of electricity with clean renewable sources is still the crucial missing link in the sector’s sustainability efforts according to the report.


The global telecoms sector is also growing rapidly. Rapid growth in the use of smart phones and broadband mobile connections means mobile data traffic in 2011 was eight times the size of the entire internet in 2000. It is estimated that global mobile data traffic grew 133% in 2011, with 597 petabytes of data sent by mobiles every month. In 2011, it is estimated that 6 billion people, or 86.7% of the entire global population, had mobile telephone subscriptions. By the end of 2012, the number of mobile connected devices is expected to exceed the global population. Electronic devices and the rapidly growing cloud that supports our demand for greater online access are clearly a significant force in driving global energy demand.

What about telecoms in the developing and newly industrialized countries? The report has some details from India (by the way it is expected that India will pass China to become the world’s largest mobile market in terms of subscriptions in 2012). Much of the growth in the Indian telecom sector is from India’s rural and semi-urban areas. By 2012, India is likely to have 200 million rural telecom connections at a penetration rate of 25%. Out of the existing 400,000 mobile towers, over 70% exist in rural and semi-urban areas where either grid-connected electricity is not available or the electricity supply is irregular. As a result, mobile towers and, increasingly, grid-connected towers in these areas rely on diesel generators to power their network operations. The consumption of diesel by the telecoms sector currently stands at a staggering 3bn liters annually, second only to the railways in India.

What is the situation in other developing and newly industrialized countries? I don’t actually know.

NOTE: Please note that many figures given in the report are just estimates based on quite little actual data, so they might be somewhat off from the actual figures. Given the source of the report, I would guess that if the figures are off, they are most probably off in the direction that makes the environmental effect look bigger than it actually is.


  1. Tomi Engdahl says:

    48V direct-conversion dramatically improves data-center energy efficiency

    It’s easy to summarize the power needs and costs of data centers and servers in a single word: enormous. Of course, there’s much more to the story than this. These critical network hubs – which are now woven deeply into society’s infrastructure – require megawatts to function, resulting in very high power-related direct-operating costs. Those costs are further extended by the costs associated with dissipating all the associated heat the equipment generates.

    Consider a representative 5000 ft2 (1500 m2) server/data center. It uses about 1 MW, with a power usage effectiveness (PUE) rating between 1.2 and 2.

    These PUE numbers mean that non-core losses range from about 20% to 100% above the basic operating requirements. The higher the PUE, the higher the total cost of ownership (TCO), and, depending on how it is defined, PUE may not even directly account for the cost of getting rid of all the wasted power that is transformed into heat and somehow must be removed. PUE also directly affects associated CO2 emissions and carbon footprint, and so has regulatory implications.
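
Since PUE is total facility power divided by the power delivered to the IT equipment, the non-core overhead falls straight out of the definition (a minimal sketch of the arithmetic behind the 20%-100% figures above):

```python
def overhead_fraction(pue):
    """Non-IT power (cooling, distribution, lighting) as a fraction
    of the IT equipment power, given a facility's PUE."""
    return pue - 1.0

for pue in (1.2, 2.0):
    print(f"PUE {pue}: overhead = {overhead_fraction(pue):.0%} of IT load")
# PUE 1.2 -> 20% overhead; PUE 2.0 -> 100% overhead
```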

    The challenge in reducing PUE is that there is no dominant source of loss in the server or data center. Instead, the losses are spread along the entire power-distribution chain, starting with the primary AC supply and going down to the low-voltage DC supplied to individual ICs. There are cumulative sources of inefficiency as power passes from the line mains through 48VDC/12VDC converters and multiple 12V-to-single-digit-voltage rail supplies.

    Losses add up quickly

    Simple math shows the impact of cumulative losses along the power path. Assume there are four stages between the 480 VAC/DC mains and the ultimate low-voltage rails, each with efficiency of 90% (actual numbers will vary for each stage, of course). The end-to-end efficiency is the product of these individual efficiencies, and drops down to just 65.6% – a substantial loss.

    What can be done? The “obvious” answer is to improve the efficiency of each stage, and that has been the dominant strategy. If each of those four 90% ratings can be boosted to 92%, the overall efficiency will increase to about 71.6%.

    A system which is 90% efficient is clearly 10% inefficient. Even a 1% improvement is a huge gain:

    From AC mains/400VDC → 48VDC → 12VDC PoL → single-volt rails

    Historically, the power path has used an intermediate voltage of 48VDC, which then feeds numerous 12V point of load (PoL) DC/DC converters that produce the specific end-use rail voltages, such as 12V, 5V, 3.3V, 1.2V, and even sub-1V. This topology worked well, and improvements in efficiency in the intermediate converter stages and the PoL units made it a successful approach which has lasted for many years.

    Direct conversion offers a better approach

    Fortunately, a new approach called direct conversion offers a path out of the dilemma. If you completely eliminate one of the power-conversion stages, such as the 48V/12V intermediate stage, and instead go directly from 48V DC to the low-voltage rails, the impact is significant. Looking at the four-stage 90% example again, going to just three 90% stages improves efficiency from 65.6% to 72.9%.
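
The cascade arithmetic above is easy to reproduce (a sketch; the 90% and 92% stage efficiencies are the illustrative values from the text, not measured data):

```python
from math import prod

def chain_efficiency(stage_efficiencies):
    """End-to-end efficiency of a power chain is the product of its stages."""
    return prod(stage_efficiencies)

four_at_90 = chain_efficiency([0.90] * 4)   # classic four-stage chain
four_at_92 = chain_efficiency([0.92] * 4)   # every stage improved by 2 points
three_at_90 = chain_efficiency([0.90] * 3)  # direct conversion: one stage removed

print(f"{four_at_90:.1%}, {four_at_92:.1%}, {three_at_90:.1%}")
# -> 65.6%, 71.6%, 72.9%: removing a stage beats tweaking every stage
```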

    There’s another very good reason to skip the 12V intermediate stage: the bus bar behind the rack brings hundreds of amperes to the server boards at 12V. The associated losses, which are already high, are becoming even more significant as these current levels continue to increase. Increasing the distribution voltage to 48V greatly reduces these bus-bar distribution losses. Using 48V as the distribution voltage is a reasonable compromise between the need to decrease the losses and the safety regulations which begin at 60V. Also, 48V distribution is compatible with distributed uninterruptible power supplies (UPS) where the energy storage unit (typically a 42-to-48V battery) is located close to the rack, rather than at a centralized UPS sited far from the equipment.

    Of course, it is easy to propose direct conversion; it is actually hard to execute. Several manufacturers have devised “partial” solutions.

    ST’s three-IC solution embodies advanced concepts

    To allow power-system architects to realize the benefits of direct conversion, ST developed a multi-IC solution with what is called Isolated Resonant Direct Conversion technology, along with the critical infrastructure which supports it.

    There’s no question that the existing multistage power-conversion chain has worked well, but its time has come to a close. It’s no longer sufficient for the task of meeting the efficiency needs and growing server/data center power demands. Further, it cannot meet the VR13 specification, lacks scalability and flexibility, and is not highly efficient across all load ranges.

    That’s why the multichip direct-conversion solution developed by STMicroelectronics, featuring power conversion from 48VDC directly down to the individual IC rail voltages, is a better solution.

  2. Tomi Engdahl says:

    Got Energy?

    Why everyone needs to start taking power more seriously, and what you can do about it.

    Energy is a finite resource, which means it’s not someone else’s problem. It’s everyone’s problem.

    This isn’t just another doom and gloom prediction. Energy consumption has been rising steadily for decades. Unfortunately, it has been increasing at a faster rate than energy production. A Semiconductor Industry Association report entitled, “Rebooting the IT Revolution: A Call to Action,” says we could run out of energy to power computers by 2040.

    So what can we do about it? There are ways to save significant amounts of energy at the system, component, and sub-system level.

    Keeping computers running takes the equivalent of 30 large power plants. The real problem, according to the report, is that power is wasted when computers sit idle, particularly the ones that are plugged into the wall. The group argues that implementing new standards could save U.S. consumers $3 billion a year.

    It’s not just computers, though. All electronics can benefit from better energy management. Just as cars idling in traffic burn fuel, so do electronics. And as more devices are added, particularly those that are always on, the more energy will be wasted.

    So where do you stand on power?

  3. Tomi Engdahl says:

    Power/Performance Bits: Oct. 11

    Data center on chip

    Researchers from Washington State University and Carnegie Mellon University presented a preliminary design for a wireless data-center-on-a-chip at the Embedded Systems Week conference in Pittsburgh.

    Data centers are well known as energy hogs, and they consumed about 91 billion kilowatt-hours of electricity in the U.S. in 2013, which is equivalent to the output of 34 large, coal-fired power plants, according to the Natural Resources Defense Council. One of their major performance limitations stems from the multi-hop nature of data exchange.

    In recent years, the group designed a wireless network on a computer chip.

    The new work expands these capabilities for a wireless data-center-on-a-chip. In particular, the researchers are moving from two-dimensional chips to a highly integrated, three-dimensional, wireless chip at the nano- and microscales that can move data more quickly and efficiently.

    The team believes they will be able to run big data applications on their wireless system three times more efficiently than the best data center servers.

    Wireless data-center-on-a-chip aims to cut energy use

    Personal cloud computing possibilities

    As part of their grant, the researchers will evaluate the wireless data center to increase energy efficiency while also maintaining fast, on-chip communications. The tiny chips, consisting of thousands of cores, could run data-intensive applications orders of magnitude more efficiently compared to existing platforms. Their design has the potential to achieve a comparable level of performance as a conventional data center using much less space and power.

    It could someday enable personal cloud computing possibilities, said Pande, adding that the effort would require massive integration and significant innovation at multiple levels.

    “This is a new direction in networked system design,” he said. “This project is redefining the foundation of on-chip communication.”

  4. Tomi Engdahl says:

    California Computer Efficiency Standard Nears Finish Line

    Is California ready to approve the nation’s first mandatory efficiency standard for computers, monitors and signage displays?

    It appears so. Earlier this year, I wrote about the California Energy Commission’s (CEC) initial proposed requirements (California Continues Drive for Computer and Display Efficiency). After a good deal of discussion with industry stakeholders, resulting in some specification and timing modifications, the CEC has now published their Efficiency Rulemaking Express Terms, which they hope will be the final regulatory language approved by year end. The Commission believes that consumers and businesses will save over $370 million in energy costs from these standards.

    Covered in this computer regulation are desktops, notebooks (including mobile gaming systems), thin-clients, small-scale servers, and workstations. Excluded are tablets, smartphones, game consoles, handheld gaming devices, servers other than small-scale units, and industrial computers. The CEC believes that the core opportunity for computer energy savings is in limiting the unit’s energy consumption during non-productive idle, standby, and off modes.

    A computer’s maximum allowable annual energy consumption is determined partly by its expandability score (ES). ES is used to correlate the power supply sizing necessary for a computer to provide the required power to the core system plus any potential expansions.

  5. Tomi Engdahl says:

    Hyperscale data centers make commitments to renewable energy

    An often-cited nine-year-old report from the United States Environmental Protection Agency (EPA) estimated that data centers accounted for approximately 1.5 percent of the country’s total electricity consumption. The EPA’s “Report to Congress on Server and Data Center Efficiency,” published in August 2007, was written in response to the U.S. Congress’s Public Law 109-431, which requested such a report. The EPA said the 133-page document “assesses current trends in energy use and energy costs of data centers and servers in the U.S. and outlines existing and emerging opportunities for improved energy efficiency.”

    Based on data gathered through 2006, the report states, “The energy used by the nation’s servers and data centers is significant. It is estimated that this sector consumed about 61 billion kilowatt-hours (kWh) in 2006 (1.5 percent of total U.S. electricity consumption) for a total electricity cost of about $4.5 billion. This estimated level of electricity consumption is more than the electricity consumed by approximately 5.8 million average U.S. households (or about five percent of the total U.S. housing stock).”
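
The EPA’s household comparison can be back-checked from the quoted figures (a sketch; 61 billion kWh and 5.8 million households are the values as quoted above):

```python
sector_kwh = 61e9      # 2006 US server/data center consumption (EPA estimate)
households = 5.8e6     # stated equivalent number of average US households

kwh_per_household = sector_kwh / households
print(f"Implied average household use: {kwh_per_household:,.0f} kWh/year")
# ≈ 10,500 kWh/year, consistent with typical US household consumption
```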

    The report detailed practices that could be taken to avoid a continued escalation of electricity consumption by data centers. In a practical sense, the report also served to kick off the EPA’s EnergyStar program for data center facilities and, later, for data center equipment. From a public relations standpoint, the report cast data centers as energy hogs. In doing so, it branded the largest data centers (frequently referred to as hyperscale data centers) as hyperconsumers of electricity.

    “When it comes to sustainability, we’ve made important progress as a company since the start of this decade, but even more important work lies ahead … We need to keep working on a sustained basis to build and operate greener data centers that will serve the world well. For Microsoft, this means moving beyond data centers that are already 100-percent carbon neutral to also having those data centers rely on a larger percentage of wind, solar and hydropower electricity over time. Today roughly 44 percent of the electricity used by our data centers comes from these sources. Our goal is to pass the 50-percent milestone by the end of 2018, top 60 percent early in the next decade, and then to keep improving from there.”

    Another hyperscale data center owner, Google, similarly addressed its use of renewable energy and contractual agreements that help it drive down its carbon footprint. On its website the company explains, “Across Google, we’re currently using renewable energy to power over 35 percent of our operations. We’re committed to using renewable energy like wind and solar as much as possible. So why don’t we build clean energy sources right on our data centers? Unfortunately, the places with the best renewable power potential are generally not the same places where a data center can most efficiently and reliably serve its users. While our data centers operate 24/7, most renewable energy sources don’t, yet. So we need to plug into the electricity grid, and the grid isn’t currently very green. That’s why we’re working to green the electricity supply as a whole, not just for us, but for everyone.”

    Facebook, another often-cited owner/operator of hyperscale data centers, recently announced some renewable-energy initiatives associated with its under-construction facility in Clonee, County Meath, Ireland.

  6. Tomi Engdahl says:

    Modular data center design helps Oracle achieve sustainability goals
    The project’s design-collaboration approach was highly successful.

    With 2014 revenue at $38 billion and more than 130,000 employees companywide, Oracle has plans. Big plans. These included reaching their internal information technology (IT) growth targets for 2014 with a forward-looking plan to support IT growth needs for the next several years. Accordingly, Oracle moved forward with the build-out of the UCF Phase 2 Data Center space in West Jordan, Utah. The Glumac-led design team master-planned the UCF Phase 2 space to provide for 30,000 sq ft of highly reliable and highly available Tier III space.

    With more than 2 decades of experience implementing sustainable energy-, water-, and waste-management practices, Oracle has set some high sustainability goals for themselves. By 2016, Oracle targets a 10% reduction in energy use per employee and 6% improvement in power-usage effectiveness (PUE) in production data centers. The Oracle Utah Compute Facility Cell 2.1 in West Jordan is one part of this pursuit.

    The results were clear. The Oracle UCF Cell 2.1 Phase 2 project was completed on time and on budget, meeting the user’s requirements. The PUE of the space was calculated to be 1.22, which is a 24% reduction as compared with Utah industry averages of 1.60.

    Further, the low PUE will help Oracle meet its original sustainability goal of a 6% reduction in data center PUE and will save $250,000/year as compared with a similar facility built to Utah energy code standards.
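
The quoted 24% figure follows directly from the two PUE values (a sketch; 1.22 and 1.60 are the values given above):

```python
oracle_pue = 1.22        # measured PUE of the Oracle UCF Cell 2.1 space
utah_average_pue = 1.60  # stated Utah industry average

reduction = (utah_average_pue - oracle_pue) / utah_average_pue
print(f"PUE reduction vs. Utah average: {reduction:.0%}")  # -> 24%
```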

    The Oracle Utah Compute Facility Cell 2.1 demonstrates that data centers can be both sustainable and reliable.

  7. Tomi Engdahl says:

    Utilizing GaN transistors in 48V communications DC-DC converter design

    As the world’s demand for data increases seemingly out of control, a real problem occurs in the data communications systems that have to handle this traffic. Datacenters and base stations, filled with communications processing and storage handling, have already stretched their power infrastructure, cooling, and energy storage to their limits. However, as the data traffic continues to grow, higher density communications and data processing boards are installed, drawing even more power. In 2012, the communications power consumption of networks and datacenters added up to 35% of the overall electricity use in the ICT sector (Figure 1). By 2017, networks and datacenters will use 50% of that electricity, and consumption will continue to grow.

  8. Tomi Engdahl says:

    Urs Hölzle / The Keyword:
    Google says it will run its global operations entirely on renewable energy in 2017, claims it is the world’s largest corporate buyer of renewable power — Every year people search on Google trillions of times; every minute people upload more than 400 hours of YouTube videos.

  9. Tomi Engdahl says:

    Microsoft is going to run a data center entirely on wind power

    The company just announced that it has inked deals with two wind farms, with the aim of entirely powering its Cheyenne, Wyoming data center from renewable sources. Microsoft has reportedly contracted Bloom Wind farm in Kansas to provide 178 megawatts, and the Silver Sage and Happy Jack farms in Wyoming to provide an additional 59 megawatts.

    As noted at TheNextWeb, “Microsoft has also revealed that the site’s backup generators will be used as a ‘secondary resource’ for the local grid. This means they will actually provide energy to the local community during periods of high demand. These backup generators will burn natural gas, which, despite being a fossil fuel, is far less ecologically damaging than diesel.”

  10. Tomi Engdahl says:

    EPA begins process to improve computer server efficiency

    The U.S. Environmental Protection Agency (EPA) is aiming to improve the energy efficiency of future computer servers. A few months ago, the agency published Draft 1, Version 3 of its ENERGY STAR Computer Server Specification.

    In order to be eligible for the program, a server must meet all of the following criteria:

    Marketed and sold as a computer server
    Packaged and sold with at least one AC-DC or DC-DC power supply
    Designed for and listed as supporting one or more computer server operating systems and/or hypervisors
    Targeted to run user-installed enterprise applications
    Provide support for ECC and/or buffered memory
    Designed so all processors have access to shared system memory and are visible to a single OS or hypervisor

    Excluded products include fully fault tolerant servers, server appliances, high performance computing systems, large servers, storage products including blade storage, and network equipment.

  11. Tomi Engdahl says:

    Jacob Kastrenakes / The Verge:
    California adopts energy standards requiring idle computers to draw less power; energy commission estimates 6% of desktops, 73% of laptops meet standards — California became the first state in the US to approve energy efficiency requirements for laptops, desktops, and monitors today …

    California approves first US energy efficiency standards for computers

    California became the first state in the US to approve energy efficiency requirements for laptops, desktops, and monitors today, in a change that could ultimately impact computers’ energy efficiency across the country.

    The new standards, approved by California’s Energy Commission, require most computers to draw less power while idle. Laptops are only required to see a slight reduction in power draw, since they’re already designed to be energy efficient; the commission estimates that 73 percent of shipping laptops won’t need any sort of change.

  12. Tomi Engdahl says:

    Japan’s research institution RIKEN once again captured the top spot on the Green500 list with its Shoubu supercomputer, the most energy-efficient system in the world. With a rating of 6673.84 MFLOPS/Watt, Shoubu edged out another RIKEN system, Satsuki, the number 2 system, which delivered 6195.22 MFLOPS/Watt.

    Both are “ZettaScaler” supercomputers, employing Intel Xeon processors and PEZY-SCnp manycore accelerators.

    The 3rd most energy-efficient system is China’s Sunway TaihuLight, which currently holds the number 1 spot on the TOP500 list as the world’s fastest supercomputer. It is powered solely by Sunway’s SW26010 processors and represents the first homogeneous supercomputer in the top 10 of the Green500 since a set of IBM Blue Gene/Q systems occupied six of the top 10 spots in June 2013.

    The Satsuki and TaihuLight supercomputers are the only new entries in the top 10. Overall, there are 157 new systems in the June 2016 edition of the Green500, representing nearly a third of the list.
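
Inverting MFLOPS/Watt gives the energy spent per floating-point operation, which makes the ratings more tangible (a sketch using the ratings quoted above):

```python
def picojoules_per_flop(mflops_per_watt):
    """1 W = 1 J/s, so energy per FLOP = 1 / (FLOPS per watt), in picojoules."""
    flops_per_joule = mflops_per_watt * 1e6
    return 1e12 / flops_per_joule  # joules -> picojoules

shoubu = picojoules_per_flop(6673.84)
satsuki = picojoules_per_flop(6195.22)
print(f"Shoubu: {shoubu:.0f} pJ/FLOP, Satsuki: {satsuki:.0f} pJ/FLOP")
# roughly 150 and 161 picojoules per floating-point operation
```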

  13. Tomi Engdahl says:

    IEEE says zero hot air in Fujitsu liquid immersion cooling for data centers

    Given the prodigious heat generated by the trillions of transistors switching on and off 24 hours a day in data centers, air conditioning has become a major operating expense. Consequently, engineers have come up with several imaginative ways to ameliorate such costs, which can amount to a third or more of data center operations.

    One favored method is to set up hot and cold aisles of moving air through a center to achieve maximum cooling efficiency. Meanwhile, Facebook has chosen to set up a data center in Lulea, northern Sweden, on the fringe of the Arctic Circle to take advantage of the natural cold conditions there; and Microsoft engineers have seriously proposed putting server farms under water.

    Fujitsu, on the other hand, is preparing to launch a less exotic solution: a liquid immersion cooling system it says will usher in a “next generation of ultra-dense data centers.”

    Fujitsu Liquid Immersion Not All Hot Air When It Comes to Cooling Data Centers

  14. Tomi Engdahl says:

    GaN Technology: A Lean, Green (Power) Machine

    Sponsored by: Texas Instruments. The devil is in the details when designing a Titanium-grade power supply with gallium-nitride technology, from driver circuits and new power design topologies to digital control schemes and new product qualification tests.

    Electricity is the world’s fastest-growing form of end-use energy consumption. The U.S. Energy Information Administration (EIA) estimates that worldwide electricity generation will grow to 36.5 trillion kilowatt-hours by 2040, a 69% increase from 2012, driven by rising incomes in China, India, and other emerging Asian economies. Electricity generation in the U.S. will grow 24% by 2040, about 1% annually.

    Houston, we’ve got a problem: the EIA also estimates that some 6% of electricity generated in the U.S. goes to waste in supply and disposition, more than 14 million megawatt-hours annually at current rates of consumption. Reducing just a portion of this waste through efficiency improvements could make it possible to slow the growth of demand and accelerate the closing of inefficient and polluting coal-fired power plants.

    As a result, governments and regulatory agencies worldwide are moving to implement standards for energy efficiency.

    The 80 Plus standards, now part of Energy Star in the U.S., cover computer power supplies.

    The latest Titanium standard requires up to 96% efficiency from AC input to DC output.

    Meeting these new standards requires rethinking every building block in a power supply, and GaN technology is playing an increasing role.

  15. Tomi Engdahl says:

    3 end-user segments drive global green data center market

    A new market study released by Technavio forecasts the global green data center market to reach $55 billion by 2021, growing at a CAGR of almost 14 percent. The technology analyst’s new report, “Global Green Data Center Market 2017-2021,” states that the market is witnessing most growth through the construction of data centers by cloud service providers (CSPs), colocation service providers, and telecommunication providers globally.
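
Given the forecast endpoint and the CAGR, the implied starting market size can be backed out (a sketch; the five-year 2016-2021 horizon and the flat 14 percent rate are assumptions, approximating the report’s figures):

```python
forecast_2021_usd_bn = 55.0  # forecast market size, $bn
cagr = 0.14                  # "almost 14 percent" compound annual growth rate
years = 5                    # assumed 2016 -> 2021 horizon

implied_2016_usd_bn = forecast_2021_usd_bn / (1 + cagr) ** years
print(f"Implied 2016 market size: ${implied_2016_usd_bn:.1f}bn")
# a market of roughly this size growing 14%/year reaches ~$55bn in five years
```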

    “Currently, there are many enterprises and CSPs that are involved in colocation spaces rather than constructing their own facilities to address demands immediately,” adds the firm. “CSPs and ISPs, such as Facebook, are the major contributors to the green data center market, which will continue throughout the forecast period.”

    In the report, Technavio’s ICT research analysts categorize the global green data center market into the following end-user segments: IT infrastructure; power solutions; general construction; cooling solutions; and monitoring and management. The report’s summary discusses the top three end-user segments.

    IT infrastructure: Technavio contends that “digitalization has enabled several organizations to adopt cloud-based services for their businesses. By 2020, it is expected that 90 percent of small and medium enterprises will operate their businesses through cloud storage either by colocating their infrastructure or by adopting cloud offerings by the major CSPs in the market.”

    Power solutions: The firm expects the global green data center market “will witness a significant growth in revenue because of the increasing concerns regarding the costs incurred due to increased power consumption and wastage of power in data center operations. There is more focus on reducing the environmental impact of the data center facilities along with power consumption.”

    General construction: Technavio expects the general construction market will grow along with the growth in brick-and-mortar facility and modular data center construction projects worldwide. “Increased power consumption and carbon emission have resulted in the construction of eco-friendly data centers. Most of these data centers are being constructed in remote areas,” adds Sharma.

  16. Tomi Engdahl says:

    Making Energy-Efficient ICs Energy Efficiently

    Semiconductor manufacturers are able to make an increasingly important contribution to ensuring that end products use the minimal amount of energy and are efficient.

    There is significant global focus on energy efficiency, and we are all encouraged to use less energy and make energy-efficient choices, whether it be a new washing machine or considering the overall energy efficiency of the buildings in which we live and work.

    Semiconductor manufacturers are able to make an increasingly important contribution to ensuring that end products from cars to vacuum cleaners and laptops to factory automation equipment use the minimal amount of energy and are efficient. Indeed, ON Semiconductor’s mantra is “energy-efficient innovations.”

    To put it in perspective, worldwide energy consumption was over 20 petawatt-hours (PWh) in 2015; that’s equivalent to $2.4 trillion. Of the energy consumed, around 50 percent was by electric-motor-driven systems, which are, in turn, controlled, managed and regulated by semiconductor devices.

    Our increasingly “electrified” world means that, despite all of the products we use becoming more efficient, the net requirement for power is still on the increase; in fact, some estimates suggest that demand will have grown by 55 percent between 2005 and 2030.
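    A 55 percent cumulative increase sounds dramatic, but spread over 25 years it corresponds to a fairly modest compound annual rate. A quick back-of-envelope check:

```python
# Back-of-envelope check: 55% total demand growth between 2005 and 2030
# implies a modest compound annual growth rate.
total_growth = 1.55          # 55% increase over the whole period
years = 2030 - 2005          # 25 years

annual_rate = total_growth ** (1 / years) - 1
print(f"Implied annual growth: {annual_rate * 100:.2f}%")  # ~1.77%
```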

    Electronics technology in general — and, most notably, semiconductors — have been a great enabler in recent years for making existing iterations of everything from notebook computers to washing machines more frugal when it comes to their power requirements and, in so doing, placing less demand on the grid and, therefore, power generation itself.

    But it mustn’t be overlooked that the actual process of making semiconductors can be extremely resource-hungry.

  17. Tomi Engdahl says:

    Green data center market: Opportunities and forecast, 2016-2023

    The green data center is a warehouse for storage, administration, and distribution of data in which electrical and computer systems are used to minimize power and carbon footprint. The construction and operation of a green data center includes progressive technologies and strategies that help IT organizations to reduce environmental impact by gauging, scheduling, and implementing initiatives around the data center environment.

    Green Data Center Market – Opportunities and Forecast, 2016-2023

  18. Tomi Engdahl says:

    major market players such as Digital Realty Trust, Inc., IBM Corporation, Hitachi Ltd., Cisco System, Inc., Hewlett-Packard Inc., DuPont Fabros Technology, CyrusOne, Eaton Corporation, Dell Inc., and EMC Corporation

  19. Tomi Engdahl says:

    First self-powered data center opens

    Aruba S.p.A. operates its zero-impact data center using ‘a river of energy’ hydroelectric plant, solar panels and chilling underground water.

    What does it take to open the world’s first self-powered data center? For Aruba S.p.A., it involved three elements:

    Flowing river water
    Photovoltaic solar panels
    Always cold, pumped-to-the-surface underground water as the principal cooling source

    Aruba’s newest data center, named the Global Cloud Data Center (IT3) is located near Milan, Italy, and claims to be 100 percent green. The 49-acre ANSI/TIA-942 Rating 4 standard facility (at 200,000 square meters) opened earlier this month.

    Low-impact credentials at the site come largely because the data center has its own dedicated hydroelectric plant.

    The system, along with the power from solar panels, can produce up to 90 MW of power. The “river of energy” flows “more or less” constantly, Aruba says.

    Geothermal cooling

    The company says cooling at the facility is also zero-impact.

    “Using groundwater as the main cooling energy source enables us to reduce energy waste,” the company explains of its geothermal system on its website.

    That’s in part because underground water at the site remains at 48 degrees Fahrenheit throughout the year. That cold water is pumped up from the ground, used to cool the data halls via heat exchangers, and then returned back into the earth. By doing so, the firm says, environmental impact is avoided.

    Other eco-friendly techniques are in operation, too: Distinct ducts in the server rack design aid efficiency by targeting underground-cooled air onto the parts of the rack that need cooling the most. Also, double insulation with a defrost system is used in the data room construction.

    The largest, state-of-the-art data center campus in Italy

    The Global Cloud Data Center is a data center campus with a surface area of 200,000m2 in Ponte San Pietro (BG), just a few minutes from Milan. All the systems have been designed and built to meet and exceed the highest levels of resilience set by ANSI/TIA 942-A Rating 4 (formerly Tier 4).

    A surface area of 90,000m2 dedicated to the data center in a total area of 200,000m2
    Maximum logical and physical security, with armed guards 24/7 and 7 different security perimeters
    Up to 90MW of power, with self-produced hydroelectric and photovoltaic energy
    Double multi-modular power center with UPS boasting 2N + 1 redundancy
    Made-to-measure power of up to 40kW per rack
    Redundant emergency generators with 48-hour full-load autonomy without refuelling
    Data hall made entirely of firewalls and ceiling with double insulation
    Carrier neutral data center with optional managed connectivity
    Made-to-measure colocation solutions: from rack units to a dedicated data center
    Storage and office space available to customers

  20. Tomi Engdahl says:

    Space-radiated cooling cuts power use 21%

    Radiative sky cooling sends heat from buildings out into space, chilling them. Electricity use ultimately will be slashed compared to traditional air conditioning, scientists say.

    Using the sky as a free heat sink could be a solution to an impending energy crunch caused by increased data use. More data generated in the future will require ever more electricity-intensive cooling as data centers keep getting bigger.

    Researchers at Stanford University think they have a solution to cooling creep. They say the way to reel in the cost of getting buildings cold enough for all the servers is to augment land-based air conditioning by sending excess heat into space and chilling it there.

    The scientists say cost savings will be on the order of 21 percent through a system they’ve been working on, and up to 70 percent, theoretically, by combining the kit with other, newer radiant systems, according to an article in IEEE Spectrum.

    Efficient Air-Conditioning Beams Heat Into Space

    The Stanford team’s passive cooling system chills water by a few degrees with the help of radiative panels that absorb heat and beam it directly into outer space. This requires minimal electricity and no water evaporation, saving both energy and water. The researchers want to use these fluid-cooling panels to cool off AC condensers.

    They first reported their passive radiative cooling idea in 2014. In the new work reported in Nature Energy, they’ve taken the next step with a practical system that chills water. They’ve also established a startup, SkyCool Systems, to commercialize the technology.

    The team tested it on a rooftop on the Stanford campus. Over three days of testing, they found that water temperatures went down by 3 to 5 °C. The only electricity it requires is what’s needed to pump water through the copper pipes. Water that flowed more slowly was cooled more.
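    The reported 3 to 5 °C drop translates into cooling power via the usual sensible-heat relation Q = ṁ·cp·ΔT. A minimal sketch, where the flow rate is a hypothetical figure for illustration (the article reports only the temperature drop):

```python
# Rough sensible-heat estimate of a radiative panel's cooling output.
# The flow rate is a hypothetical assumption, not from the article.
cp_water = 4186.0        # J/(kg*K), specific heat of water
flow_rate = 0.1          # kg/s (hypothetical, roughly 6 L/min)
delta_t = 4.0            # degC, midpoint of the reported 3-5 degC drop

cooling_power = flow_rate * cp_water * delta_t   # watts
print(f"Cooling output: {cooling_power:.0f} W")  # ~1.7 kW
```

Slower flow gives the water more residence time in the panel and hence a larger ΔT, which matches the observation that slower-flowing water was cooled more.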

    New radiant cooling systems, which use chilled water running through aluminum panels or pipes, are getting more common in Europe and China and in high-efficiency buildings in the U.S., says Raman. “If we could couple our system with such radiant cooling systems, we could get 70 percent efficiency savings.”

  21. Tomi Engdahl says:

    The Environmental Cost of Internet Porn

    So many people watch porn online that the industry’s carbon footprint might be worse now than it was in the days of DVDs and magazines.

    Online streaming is a win for the environment. Streaming music eliminates all that physical material—CDs, jewel cases, cellophane, shipping boxes, fuel—and can reduce carbon-dioxide emissions by 40 percent or more. Video streaming is still being studied, but the carbon footprint should similarly be much lower than that of DVDs.

    Scientists who analyze the environmental impact of the internet tout the benefits of this “dematerialization,” observing that energy use and carbon-dioxide emissions will drop as media increasingly can be delivered over the internet. But this theory might have a major exception: porn.

    Is pornography in the digital era leaving a larger carbon footprint than it did during the days of magazines and videos?

    But if pornography experts’ estimates are accurate, they suggest a rare scenario where digitization might have increased the overall consumption of porn so much that the principle of dematerialization gets flipped on its head. The internet could allow people to spend so much time looking at porn that it’s actually worse for the environment.

  22. Tomi Engdahl says:

    DCD>Energy Smart highlights EU data center heat recovery

    DCD’s event at the Stockholm Brewery Conference Center in March 2018, will help the digital infrastructure industry deal with rising energy demands.

    Recycling generates profits

    “Through the adoption of various energy efficiency measures, the data center industry together with the energy utilities can build scalable, flexible, and green data centers which are dynamic in their infrastructure,” says Jan Sjögren, head of global ICT centers building operations at Ericsson who will be speaking at the event. “There is a great opportunity for the data center to recycle their waste heat, where we can potentially save on energy cost whilst generating profits as producers.”

    Across Europe, cities such as Amsterdam, Paris, Odense, Dresden and Stockholm are betting big on this approach, creating a new business model for the tech industry worldwide.

    “Rising energy consumption is of great concern to the data center industry. The trend towards utilizing clean energy will redefine future data center location strategies. With the rise of edge computing, we will see a network of distributed compute. Edge compute close to dense population areas, and large data centers in close proximity to power plants, with re-use of energy, will increasingly benefit operators,” says Tor Björn Minde, CEO at RISE SICS North AB, who will be speaking on this topic at the event.

    “Data centers seek solutions to increase energy efficiency and lower cost”

  23. Tomi Engdahl says:

    Formal commissioning event lauds utility-scale 179 MW solar power plant delivering 100% renewable energy to Switch data centers in Nevada

    The Switch Station 1 and Switch Station 2 solar power plants, with a combined generation capacity of 179 Megawatts, were formally recognized in a commissioning event held in Apex, Nevada last Dec. 11 as fully commissioned and in commercial operation. The celebration was attended by government officials, project owners and the energy offtaker.

    U.S. Senator Harry Reid, U.S Bureau of Land Management Nevada Director, John Ruhs, Clark County Commissioner Chairman Steve Sisolak, Nevada State Energy Office Director Angela Dykema and other federal, state and local leaders joined executives from Switch, EDF Renewable Energy, NV Energy, J.P. Morgan and First Solar, Inc. (NASDAQ: FSLR) to ceremonially “throw the switch” marking the delivery of solar power to Switch data centers in Las Vegas and Reno.

    Following an expedited permitting process, construction of the project took approximately 12 months, creating about 550,000 workhours, and had a total construction workforce of 1,300. Combined, the power plants cover about 1,797 acres and are comprised of 1,980,840 solar panels, the equivalent of 275 football fields, and 5,450,056 feet of cable, equaling 1,032 miles or the distance from Las Vegas to Seattle. The 179 MW of power generates enough clean solar energy to meet the consumption of 46,000 homes, displacing approximately 265,000 metric tons of carbon dioxide (CO2) annually, equal to taking about 52,000 cars off the road.
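    The “46,000 homes” equivalence can be sanity-checked from the 179 MW capacity figure. The capacity factor below is an assumption typical of utility-scale solar in the Nevada desert; the article states only the capacity and the homes figure:

```python
# Sanity check on the "enough for 46,000 homes" claim.
# capacity_factor is an assumed value (Nevada solar runs roughly 25-30%).
capacity_mw = 179
capacity_factor = 0.28
homes = 46_000

annual_mwh = capacity_mw * capacity_factor * 8760   # MWh generated per year
mwh_per_home = annual_mwh / homes                   # implied use per home
print(f"Annual generation: {annual_mwh:,.0f} MWh")
print(f"Implied use per home: {mwh_per_home:.1f} MWh/yr")
```

The implied per-home consumption comes out near 9.5 MWh per year, close to the US residential average, so the claim is internally consistent.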

    “Less than a decade ago, Nevada’s solar energy landscape was nonexistent”

  24. Tomi Engdahl says:

    How ‘wasteful’ data centers can become energy producers

    Turns out recovering “waste” heat from servers in data centers opens the door to innovation and efficiency. Writing in Consulting-Specifying Engineer, Bill Kosik of U.S. Services (Chicago) provides an overview of heat-recovery systems for commercial and institutional buildings, and offers discussion on the issues pertaining to design parameters, efficiency, and opportunities to recycle energy.

    The article discusses heat recovery from data centers, specifically demonstrating how data centers with water-cooled computer equipment will create highly effective energy-recycling processes.

    Can data centers become energy producers?
    Recovering “waste” heat from servers in data centers opens the door to innovation and efficiency.

    Research on heat recovery

    The U.S. Department of Energy (DOE) published a paper on the impacts of using heat recovery in all kinds of applications, primarily in the manufacturing sector. The DOE classified the different processes into three temperature groups: low (below 450°F), medium (450°F to 1,200°F), and high (above 1,200°F). The water temperatures in a data center will fall into the “low” category.

    While it might seem that applications in the “low” category will be less beneficial than those in the other categories, the DOE reports that applications in the 77⁰ to 300⁰F temperature range represent nearly 80% of the total estimated waste heat. (Cooling water exiting water-cooled computers will generally range from 100⁰ to 150⁰F.) Based on this, it is possible that the HVAC industry will see greater opportunity and develop products and approaches that make heat recovery in the data center more viable. Also, it is encouraging that the average simple payback for the 19 sample heat-recovery projects is 1.5 years.
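    The 1.5-year average simple payback quoted above is just installed cost divided by annual savings; the figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Simple payback = capital cost / annual savings.
# Both figures are hypothetical illustrations, not DOE data.
capital_cost = 150_000       # USD, installed cost of heat-recovery equipment
annual_savings = 100_000     # USD/yr, value of recovered energy

payback_years = capital_cost / annual_savings
print(f"Simple payback: {payback_years:.1f} years")   # 1.5 years
```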

    Types of systems

    There are many different methods of designing heat-recovery systems for commercial and institutional buildings. These methods cover a large variety of building types and HVAC system configurations. But as a starting point, the following examples demonstrate the core principles of energy-recovery systems:

    Runaround coil loop: This moves heat from the exhaust airstream to the supply airstream using two heat exchangers (finned coils) and a heat-transfer fluid, usually water or glycol. A pump is required to move the fluid through the system. A runaround coil loop is best used in situations when the airstreams are not adjacent to each other. If the temperature of the fluid heated by the exhaust air is greater than the outdoor air, energy can be recovered. However, to make this system financially feasible, the warm-water temperature must be considerably greater than the supply-air temperature. And long runs of piping add to material, installation, and energy costs (increased pump power). Generally, the economics of this system are more favorable for facilities located in colder climates that have considerable amounts of constant exhaust air.
    Packaged energy-recovery systems: In commercial and institutional buildings, packaged energy-recovery systems are the simplest and most effective heat-reclaim solution. Many manufacturers offer products that are designed specifically to work with typical air conditioning and ventilation loads for office environments, schools, libraries, etc. One common way of implementing this type of energy-recovery system is to use air handling units with integrated heat wheels, thermosyphons, air-to-air heat exchangers, or evaporatively cooled air-to-air heat exchangers. Each one of these configurations will have different heat-transfer and efficiency characteristics and must be analyzed based on the actual loads, climate, space available, maintenance characteristics, etc.
    Total energy wheel: Most energy-recovery devices transfer heat (sensible) energy only. An enthalpy wheel, or total energy wheel, exchanges both heat (sensible) energy and moisture (latent) energy between the supply and return airstreams. This type of wheel can be coated with a desiccant (moisture-absorbing) material and is rotated between the incoming fresh air and the return air. Heat and moisture in the return air are transferred to the wheel. During cold, dry outdoor conditions, the outside air passes over the rotating wheel, providing pre-conditioning, as the heat and moisture are transferred to the air stream. In the cooling mode, the outside air is precooled and the moisture content is lowered. These processes reduce the amount of energy required by gas-fired heating equipment and compressorized cooling equipment.
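    The runaround-coil economics described above hinge on the exhaust-to-outdoor temperature difference. A minimal sketch of that sizing logic, using the standard effectiveness relation Q = ε·C_min·ΔT; all numbers are illustrative assumptions, not from the article:

```python
# Sketch of runaround-coil heat recovery: recoverable heat equals loop
# effectiveness times the smaller capacity rate times the exhaust/outdoor
# temperature difference. Values are illustrative assumptions.
def recoverable_heat_w(effectiveness, airflow_kg_s, t_exhaust_c, t_outdoor_c):
    cp_air = 1006.0                    # J/(kg*K), specific heat of air
    c_min = airflow_kg_s * cp_air      # W/K, capacity rate (equal flows assumed)
    delta_t = t_exhaust_c - t_outdoor_c
    if delta_t <= 0:
        return 0.0                     # no recovery unless exhaust is warmer
    return effectiveness * c_min * delta_t

# Cold-climate example: 5 kg/s of 35 degC server exhaust, -5 degC outdoors,
# 50% loop effectiveness.
q = recoverable_heat_w(0.5, 5.0, 35.0, -5.0)
print(f"Recoverable heat: {q / 1000:.0f} kW")
```

The cold-climate bias of this system type is visible directly in the formula: the same loop in a mild climate, with a small ΔT, recovers far less heat for the same pumping cost.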

    Energy recovery in data centers

    Air-based energy recovery: Data centers that do not use water-cooled computers can still benefit from recovering energy. When the outdoor air is below about 75⁰F, very little (if any) mechanical cooling energy is needed.

    Water-based heat recovery: Many of the energy-recovery techniques discussed so far (both air- and water-based) are applicable to air handling systems using exhaust/return- and supply-air arrangements. However, depending on the facility type, it also is possible to recover energy from other processes that use water to cool the internal components of equipment and then transfer the heat to another process. Historically, instead of being recovered, heat from these processes has typically been rejected to the outside, losing recoverable energy.

    In data centers, water-cooled computers enable hydronic heat-recovery options. There are several options for cooling computers with water (some of these solutions also work with refrigerant in lieu of water):

    IT cabinets with integral chilled-water fan coil units (side cars).
    Rear-door chilled-water heat exchangers, with or without fans, mounted on the back of the IT cabinet.
    An IT cabinet and servers that have built-in thermal planes. When the servers are installed into the cabinet, the planes contact one another, enabling heat transfer from the server to the circulating cooling water.
    Internal heat sinks mounted directly to the server’s central processing unit (CPU), dual in-line memory modules (DIMM), and the graphics processing unit (GPU). The heat sinks are cooled directly by the water, without additional intermediate heat exchangers. Since the cooling water contacts the sources of heat directly (via heat sinks) the process is very efficient.

  25. Tomi Engdahl says:

    Renewable energy in data centers on the rise: IHS Markit

    IHS Markit’s Maggie Shillington, cloud and data centers analyst, says that data centers are behind between 2% and 3% of developed countries’ electricity consumption, with the electricity required for cooling being the most significant operational cost for most data centers. Many data center operators are turning to renewable energy sources to meet such needs, she says in a new report.

    Shillington notes that although onsite generation in data centers, including wind and solar power, is among the most popular renewable energy methods, offsite renewable energy sources, such as renewable energy suppliers and utility companies, are the most straightforward way to attain renewable energy for data centers. By eliminating the large upfront capital expense of producing onsite renewable energy, offsite generation also removes the geographical limitations of onsite approaches.

