<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Google Data Center Gallery</title>
	<atom:link href="http://www.epanorama.net/blog/2012/10/19/google-data-center-gallery/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.epanorama.net/blog/2012/10/19/google-data-center-gallery/</link>
	<description>All about electronics and circuit design</description>
	<lastBuildDate>Wed, 15 Apr 2026 22:25:15 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.9.14</generator>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/10/19/google-data-center-gallery/comment-page-1/#comment-1478439</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 10 Mar 2016 10:35:33 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=14178#comment-1478439</guid>
		<description><![CDATA[Google Joins Facebook&#039;s Open Compute Project 
http://hardware.slashdot.org/story/16/03/10/0038246/google-joins-facebooks-open-compute-project

Google has elected to open up some of its data center designs, which it has -- until now -- kept to itself. Google has joined the Open Compute Project, which was set up by Facebook to share low-cost, no-frills data center hardware specifications. Google will donate a specification for a rack that it designed for its own data centers. Google&#039;s first contribution will be &quot;a new rack specification that includes 48V power distribution and a new form factor to allow OCP racks to fit into our data centers,&quot;

Google joins Facebook’s Open Compute Project, will donate rack design
Google pulls back the curtain from some of its data center equipment.
http://arstechnica.com/information-technology/2016/03/google-joins-facebooks-open-compute-project-will-donate-rack-design/

Google today said it has joined the Open Compute Project (OCP), and the company will donate a specification for a rack that it designed for its own data centers.

Google&#039;s first contribution will be &quot;a new rack specification that includes 48V power distribution and a new form factor to allow OCP racks to fit into our data centers,&quot; the company said. Google will also be participating in this week&#039;s Open Compute Summit.

&quot;In 2009, we started evaluating alternatives to our 12V power designs that could drive better system efficiency and performance as our fleet demanded more power to support new high-performance computing products, such as high-power CPUs and GPUs,&quot; Google wrote. &quot;We kicked off the development of 48V rack power distribution in 2010, as we found it was at least 30 percent more energy-efficient and more cost-effective in supporting these higher-performance systems.&quot;

OCP Summit: Google joins and shares 48V tech
http://www.datacenterdynamics.com/power-cooling/ocp-summit-google-joins-and-shares-48v-tech/95835.article

Google has joined the Open Compute Project, and is contributing 48V DC power distribution technology to the group, which Facebook created to share efficient data center hardware designs.

Urs Hölzle, Google’s senior vice president of technology, made the surprise announcement at the end of a lengthy keynote session on the first day of the Open Compute event. The 48V direct current “shallow” data center rack has long been a part of Google’s mostly secret data center architecture, but the giant now wants to share it.

Hölzle said Google’s 48V rack specifications had increased its energy efficiency by 30 percent through eliminating the multiple transformers usually deployed in a data center.

Google is submitting the specification to OCP, and is now working with Facebook on a standard that can be built by vendors, and which Google and Facebook could both adopt, he said. 

“We have several years of experience with this,” said Hölzle, as Google has deployed 48V technology across large data centers.

As well as using a simplified power distribution, Google’s racks are shallower than the norm, because IT equipment can now be built in shorter units. Shallower racks mean more aisles can fit into a given floorspace. 

Google is joining OCP because there is no need for multiple 48V distribution standards, said Hölzle, explaining that open source is good for “non-core” technologies, where “everyone benefits from a standardized solution”.]]></description>
		<content:encoded><![CDATA[<p>Google Joins Facebook&#8217;s Open Compute Project<br />
<a href="http://hardware.slashdot.org/story/16/03/10/0038246/google-joins-facebooks-open-compute-project" rel="nofollow">http://hardware.slashdot.org/story/16/03/10/0038246/google-joins-facebooks-open-compute-project</a></p>
<p>Google has elected to open up some of its data center designs, which it has &#8212; until now &#8212; kept to itself. Google has joined the Open Compute Project, which was set up by Facebook to share low-cost, no-frills data center hardware specifications. Google will donate a specification for a rack that it designed for its own data centers. Google&#8217;s first contribution will be &#8220;a new rack specification that includes 48V power distribution and a new form factor to allow OCP racks to fit into our data centers,&#8221;</p>
<p>Google joins Facebook’s Open Compute Project, will donate rack design<br />
Google pulls back the curtain from some of its data center equipment.<br />
<a href="http://arstechnica.com/information-technology/2016/03/google-joins-facebooks-open-compute-project-will-donate-rack-design/" rel="nofollow">http://arstechnica.com/information-technology/2016/03/google-joins-facebooks-open-compute-project-will-donate-rack-design/</a></p>
<p>Google today said it has joined the Open Compute Project (OCP), and the company will donate a specification for a rack that it designed for its own data centers.</p>
<p>Google&#8217;s first contribution will be &#8220;a new rack specification that includes 48V power distribution and a new form factor to allow OCP racks to fit into our data centers,&#8221; the company said. Google will also be participating in this week&#8217;s Open Compute Summit.</p>
<p>&#8220;In 2009, we started evaluating alternatives to our 12V power designs that could drive better system efficiency and performance as our fleet demanded more power to support new high-performance computing products, such as high-power CPUs and GPUs,&#8221; Google wrote. &#8220;We kicked off the development of 48V rack power distribution in 2010, as we found it was at least 30 percent more energy-efficient and more cost-effective in supporting these higher-performance systems.&#8221;</p>
<p>OCP Summit: Google joins and shares 48V tech<br />
<a href="http://www.datacenterdynamics.com/power-cooling/ocp-summit-google-joins-and-shares-48v-tech/95835.article" rel="nofollow">http://www.datacenterdynamics.com/power-cooling/ocp-summit-google-joins-and-shares-48v-tech/95835.article</a></p>
<p>Google has joined the Open Compute Project, and is contributing 48V DC power distribution technology to the group, which Facebook created to share efficient data center hardware designs.</p>
<p>Urs Hölzle, Google’s senior vice president of technology, made the surprise announcement at the end of a lengthy keynote session on the first day of the Open Compute event. The 48V direct current “shallow” data center rack has long been a part of Google’s mostly secret data center architecture, but the giant now wants to share it.</p>
<p>Hölzle said Google’s 48V rack specifications had increased its energy efficiency by 30 percent through eliminating the multiple transformers usually deployed in a data center.</p>
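<p>Google’s 30 percent figure reflects, among other things, removing conversion stages; another intuition for why a higher bus voltage helps is that resistive loss in the rack power bus scales with the square of the current. A rough back-of-the-envelope sketch in Python (the rack power and bus resistance below are assumptions chosen for illustration, not Google’s or OCP’s numbers):</p>
<pre>
# Toy comparison of rack power distribution loss at 12 V vs 48 V.
# RACK_POWER_W and BUS_RESISTANCE_OHM are assumed values for the sketch.

def distribution_loss_watts(rack_power_w, bus_voltage_v, bus_resistance_ohm):
    """I^2 * R loss in the rack power bus for a given load."""
    current_a = rack_power_w / bus_voltage_v
    return current_a ** 2 * bus_resistance_ohm

RACK_POWER_W = 10000        # assumed rack load
BUS_RESISTANCE_OHM = 0.002  # assumed end-to-end bus resistance

for volts in (12, 48):
    loss = distribution_loss_watts(RACK_POWER_W, volts, BUS_RESISTANCE_OHM)
    print(f"{volts} V bus: {loss:.0f} W lost in distribution")
# Same load, 4x the voltage: one quarter of the current, one sixteenth of the loss.
</pre>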
<p>Google is submitting the specification to OCP, and is now working with Facebook on a standard that can be built by vendors, and which Google and Facebook could both adopt, he said. </p>
<p>“We have several years of experience with this,” said Hölzle, as Google has deployed 48V technology across large data centers.</p>
<p>As well as using a simplified power distribution, Google’s racks are shallower than the norm, because IT equipment can now be built in shorter units. Shallower racks mean more aisles can fit into a given floorspace. </p>
<p>Google is joining OCP because there is no need for multiple 48V distribution standards, said Hölzle, explaining that open source is good for “non-core” technologies, where “everyone benefits from a standardized solution”.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/10/19/google-data-center-gallery/comment-page-1/#comment-1436186</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 17 Sep 2015 18:33:20 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=14178#comment-1436186</guid>
		<description><![CDATA[Cade Metz / Wired: 	
Software needed to run every Google internet service spans 2B lines of source code, all in a single repository available to all 25K engineers  —  Google Is 2 Billion Lines of Code—And It&#039;s All in One Place  —  How big is Google?  We can answer that question in terms of revenue or stock price … 

Google Is 2 Billion Lines of Code—And It’s All in One Place
http://www.wired.com/2015/09/google-2-billion-lines-codeand-one-place/

How big is Google? We can answer that question in terms of revenue or stock price or customers or, well, metaphysical influence. But that’s not all. Google is, among other things, a vast empire of computer software. We can answer in terms of code.

Google’s Rachel Potvin came pretty close to an answer Monday at an engineering conference in Silicon Valley. She estimates that the software needed to run all of Google’s Internet services—from Google Search to Gmail to Google Maps—spans some 2 billion lines of code. By comparison, Microsoft’s Windows operating system—one of the most complex software tools ever built for a single computer, a project under development since the 1980s—is likely in the realm of 50 million lines.

So, building Google is roughly the equivalent of building the Windows operating system 40 times over.

The comparison is more apt than you might think. Much like the code that underpins Windows, the 2 billion lines that drive Google are one thing. They drive Google Search, Google Maps, Google Docs, Google+, Google Calendar, Gmail, YouTube, and every other Google Internet service, and yet, all 2 billion lines sit in a single code repository available to all 25,000 Google engineers. Within the company, Google treats its code like an enormous operating system. “Though I can’t prove it,” Potvin says, “I would guess this is the largest single repository in use anywhere in the world.”

Google is an extreme case. But its example shows how complex our software has grown in the Internet age—and how we’ve changed our coding tools and philosophies to accommodate this added complexity. Google’s enormous repository is available only to coders inside Google. But in a way, it’s analogous to GitHub, the public open source repository where engineers can share enormous amounts of code with the Internet at large. 

The two internet giants are working on an open source version control system that anyone can use to juggle code on a massive scale. It’s based on an existing system called Mercurial. “We’re attempting to see if we can scale Mercurial to the size of the Google repository,” Potvin says, indicating that Google is working hand-in-hand with programming guru Bryan O’Sullivan and others who help oversee coding work at Facebook.

That may seem extreme. After all, few companies juggle as much code as Google or Facebook do today. But in the near future, they will.]]></description>
		<content:encoded><![CDATA[<p>Cade Metz / Wired:<br />
Software needed to run every Google internet service spans 2B lines of source code, all in a single repository available to all 25K engineers  —  Google Is 2 Billion Lines of Code—And It&#8217;s All in One Place  —  How big is Google?  We can answer that question in terms of revenue or stock price … </p>
<p>Google Is 2 Billion Lines of Code—And It’s All in One Place<br />
<a href="http://www.wired.com/2015/09/google-2-billion-lines-codeand-one-place/" rel="nofollow">http://www.wired.com/2015/09/google-2-billion-lines-codeand-one-place/</a></p>
<p>How big is Google? We can answer that question in terms of revenue or stock price or customers or, well, metaphysical influence. But that’s not all. Google is, among other things, a vast empire of computer software. We can answer in terms of code.</p>
<p>Google’s Rachel Potvin came pretty close to an answer Monday at an engineering conference in Silicon Valley. She estimates that the software needed to run all of Google’s Internet services—from Google Search to Gmail to Google Maps—spans some 2 billion lines of code. By comparison, Microsoft’s Windows operating system—one of the most complex software tools ever built for a single computer, a project under development since the 1980s—is likely in the realm of 50 million lines.</p>
<p>So, building Google is roughly the equivalent of building the Windows operating system 40 times over.</p>
<p>The comparison is more apt than you might think. Much like the code that underpins Windows, the 2 billion lines that drive Google are one thing. They drive Google Search, Google Maps, Google Docs, Google+, Google Calendar, Gmail, YouTube, and every other Google Internet service, and yet, all 2 billion lines sit in a single code repository available to all 25,000 Google engineers. Within the company, Google treats its code like an enormous operating system. “Though I can’t prove it,” Potvin says, “I would guess this is the largest single repository in use anywhere in the world.”</p>
<p>Google is an extreme case. But its example shows how complex our software has grown in the Internet age—and how we’ve changed our coding tools and philosophies to accommodate this added complexity. Google’s enormous repository is available only to coders inside Google. But in a way, it’s analogous to GitHub, the public open source repository where engineers can share enormous amounts of code with the Internet at large. </p>
<p>The two internet giants are working on an open source version control system that anyone can use to juggle code on a massive scale. It’s based on an existing system called Mercurial. “We’re attempting to see if we can scale Mercurial to the size of the Google repository,” Potvin says, indicating that Google is working hand-in-hand with programming guru Bryan O’Sullivan and others who help oversee coding work at Facebook.</p>
<p>That may seem extreme. After all, few companies juggle as much code as Google or Facebook do today. But in the near future, they will.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/10/19/google-data-center-gallery/comment-page-1/#comment-1427724</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 20 Aug 2015 07:05:37 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=14178#comment-1427724</guid>
		<description><![CDATA[Google loses data as lightning strikes
http://www.bbc.com/news/technology-33989384

Google says data has been wiped from discs at one of its data centres in Belgium - after it was struck by lightning four times.

Some people have permanently lost access to their files as a result.

A number of disks damaged following the lightning strikes did, however, later become accessible.

Generally, data centres require more lightning protection than most other buildings.

While four successive strikes might sound highly unlikely, lightning does not need to repeatedly strike a building in exactly the same spot to cause additional damage.

Justin Gale, project manager for the lightning protection service Orion, said lightning could strike power or telecommunications cables connected to a building at a distance and still cause disruptions.

&quot;The cabling alone can be struck anything up to a kilometre away, bring [the shock] back to the data centre and fuse everything that&#039;s in it,&quot; he said.

In an online statement, Google said that data on just 0.000001% of disk space was permanently affected.

&quot;Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain,&quot; it said.

The company added it would continue to upgrade hardware and improve its response procedures to make future losses less likely.

Google Compute Engine Incident #15056
https://status.cloud.google.com/incident/compute/15056#5719570367119360

Google Compute Engine Persistent Disk issue in europe-west1-b 

From Thursday 13 August 2015 to Monday 17 August 2015, errors occurred on a small proportion of Google Compute Engine persistent disks in the europe-west1-b zone. The affected disks sporadically returned I/O errors to their attached GCE instances, and also typically returned errors for management operations such as snapshot creation. In a very small fraction of cases (less than 0.000001% of PD space in europe-west1-b), there was permanent data loss.

ROOT CAUSE:

At 09:19 PDT on Thursday 13 August 2015, four successive lightning strikes on the local utilities grid that powers our European datacenter caused a brief loss of power to storage systems which host disk capacity for GCE instances in the europe-west1-b zone. Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain. In almost all cases the data was successfully committed to stable storage, although manual intervention was required in order to restore the systems to their normal serving state. However, in a very few cases, recent writes were unrecoverable, leading to permanent data loss on the Persistent Disk.

This outage is wholly Google&#039;s responsibility.]]></description>
		<content:encoded><![CDATA[<p>Google loses data as lightning strikes<br />
<a href="http://www.bbc.com/news/technology-33989384" rel="nofollow">http://www.bbc.com/news/technology-33989384</a></p>
<p>Google says data has been wiped from discs at one of its data centres in Belgium &#8211; after it was struck by lightning four times.</p>
<p>Some people have permanently lost access to their files as a result.</p>
<p>A number of disks damaged following the lightning strikes did, however, later become accessible.</p>
<p>Generally, data centres require more lightning protection than most other buildings.</p>
<p>While four successive strikes might sound highly unlikely, lightning does not need to repeatedly strike a building in exactly the same spot to cause additional damage.</p>
<p>Justin Gale, project manager for the lightning protection service Orion, said lightning could strike power or telecommunications cables connected to a building at a distance and still cause disruptions.</p>
<p>&#8220;The cabling alone can be struck anything up to a kilometre away, bring [the shock] back to the data centre and fuse everything that&#8217;s in it,&#8221; he said.</p>
<p>In an online statement, Google said that data on just 0.000001% of disk space was permanently affected.</p>
<p>&#8220;Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain,&#8221; it said.</p>
<p>The company added it would continue to upgrade hardware and improve its response procedures to make future losses less likely.</p>
<p>Google Compute Engine Incident #15056<br />
<a href="https://status.cloud.google.com/incident/compute/15056#5719570367119360" rel="nofollow">https://status.cloud.google.com/incident/compute/15056#5719570367119360</a></p>
<p>Google Compute Engine Persistent Disk issue in europe-west1-b </p>
<p>From Thursday 13 August 2015 to Monday 17 August 2015, errors occurred on a small proportion of Google Compute Engine persistent disks in the europe-west1-b zone. The affected disks sporadically returned I/O errors to their attached GCE instances, and also typically returned errors for management operations such as snapshot creation. In a very small fraction of cases (less than 0.000001% of PD space in europe-west1-b), there was permanent data loss.</p>
<p>ROOT CAUSE:</p>
<p>At 09:19 PDT on Thursday 13 August 2015, four successive lightning strikes on the local utilities grid that powers our European datacenter caused a brief loss of power to storage systems which host disk capacity for GCE instances in the europe-west1-b zone. Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain. In almost all cases the data was successfully committed to stable storage, although manual intervention was required in order to restore the systems to their normal serving state. However, in a very few cases, recent writes were unrecoverable, leading to permanent data loss on the Persistent Disk.</p>
<p>This outage is wholly Google&#8217;s responsibility.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/10/19/google-data-center-gallery/comment-page-1/#comment-1410536</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 23 Jun 2015 08:03:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=14178#comment-1410536</guid>
		<description><![CDATA[Google Reveals Data Center Net
Openflow rising at search giant and beyond
http://www.eetimes.com/document.asp?doc_id=1326901&amp;

Software-defined networks based on the Openflow standard are beginning to gain traction with new chip, system and software products on display at the Open Networking Summit here. At the event, Google revealed its data center networks are already using Openflow and AT&amp;T said (in its own way) it will follow suit.

Showing support for the emerging approach, systems giants from Brocade to ZTE participated in SDN demos for carriers, data centers and enterprises running on the show floor.

SDN aims to cut through a rat’s nest of existing protocols implemented in existing proprietary ASICs and applications programming interfaces. If successful, it will let users more easily configure and manage network tasks using high-level programs run on x86 servers.

The rapid rise of mobile and cloud traffic is driving the need for SDN. For example, Google has seen traffic in its data centers rise 50-fold in the last six years, said Amin Vahdat, technical lead for networking at Google.

In a keynote here, Vahdat described Jupiter (above), Google’s data center network built internally to deal with the data flood. It uses 16x40G switch chips to create a 1.3 Petabit/second data center Clos network, and is the latest of five generations of SDN networks at the search giant.

“We are opening this up so engineers can take advantage of our work,” Vahdat said, declining to name any specific companies adopting its Jupiter architecture.]]></description>
		<content:encoded><![CDATA[<p>Google Reveals Data Center Net<br />
Openflow rising at search giant and beyond<br />
<a href="http://www.eetimes.com/document.asp?doc_id=1326901&#038;amp" rel="nofollow">http://www.eetimes.com/document.asp?doc_id=1326901&#038;amp</a>;</p>
<p>Software-defined networks based on the Openflow standard are beginning to gain traction with new chip, system and software products on display at the Open Networking Summit here. At the event, Google revealed its data center networks are already using Openflow and AT&amp;T said (in its own way) it will follow suit.</p>
<p>Showing support for the emerging approach, systems giants from Brocade to ZTE participated in SDN demos for carriers, data centers and enterprises running on the show floor.</p>
<p>SDN aims to cut through a rat’s nest of existing protocols implemented in existing proprietary ASICs and applications programming interfaces. If successful, it will let users more easily configure and manage network tasks using high-level programs run on x86 servers.</p>
<p>The rapid rise of mobile and cloud traffic is driving the need for SDN. For example, Google has seen traffic in its data centers rise 50-fold in the last six years, said Amin Vahdat, technical lead for networking at Google.</p>
<p>In a keynote here, Vahdat described Jupiter (above), Google’s data center network built internally to deal with the data flood. It uses 16x40G switch chips to create a 1.3 Petabit/second data center Clos network, and is the latest of five generations of SDN networks at the search giant.</p>
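<p>For a rough sense of the arithmetic behind a figure like 1.3 Petabit/second, the aggregate capacity of a Clos fabric is essentially ports multiplied by line rate, stitched together across stages. A toy sketch with made-up port counts (this is not Google’s published Jupiter topology):</p>
<pre>
# Toy leaf-spine Clos arithmetic; every parameter is an assumption for
# illustration and none of this is Google's published Jupiter design.

LINE_RATE_GBPS = 40   # per-port rate, matching the 40G chips mentioned above
CHIP_RADIX = 16       # ports per switch chip (assumed)
SPINE_CHIPS = 64      # chips in the spine tier (assumed)

# Spine-tier capacity bounds leaf-to-leaf traffic in a nonblocking fabric.
spine_capacity_gbps = SPINE_CHIPS * CHIP_RADIX * LINE_RATE_GBPS
print(f"Spine capacity: {spine_capacity_gbps / 1000:.1f} Tb/s")

# Scaling the same arithmetic shows why petabit-class fabrics need tens of
# thousands of ports stitched together across multiple stages.
ports_for_1_3_pbps = 1.3e6 / LINE_RATE_GBPS   # 1.3 Pb/s expressed in Gb/s
print(f"40G ports needed for 1.3 Pb/s aggregate: {ports_for_1_3_pbps:.0f}")
</pre>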
<p>“We are opening this up so engineers can take advantage of our work,” Vahdat said, declining to name any specific companies adopting its Jupiter architecture.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/10/19/google-data-center-gallery/comment-page-1/#comment-1408905</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 18 Jun 2015 08:51:46 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=14178#comment-1408905</guid>
		<description><![CDATA[Revealed: The Secret Gear Connecting Google’s Online Empire
http://www.wired.com/2015/06/google-reveals-secret-gear-connects-online-empire/

Three-and-a-half years ago, a strange computing device appeared at an office building in the tiny farmland town of Shelby, Iowa.

It was wide and thin and flat, kind of like a pizza box. On one side, there were long rows of holes where you could plug in dozens of cables. On the other, a label read “Pluto Switch.” But no one was quite sure what it was. The cable connectors looked a little strange. The writing on the back was in Finnish. 

It was a networking switch, a way of moving digital data across the massive computing centers that underpin the Internet. And it belonged to Google.

Google runs a data center not far from Shelby, and apparently, someone had sent the switch to the wrong place. After putting two and two together, those IT guys shipped it back to Google and promptly vanished from the ‘net. But the information they posted to that online discussion forum, including several photos of the switch, opened a small window into an operation with implications for the Internet as a whole—an operation Google had never discussed in public. For several years, rather than buying traditional equipment from the likes of Cisco, Ericsson, Dell, and HP, Google had designed specialized networking gear for the engine room of its rapidly expanding online empire. Photos of the mysterious Pluto Switch provided a glimpse of the company’s handiwork.

Seeing such technology as a competitive advantage, Google continued to keep its wider operation under wraps. But it did reveal how it handled the networking links between its data centers, and now, as part of a larger effort to share its fundamental technologies with the world at large, it’s lifting the curtain inside its data centers as well.

According to Vahdat, Google started designing its own gear in 2004, under the aegis of a project called Firehose, and by 2005 or 2006, it had deployed a version of this hardware in at least a handful of data centers. The company not only designed “top-of-rack switches” along the lines of the Pluto Switch that turned up in Iowa. It created massive “cluster switches” that tied the wider network together. It built specialized “controller” software for running all this hardware. It even built its own routing protocol, dubbed Firehose, for efficiently moving data across the network. “We couldn’t buy the hardware we needed to build a network of the size and speed we needed to build,” Vahdat says. “It just didn’t exist.”

The aim, Vahdat says, was twofold. A decade ago, the company’s network had grown so large, spanning so many machines, it needed a more efficient way of shuttling data between them all. Traditional gear wasn’t up to the task. But it also needed a way of cutting costs. Traditional gear was too expensive. So, rather than construct massively complex switches from scratch, it strung together enormous numbers of cheap commodity chips.

Google’s online empire is unusual. It is likely the largest on earth. But as the rest of the Internet expands, others are facing similar problems. Facebook has designed a similar breed of networking hardware and software. And so many other online operations are moving in the same direction, including Amazon and Microsoft. AT&amp;T, one of the world’s largest Internet providers, is now rebuilding its network in similar ways. “We’re not talking about it,” says Scott Mair, senior vice president of technology planning and engineering at AT&amp;T. “We’re doing it.”

Unlike Google and Facebook, the average online company isn’t likely to build its own hardware and software. But so many startups are now offering commercial technology that mimics The Google Way.

Basically, they’re fashioning software that lets companies build complex networks atop cheap “bare metal” switches, moving the complexity out of the hardware and into the software. People call this software-defined networking, or SDN, and it provides a more nimble way of building, expanding, and reshaping computer networks.

“It gives you agility, and it gives you scale,” says Mark Russinovich, who has helped build similar software at Microsoft. “If you don’t have this, you’re down to programming individual devices—rather than letting a smart controller do it for you.”

It’s a movement that’s overturning the business models of traditional network vendors such as Cisco, Dell, and HP. Vahdat says that Google now designs 100 percent of the networking hardware used inside its data centers, using contract manufacturers in Asia and other locations to build the actual equipment. That means it’s not buying from Cisco, traditionally the world’s largest networking vendor. But for the Ciscos of the world, the bigger threat is that so many others are moving down the same road as Google.]]></description>
		<content:encoded><![CDATA[<p>Revealed: The Secret Gear Connecting Google’s Online Empire<br />
<a href="http://www.wired.com/2015/06/google-reveals-secret-gear-connects-online-empire/" rel="nofollow">http://www.wired.com/2015/06/google-reveals-secret-gear-connects-online-empire/</a></p>
<p>Three-and-a-half years ago, a strange computing device appeared at an office building in the tiny farmland town of Shelby, Iowa.</p>
<p>It was wide and thin and flat, kind of like a pizza box. On one side, there were long rows of holes where you could plug in dozens of cables. On the other, a label read “Pluto Switch.” But no one was quite sure what it was. The cable connectors looked a little strange. The writing on the back was in Finnish. </p>
<p>It was a networking switch, a way of moving digital data across the massive computing centers that underpin the Internet. And it belonged to Google.</p>
<p>Google runs a data center not far from Shelby, and apparently, someone had sent the switch to the wrong place. After putting two and two together, those IT guys shipped it back to Google and promptly vanished from the ‘net. But the information they posted to that online discussion forum, including several photos of the switch, opened a small window into an operation with implications for the Internet as a whole—an operation Google had never discussed in public. For several years, rather than buying traditional equipment from the likes of Cisco, Ericsson, Dell, and HP, Google had designed specialized networking gear for the engine room of its rapidly expanding online empire. Photos of the mysterious Pluto Switch provided a glimpse of the company’s handiwork.</p>
<p>Seeing such technology as a competitive advantage, Google continued to keep its wider operation under wraps. But it did reveal how it handled the networking links between its data centers, and now, as part of a larger effort to share its fundamental technologies with the world at large, it’s lifting the curtain inside its data centers as well.</p>
<p>According to Vahdat, Google started designing its own gear in 2004, under the aegis of a project called Firehose, and by 2005 or 2006, it had deployed a version of this hardware in at least a handful of data centers. The company not only designed “top-of-rack switches” along the lines of the Pluto Switch that turned up in Iowa. It created massive “cluster switches” that tied the wider network together. It built specialized “controller” software for running all this hardware. It even built its own routing protocol, dubbed Firehose, for efficiently moving data across the network. “We couldn’t buy the hardware we needed to build a network of the size and speed we needed to build,” Vahdat says. “It just didn’t exist.”</p>
<p>The aim, Vahdat says, was twofold. A decade ago, the company’s network had grown so large, spanning so many machines, it needed a more efficient way of shuttling data between them all. Traditional gear wasn’t up to the task. But it also needed a way of cutting costs. Traditional gear was too expensive. So, rather than construct massively complex switches from scratch, it strung together enormous numbers of cheap commodity chips.</p>
<p>Google’s online empire is unusual. It is likely the largest on earth. But as the rest of the Internet expands, others are facing similar problems. Facebook has designed a similar breed of networking hardware and software. And so many other online operations are moving in the same direction, including Amazon and Microsoft. AT&amp;T, one of the world’s largest Internet providers, is now rebuilding its network in similar ways. “We’re not talking about it,” says Scott Mair, senior vice president of technology planning and engineering at AT&amp;T. “We’re doing it.”</p>
<p>Unlike Google and Facebook, the average online company isn’t likely to build its own hardware and software. But so many startups are now offering commercial technology that mimics The Google Way.</p>
<p>Basically, they’re fashioning software that lets companies build complex networks atop cheap “bare metal” switches, moving the complexity out of the hardware and into the software. People call this software-defined networking, or SDN, and it provides a more nimble way of building, expanding, and reshaping computer networks.</p>
<p>“It gives you agility, and it gives you scale,” says Mark Russinovich, who has helped build similar software at Microsoft. “If you don’t have this, you’re down to programming individual devices—rather than letting a smart controller do it for you.”</p>
<p>It’s a movement that’s overturning the business models of traditional network vendors such as Cisco, Dell, and HP. Vahdat says that Google now designs 100 percent of the networking hardware used inside its data centers, using contract manufacturers in Asia and other locations to build the actual equipment. That means it’s not buying from Cisco, traditionally the world’s largest networking vendor. But for the Ciscos of the world, the bigger threat is that so many others are moving down the same road as Google.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/10/19/google-data-center-gallery/comment-page-1/#comment-1328525</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 13 Jan 2015 16:25:44 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=14178#comment-1328525</guid>
		<description><![CDATA[Inside a Google data center
http://www.cablinginstall.com/articles/2015/01/inside-google-datacenter.html

Below, Joe Kava, VP of Google&#039;s data center operations, gives a tour inside a Google data center, and shares details about the security, sustainability and the core architecture of Google&#039;s infrastructure. &quot;A data center is the brains of the Internet, the engine of the Internet,&quot; says Kava.

Inside a Google data center VIDEO
https://www.youtube.com/watch?v=XZmGGAbHqa0]]></description>
		<content:encoded><![CDATA[<p>Inside a Google data center<br />
<a href="http://www.cablinginstall.com/articles/2015/01/inside-google-datacenter.html" rel="nofollow">http://www.cablinginstall.com/articles/2015/01/inside-google-datacenter.html</a></p>
<p>Below, Joe Kava, VP of Google&#8217;s data center operations, gives a tour inside a Google data center, and shares details about the security, sustainability and the core architecture of Google&#8217;s infrastructure. &#8220;A data center is the brains of the Internet, the engine of the Internet,&#8221; says Kava.</p>
<p>Inside a Google data center VIDEO<br />
<a href="https://www.youtube.com/watch?v=XZmGGAbHqa0" rel="nofollow">https://www.youtube.com/watch?v=XZmGGAbHqa0</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/10/19/google-data-center-gallery/comment-page-1/#comment-1307141</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 04 Dec 2014 08:25:33 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=14178#comment-1307141</guid>
		<description><![CDATA[Google Open Sources Its Secret Weapon in Cloud Computing
http://www.wired.com/2014/06/google-kubernetes/

When Google engineers John Sirois, Travis Crawford, and Bill Farner left the internet giant and went to work for Twitter, they missed Borg.

Borg was the sweeping software system that managed the thousands of computer servers underpinning Google’s online empire. With Borg, Google engineers could instantly grab enormous amounts of computing power from across the company’s data centers and apply it to whatever they were building–whether it was Google Search or Gmail or Google Maps. As Sirois, Crawford, and Farner created new web services at Twitter, they longed for the convenience of this massive computing engine.

Unfortunately, Borg was one of those creations Google was loath to share with the outside world–a technological trade secret it saw as an important competitive advantage. In the end, urged by that trio of engineers, Twitter went so far as to build its own version of the tool. But now, the next wave of internet companies has another way of expanding their operations to Google-like sizes. This morning, Google open sourced a software tool that works much like Borg, freely sharing this new creation with the world at large.

‘It’s a way of stitching together a collection of machines into, basically, a big computer.’


Google is releasing Kubernetes as a way of encouraging people to use its cloud computing services, known as Google Compute Engine and Google App Engine.

But the new tool isn’t limited to the Google universe. It also lets you oversee machines running on competing cloud services–from Amazon, say, or Rackspace–as well as inside private data centers. Yes, today’s cloud services already give you quick access to large numbers of virtual machines, but with Kubernetes, Google aims to help companies pool processing power more effectively from a wide variety of places. “It’s a way of stitching together a collection of machines into, basically, a big computer,” says Craig Mcluckie, a product manager for Google’s cloud services.]]></description>
		<content:encoded><![CDATA[<p>Google Open Sources Its Secret Weapon in Cloud Computing<br />
<a href="http://www.wired.com/2014/06/google-kubernetes/" rel="nofollow">http://www.wired.com/2014/06/google-kubernetes/</a></p>
<p>When Google engineers John Sirois, Travis Crawford, and Bill Farner left the internet giant and went to work for Twitter, they missed Borg.</p>
<p>Borg was the sweeping software system that managed the thousands of computer servers underpinning Google’s online empire. With Borg, Google engineers could instantly grab enormous amounts of computing power from across the company’s data centers and apply it to whatever they were building–whether it was Google Search or Gmail or Google Maps. As Sirois, Crawford, and Farner created new web services at Twitter, they longed for the convenience of this massive computing engine.</p>
<p>Unfortunately, Borg was one of those creations Google was loath to share with the outside world–a technological trade secret it saw as an important competitive advantage. In the end, urged by that trio of engineers, Twitter went so far as to build its own version of the tool. But now, the next wave of internet companies has another way of expanding their operations to Google-like sizes. This morning, Google open sourced a software tool that works much like Borg, freely sharing this new creation with the world at large.</p>
<p>‘It’s a way of stitching together a collection of machines into, basically, a big computer.’</p>
<p>Google is releasing Kubernetes as a way of encouraging people to use its cloud computing services, known as Google Compute Engine and Google App Engine.</p>
<p>But the new tool isn’t limited to the Google universe. It also lets you oversee machines running on competing cloud services–from Amazon, say, or Rackspace–as well as inside private data centers. Yes, today’s cloud services already give you quick access to large numbers of virtual machines, but with Kubernetes, Google aims to help companies pool processing power more effectively from a wide variety of places. “It’s a way of stitching together a collection of machines into, basically, a big computer,” says Craig Mcluckie, a product manager for Google’s cloud services.</p>
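<p>The “big computer” idea comes down to a scheduler that places each workload on whichever machine has room for it. A toy first-fit placement loop, purely for illustration (this is not Borg’s or Kubernetes’ actual scheduling algorithm, and the numbers are made up):</p>
<pre>
# Toy first-fit placement, only to illustrate the cluster-scheduling idea;
# this is not Borg's or Kubernetes' actual scheduler. All numbers are made up.

nodes = {"node-a": 16.0, "node-b": 8.0, "node-c": 8.0}   # free CPU cores per machine
tasks = [("web", 4.0), ("cache", 2.0), ("batch", 10.0), ("db", 6.0)]  # (name, cores needed)

placements = {}
for task_name, cores_needed in tasks:
    for node_name, free_cores in nodes.items():
        if free_cores >= cores_needed:           # first machine with enough room wins
            nodes[node_name] = free_cores - cores_needed
            placements[task_name] = node_name
            break
    else:
        placements[task_name] = "unschedulable"  # no machine had the capacity

print(placements)  # e.g. {'web': 'node-a', 'cache': 'node-a', 'batch': 'node-a', 'db': 'node-b'}
</pre>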
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/10/19/google-data-center-gallery/comment-page-1/#comment-633492</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 30 Jun 2014 14:37:37 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=14178#comment-633492</guid>
		<description><![CDATA[Google’s Innovative Data Center Cooling Techniques
http://www.strategicdatacenter.com/52/google%E2%80%99s-innovative-data-center-cooling-techniques

In years gone by, data centers kept temperatures at 68 to 70 degrees Fahrenheit and within narrow humidity ranges. According to a recent article in CIO.com, the American Society of Heating, Refrigeration and Air Conditioning Engineers (ASHRAE), the organization that issues de facto standards for data center climate, has raised the top of its recommended temperature range to 80.6°F and increased the peak humidity threshold as well.

This shift to higher temperatures and humidity levels has pushed manufacturers to expand the operating climate range of their equipment. It’s also made data center managers think about employing innovative cooling methods, such as using outside air, water and evaporative techniques.

Google drives down the cost and environmental impact of running data centers by designing and building its own facilities. The company employs “free-cooling” techniques like using outside air or reused water for cooling. Google claims its data centers use 50 percent less energy than the typical data center and are among the most efficient in the world.]]></description>
		<content:encoded><![CDATA[<p>Google’s Innovative Data Center Cooling Techniques<br />
<a href="http://www.strategicdatacenter.com/52/google%E2%80%99s-innovative-data-center-cooling-techniques" rel="nofollow">http://www.strategicdatacenter.com/52/google%E2%80%99s-innovative-data-center-cooling-techniques</a></p>
<p>In years gone by, data centers kept temperatures at 68 to 70 degrees Fahrenheit and within narrow humidity ranges. According to a recent article in CIO.com, the American Society of Heating, Refrigeration and Air Conditioning Engineers (ASHRAE), the organization that issues de facto standards for data center climate, has raised the top of its recommended temperature range to 80.6°F and increased the peak humidity threshold as well.</p>
<p>This shift to higher temperatures and humidity levels has pushed manufacturers to expand the operating climate range of their equipment. It’s also made data center managers think about employing innovative cooling methods, such as using outside air, water and evaporative techniques.</p>
<p>Google drives down the cost and environmental impact of running data centers by designing and building its own facilities. The company employs “free-cooling” techniques like using outside air or reused water for cooling. Google claims its data centers use 50 percent less energy than the typical data center and are among the most efficient in the world.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/10/19/google-data-center-gallery/comment-page-1/#comment-580779</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 17 Jun 2014 08:15:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=14178#comment-580779</guid>
		<description><![CDATA[Google Cloud Platform Gets SSD Persistent Disks And HTTP Load Balancing
http://techcrunch.com/2014/06/16/google-cloud-platform-gets-ssd-persistent-disks-and-http-load-balancing/

Google’s I/O developer conference may just be a few days away, but that hasn’t stopped the company from launching a couple of new features for its Cloud Platform ahead of the event. As Google announced today, the Cloud Platform is getting two features that have long been on developers’ wish lists: HTTP load balancing and SSD-based persistent storage. Both of these features are now in limited preview.

Developers whose applications need the high number of input/output operations per second SSDs make possible can now get this feature for a flat fee of $0.325 per gigabyte per month. It’s worth noting that this is significantly more expensive than the $0.04 Google charges for regular persistent storage. Unlike Amazon Web Services, Google does not charge any extra fees for the actual input/output requests.

Amazon also offers SSD-based EC2 instances, but those do not feature any persistent storage.

While the standard persistent disks feature speeds of about 0.3 read and 1.5 write IOPS/GB, the SSD-based service gets up to 30 read and write IOPS/GB. 

As for the HTTP load balancing feature, Google says it can scale up to more than 1 million requests per second without any warm-up time. It supports content-based routing, and Google especially notes that users can load balance across different regions.

As of now, however, HTTP load balancing does not support the SSL protocol. Developers who want to use this feature will have to use Google’s existing protocol-based network load balancing system.]]></description>
		<content:encoded><![CDATA[<p>Google Cloud Platform Gets SSD Persistent Disks And HTTP Load Balancing<br />
<a href="http://techcrunch.com/2014/06/16/google-cloud-platform-gets-ssd-persistent-disks-and-http-load-balancing/" rel="nofollow">http://techcrunch.com/2014/06/16/google-cloud-platform-gets-ssd-persistent-disks-and-http-load-balancing/</a></p>
<p>Google’s I/O developer conference may just be a few days away, but that hasn’t stopped the company from launching a couple of new features for its Cloud Platform ahead of the event. As Google announced today, the Cloud Platform is getting two features that have long been on developers’ wish lists: HTTP load balancing and SSD-based persistent storage. Both of these features are now in limited preview.</p>
<p>Developers whose applications need the high number of input/output operations per second SSDs make possible can now get this feature for a flat fee of $0.325 per gigabyte per month. It’s worth noting that this is significantly more expensive than the $0.04 Google charges for regular persistent storage. Unlike Amazon Web Services, Google does not charge any extra fees for the actual input/output requests.</p>
<p>Amazon also offers SSD-based EC2 instances, but those do not feature any persistent storage.</p>
<p>While the standard persistent disks feature speeds of about 0.3 read and 1.5 write IOPS/GB, the SSD-based service gets up to 30 read and write IOPS/GB. </p>
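<p>Putting the quoted prices and IOPS rates together for a hypothetical 500 GB disk (the disk size is an assumption chosen for illustration):</p>
<pre>
# Worked example using the per-GB prices and IOPS rates quoted above.
# The 500 GB disk size is an assumption chosen for illustration.

DISK_GB = 500

tiers = {
    "standard PD": {"usd_per_gb_month": 0.04,  "read_iops_per_gb": 0.3, "write_iops_per_gb": 1.5},
    "SSD PD":      {"usd_per_gb_month": 0.325, "read_iops_per_gb": 30,  "write_iops_per_gb": 30},
}

for name, t in tiers.items():
    cost = DISK_GB * t["usd_per_gb_month"]
    reads = DISK_GB * t["read_iops_per_gb"]
    writes = DISK_GB * t["write_iops_per_gb"]
    print(f"{name}: ${cost:.2f}/month, up to {reads:.0f} read / {writes:.0f} write IOPS")
</pre>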
<p>As for the HTTP load balancing feature, Google says it can scale up to more than 1 million requests per second without any warm-up time. It supports content-based routing, and Google especially notes that users can load balance across different regions.</p>
<p>As of now, however, HTTP load balancing does not support the SSL protocol. Developers who want to use this feature will have to use Google’s existing protocol-based network load balancing system.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/10/19/google-data-center-gallery/comment-page-1/#comment-553111</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Wed, 11 Jun 2014 06:09:19 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=14178#comment-553111</guid>
		<description><![CDATA[Machine learning optimizes Google data centers&#039; PUE
http://www.cablinginstall.com/articles/2014/05/machine-learning-google-datacenters-pue.html

Did you know that Google has been calculating its data centers’ PUE every five minutes for over five years, along with 19 different variables such as cooling tower speed, processing water temperature, pump speed, outside air temperature, humidity, etc.?

&quot;neural networks are the latest way Google is slashing energy consumption from its data centers&quot;]]></description>
		<content:encoded><![CDATA[<p>Machine learning optimizes Google data centers&#8217; PUE<br />
<a href="http://www.cablinginstall.com/articles/2014/05/machine-learning-google-datacenters-pue.html" rel="nofollow">http://www.cablinginstall.com/articles/2014/05/machine-learning-google-datacenters-pue.html</a></p>
<p>Did you know that Google has been calculating its data centers’ PUE every five minutes for over five years, along with 19 different variables such as cooling tower speed, processing water temperature, pump speed, outside air temperature, humidity, etc.?</p>
<p>&#8220;neural networks are the latest way Google is slashing energy consumption from its data centers&#8221;</p>
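<p>PUE itself is simply total facility energy divided by the energy delivered to IT equipment. The modeling idea described here, predicting PUE from sensor variables, can be sketched with a generic regressor on synthetic data; this is only an illustration, not Google’s model, data, or variable list:</p>
<pre>
# Sketch of the modelling idea only: fit a small neural-network regressor to
# predict PUE (total facility power / IT power) from a few sensor readings.
# The data is synthetic and the model is generic scikit-learn, not Google's.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
outside_temp_c = rng.uniform(5, 35, n)
cooling_tower_speed = rng.uniform(0.2, 1.0, n)
it_load_mw = rng.uniform(5, 20, n)

# Made-up relationship: warmer weather and slower cooling towers push PUE up.
pue = (1.10 + 0.004 * outside_temp_c - 0.05 * cooling_tower_speed
       + 0.002 * it_load_mw + rng.normal(0, 0.01, n))

X = np.column_stack([outside_temp_c, cooling_tower_speed, it_load_mw])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
model.fit(X, pue)

# Predict PUE at one hypothetical operating point.
print(model.predict([[30.0, 0.5, 12.0]]))
</pre>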
]]></content:encoded>
	</item>
</channel>
</rss>
