<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Amazon Cloud size and details</title>
	<atom:link href="http://www.epanorama.net/blog/2012/03/16/amazon-cloud-size-and-details/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.epanorama.net/blog/2012/03/16/amazon-cloud-size-and-details/</link>
	<description>All about electronics and circuit design</description>
	<lastBuildDate>Sat, 11 Apr 2026 08:03:43 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.9.14</generator>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/03/16/amazon-cloud-size-and-details/comment-page-2/#comment-1662613</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 03 Dec 2019 20:21:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=9549#comment-1662613</guid>
		<description><![CDATA[how global cloud platforms will offer the 1ms latency for 5G. AWS Wavelength promises to do it by extending your VPCs into Wavelength Zones, where you can run local EC2 instances and EBS volumes at the edge.

Announcing AWS Wavelength for delivering ultra-low latency applications for 5G
https://aws.amazon.com/about-aws/whats-new/2019/12/announcing-aws-wavelength-delivering-ultra-low-latency-applications-5g/

AWS Wavelength embeds AWS compute and storage services at the edge of telecommunications providers’ 5G networks, and provides seamless access to the breadth of AWS services in the region. AWS Wavelength enables you to build applications that serve mobile end-users and devices with single-digit millisecond latencies over 5G networks, like game and live video streaming, machine learning inference at the edge, and augmented and virtual reality.  

AWS Wavelength brings AWS services to the edge of the 5G network, minimizing the network hops and latency to connect to an application from a 5G device. Wavelength delivers a consistent developer experience across multiple 5G networks around the world.]]></description>
		<content:encoded><![CDATA[<p>how global cloud platforms will offer the 1ms latency for 5G. AWS Wavelength promises to do it by extending your VPCs into Wavelength Zones, where you can run local EC2 instances and EBS volumes at the edge.</p>
<p>Announcing AWS Wavelength for delivering ultra-low latency applications for 5G<br />
<a href="https://aws.amazon.com/about-aws/whats-new/2019/12/announcing-aws-wavelength-delivering-ultra-low-latency-applications-5g/" rel="nofollow">https://aws.amazon.com/about-aws/whats-new/2019/12/announcing-aws-wavelength-delivering-ultra-low-latency-applications-5g/</a></p>
<p>AWS Wavelength embeds AWS compute and storage services at the edge of telecommunications providers’ 5G networks, and provides seamless access to the breadth of AWS services in the region. AWS Wavelength enables you to build applications that serve mobile end-users and devices with single-digit millisecond latencies over 5G networks, like game and live video streaming, machine learning inference at the edge, and augmented and virtual reality.  </p>
<p>AWS Wavelength brings AWS services to the edge of the 5G network, minimizing the network hops and latency to connect to an application from a 5G device. Wavelength delivers a consistent developer experience across multiple 5G networks around the world.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/03/16/amazon-cloud-size-and-details/comment-page-2/#comment-1438172</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 24 Sep 2015 09:31:27 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=9549#comment-1438172</guid>
		<description><![CDATA[Inside Amazon&#039;s Cloud Computing Infrastructure
http://slashdot.org/story/15/09/23/229253/inside-amazons-cloud-computing-infrastructure

As Sunday&#039;s outage demonstrates, the Amazon Web Services cloud is critical to many of its more than 1 million customers. Data Center Frontier looks at Amazon&#039;s cloud infrastructure, and how it builds its data centers. The company&#039;s global network includes at least 30 data centers, each typically housing 50,000 to 80,000 servers. &quot;We really like to keep the size to less than 100,000 servers per data center,&quot; 

Like Google and Facebook, Amazon also builds its own custom server, storage and networking hardware, working with Intel to produce processors 

Inside Amazon’s Cloud Computing Infrastructure
http://datacenterfrontier.com/inside-amazon-cloud-computing-infrastructure/

This week we’ll look at Amazon’s mighty cloud infrastructure, including how it builds its data centers and where they live (and why).

Lifting the Veil of Secrecy … A Bit

Amazon has historically been secretive about its data center operations, disclosing far less about its infrastructure than other hyperscale computing leaders such as Google, Facebook and Microsoft. That has begun to change in the last several years, as Amazon executives Werner Vogels and James Hamilton have opened up about the company’s data center operations at events for the developer community.

“There’s been quite a few requests from customers asking us to talk a bit about the physical layout of our data centers,” said Werner Vogels, VP and Chief Technology Officer for Amazon, in a presentation at the AWS Summit Tel Aviv in July. “We never talk that much about it. So we wanted to lift up the secrecy around our networking and data centers.”

A key goal of these sessions is to help developers understand Amazon’s philosophy on redundancy and uptime. The company organizes its infrastructure into 11 regions, each containing a cluster of data centers. Each region contains multiple Availability Zones, providing customers with the option to mirror or back up key IT assets to avoid downtime. The “ripple effect” of outages whenever AWS experiences problems indicates that this feature remains underutilized.

Scale Drives Platform Investment

In its most recent quarter, the revenue for Amazon Web Services was growing at an 81 percent annual rate. That may not translate directly into a similar rate of infrastructure growth, but one thing is certain: Amazon is adding servers, storage and new data centers at an insane pace.

“Every day, Amazon adds enough new server capacity to support all of Amazon’s global infrastructure when it was a $7 billion annual revenue enterprise,”

Amazon’s data center strategy is relentlessly focused on reducing cost, according to Vogels, who noted that the company has reduced prices 49 times since launching Amazon Web Services in 2006.

“We do a lot of infrastructure innovation in our data centers to drive cost down,” Vogels said. “We see this as a high-volume, low-margin business, and we’re more than happy to keep the margins where they are. And then if we have a lower cost base, we’ll hand money back to you.”

A key decision in planning and deploying cloud capacity is how large a data center to build. Amazon’s huge scale offers advantages in both cost and operations. Hamilton said most Amazon data centers house between 50,000 and 80,000 servers, with a power capacity of between 25 and 30 megawatts.

“It’s undesirable to have data centers that are larger than that due to what we call the ‘blast radius’,” said Vogels, noting the industry term for assessing risk based on a single destructive regional event. “A data center is still a unit of failure. The larger you built your data centers, the larger the impact such a failure could have. We really like to keep the size of data centers to less than 100,000 servers per data center.”

So how many servers does Amazon Web Services run? The descriptions by Hamilton and Vogels suggest the number is at least 1.5 million. Figuring out the upper end of the range is more difficult, but it could be as high as 5.6 million, according to calculations by Timothy Prickett Morgan at The Platform.
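The lower bound follows directly from the figures quoted earlier in the article; a quick back-of-the-envelope check (using the article's numbers of at least 30 data centers at 50,000 to 80,000 servers each, not any official AWS disclosure):

```python
# Rough server-count estimate from the figures quoted above:
# at least 30 data centers, each housing 50,000 to 80,000 servers.
data_centers = 30           # "at least 30 data centers"
servers_low = 50_000        # lower end of the per-facility range
servers_high = 80_000       # upper end of the per-facility range

lower_bound = data_centers * servers_low    # 1,500,000 servers
upper_bound = data_centers * servers_high   # 2,400,000 servers

print(f"at least {lower_bound:,}, up to {upper_bound:,} across known facilities")
```

The 5.6 million upper estimate cited above presumably assumes substantially more facilities than the roughly 30 confirmed at the time.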

Amazon leases buildings from a number of wholesale data center providers

An interesting element of Amazon’s approach to data center development is that it has the ability to design and build its own power substations. That specialization is driven by the need for speed, rather than cost management.

“You save a tiny amount,” said Hamilton. “What’s useful is that we can build them much more quickly. Our growth rate is not a normal rate for utility companies. We did this because we had to. But it’s cool that we can do it.”

But as its operations grew, Amazon followed the lead of Google and began creating custom hardware for its data centers. This allows Amazon to fine-tune its servers, storage and networking gear to get the best bang for its buck, offering greater control over both performance and cost.

“Yes, we build our own servers,” said Vogels. “We could buy off the shelf, but they’re very expensive and very general purpose. So we’re building custom storage and servers to address these workloads. We’ve worked together with Intel to make custom processors available that run at much higher clockrates. It allows us to build custom server types to support very specific workloads.”

Amazon offers several EC2 instance types featuring these custom chips, a souped-up version of the Xeon E5  processor based on Intel’s Haswell architecture and 22-nanometer process technology.

AWS designs its own software and hardware for its networking, which is perhaps the most challenging component of its infrastructure. Vogels said servers still account for the bulk of data center spending, but while servers and storage are getting cheaper, the cost of networking has gone up.

“The way most customers work is that an application runs in a single data center, and you work as hard as you can to make the data center as reliable as you can, and in the end you realize that about three nines (99.9 percent uptime) is all you’re going to get,”

“Building distributed development across multiple data centers, especially if they’re geographically further away, becomes really hard,”]]></description>
		<content:encoded><![CDATA[<p>Inside Amazon&#8217;s Cloud Computing Infrastructure<br />
<a href="http://slashdot.org/story/15/09/23/229253/inside-amazons-cloud-computing-infrastructure" rel="nofollow">http://slashdot.org/story/15/09/23/229253/inside-amazons-cloud-computing-infrastructure</a></p>
<p>As Sunday&#8217;s outage demonstrates, the Amazon Web Services cloud is critical to many of its more than 1 million customers. Data Center Frontier looks at Amazon&#8217;s cloud infrastructure, and how it builds its data centers. The company&#8217;s global network includes at least 30 data centers, each typically housing 50,000 to 80,000 servers. &#8220;We really like to keep the size to less than 100,000 servers per data center,&#8221; </p>
<p>Like Google and Facebook, Amazon also builds its own custom server, storage and networking hardware, working with Intel to produce processors </p>
<p>Inside Amazon’s Cloud Computing Infrastructure<br />
<a href="http://datacenterfrontier.com/inside-amazon-cloud-computing-infrastructure/" rel="nofollow">http://datacenterfrontier.com/inside-amazon-cloud-computing-infrastructure/</a></p>
<p>This week we’ll look at Amazon’s mighty cloud infrastructure, including how it builds its data centers and where they live (and why).</p>
<p>Lifting the Veil of Secrecy … A Bit</p>
<p>Amazon has historically been secretive about its data center operations, disclosing far less about its infrastructure than other hyperscale computing leaders such as Google, Facebook and Microsoft. That has begun to change in the last several years, as Amazon executives Werner Vogels and James Hamilton have opened up about the company’s data center operations at events for the developer community.</p>
<p>“There’s been quite a few requests from customers asking us to talk a bit about the physical layout of our data centers,” said Werner Vogels, VP and Chief Technology Officer for Amazon, in a presentation at the AWS Summit Tel Aviv in July. “We never talk that much about it. So we wanted to lift up the secrecy around our networking and data centers.”</p>
<p>A key goal of these sessions is to help developers understand Amazon’s philosophy on redundancy and uptime. The company organizes its infrastructure into 11 regions, each containing a cluster of data centers. Each region contains multiple Availability Zones, providing customers with the option to mirror or back up key IT assets to avoid downtime. The “ripple effect” of outages whenever AWS experiences problems indicates that this feature remains underutilized.</p>
<p>Scale Drives Platform Investment</p>
<p>In its most recent quarter, the revenue for Amazon Web Services was growing at an 81 percent annual rate. That may not translate directly into a similar rate of infrastructure growth, but one thing is certain: Amazon is adding servers, storage and new data centers at an insane pace.</p>
<p>“Every day, Amazon adds enough new server capacity to support all of Amazon’s global infrastructure when it was a $7 billion annual revenue enterprise,”</p>
<p>Amazon’s data center strategy is relentlessly focused on reducing cost, according to Vogels, who noted that the company has reduced prices 49 times since launching Amazon Web Services in 2006.</p>
<p>“We do a lot of infrastructure innovation in our data centers to drive cost down,” Vogels said. “We see this as a high-volume, low-margin business, and we’re more than happy to keep the margins where they are. And then if we have a lower cost base, we’ll hand money back to you.”</p>
<p>A key decision in planning and deploying cloud capacity is how large a data center to build. Amazon’s huge scale offers advantages in both cost and operations. Hamilton said most Amazon data centers house between 50,000 and 80,000 servers, with a power capacity of between 25 and 30 megawatts.</p>
<p>“It’s undesirable to have data centers that are larger than that due to what we call the ‘blast radius’,” said Vogels, noting the industry term for assessing risk based on a single destructive regional event. “A data center is still a unit of failure. The larger you built your data centers, the larger the impact such a failure could have. We really like to keep the size of data centers to less than 100,000 servers per data center.”</p>
<p>So how many servers does Amazon Web Services run? The descriptions by Hamilton and Vogels suggest the number is at least 1.5 million. Figuring out the upper end of the range is more difficult, but it could be as high as 5.6 million, according to calculations by Timothy Prickett Morgan at The Platform.</p>
<p>Amazon leases buildings from a number of wholesale data center providers</p>
<p>An interesting element of Amazon’s approach to data center development is that it has the ability to design and build its own power substations. That specialization is driven by the need for speed, rather than cost management.</p>
<p>“You save a tiny amount,” said Hamilton. “What’s useful is that we can build them much more quickly. Our growth rate is not a normal rate for utility companies. We did this because we had to. But it’s cool that we can do it.”</p>
<p>But as its operations grew, Amazon followed the lead of Google and began creating custom hardware for its data centers. This allows Amazon to fine-tune its servers, storage and networking gear to get the best bang for its buck, offering greater control over both performance and cost.</p>
<p>“Yes, we build our own servers,” said Vogels. “We could buy off the shelf, but they’re very expensive and very general purpose. So we’re building custom storage and servers to address these workloads. We’ve worked together with Intel to make custom processors available that run at much higher clockrates. It allows us to build custom server types to support very specific workloads.”</p>
<p>Amazon offers several EC2 instance types featuring these custom chips, a souped-up version of the Xeon E5  processor based on Intel’s Haswell architecture and 22-nanometer process technology.</p>
<p>AWS designs its own software and hardware for its networking, which is perhaps the most challenging component of its infrastructure. Vogels said servers still account for the bulk of data center spending, but while servers and storage are getting cheaper, the cost of networking has gone up.</p>
<p>“The way most customers work is that an application runs in a single data center, and you work as hard as you can to make the data center as reliable as you can, and in the end you realize that about three nines (99.9 percent uptime) is all you’re going to get,”</p>
<p>“Building distributed development across multiple data centers, especially if they’re geographically further away, becomes really hard,”</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/03/16/amazon-cloud-size-and-details/comment-page-1/#comment-1438171</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 24 Sep 2015 09:22:59 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=9549#comment-1438171</guid>
		<description><![CDATA[Revealed: Why Amazon, Netflix, Tinder, Airbnb and co plunged offline
And the dodgy database at the heart of the crash is suffering again right now
http://www.theregister.co.uk/2015/09/23/aws_outage_explained/

Netflix, Tinder, Airbnb and other big names were crippled or thrown offline for millions of people when Amazon suffered what&#039;s now revealed to be a cascade of cock-ups.

On Sunday, Amazon Web Services (AWS), which powers a good chunk of the internet, broke down and cut off websites from people eager to stream TV, or hookup with strangers; thousands complained they couldn&#039;t watch Netflix, chat up potential partners, find a place to crash via Airbnb, memorize trivia on IMDb, and so on.

Today, it&#039;s emerged the mega-outage was caused by vital systems in one part of AWS taking too long to send information to another part that was needed by customers.

In technical terms, the internal metadata servers in AWS&#039;s DynamoDB database service were not answering queries from the storage systems within a particular time limit.

DynamoDB tables can be split into partitions scattered over many servers.

At about 0220 PT on Sunday, the metadata service was taking too long sending back answers to the storage servers. 

At that moment on Sunday, the levee broke: too many taxing requests hit the metadata servers simultaneously, causing them to slow down and not respond to the storage systems in time. This forced the storage systems to stop handling requests for data from customers, and instead retry their membership queries to the metadata service – putting further strain on the cloud.

It got so bad AWS engineers were unable to send administrative commands to the metadata systems.
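The feedback loop described above (timed-out storage servers immediately re-issuing their membership queries, adding load to an already saturated metadata service) can be illustrated with a toy simulation. This is a generic retry-storm model for illustration only, not AWS&#039;s actual system: clients that retry immediately keep the service saturated, while clients that back off exponentially offer far less total load.

```python
import random

def simulate(clients=1000, capacity=100, rounds=30, backoff=False, seed=0):
    """Toy retry-storm model: each round the (metadata) service can answer
    `capacity` queries; clients whose query goes unanswered retry.
    Returns (clients served, total queries the service received)."""
    rng = random.Random(seed)
    wait = [0] * clients    # rounds each client still waits before retrying
    delay = [1] * clients   # per-client backoff window, doubles on failure
    pending = set(range(clients))
    offered = 0
    for _ in range(rounds):
        attempting = [c for c in pending if wait[c] == 0]
        offered += len(attempting)
        for c in attempting[:capacity]:     # service saturates past `capacity`
            pending.discard(c)
        for c in attempting[capacity:]:     # these time out and must retry
            if backoff:                     # randomized exponential backoff
                wait[c] = rng.randint(1, delay[c])
                delay[c] = min(delay[c] * 2, 8)
            # without backoff, wait stays 0: retry again next round
        for c in pending:
            wait[c] = max(0, wait[c] - 1)
    return clients - len(pending), offered

served_flood, load_flood = simulate(backoff=False)
served_polite, load_polite = simulate(backoff=True)
print(f"no backoff: {load_flood} queries offered; with backoff: {load_polite}")
```

In the incident described above the un-throttled case is what happened: every timed-out storage server immediately asked again, so the metadata fleet never got headroom to recover and operators could not even get administrative commands through.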

Other services were hit by the outage: EC2 Auto Scaling, the Simple Queue Service, CloudWatch, and the AWS Console all suffered problems.]]></description>
		<content:encoded><![CDATA[<p>Revealed: Why Amazon, Netflix, Tinder, Airbnb and co plunged offline<br />
And the dodgy database at the heart of the crash is suffering again right now<br />
<a href="http://www.theregister.co.uk/2015/09/23/aws_outage_explained/" rel="nofollow">http://www.theregister.co.uk/2015/09/23/aws_outage_explained/</a></p>
<p>Netflix, Tinder, Airbnb and other big names were crippled or thrown offline for millions of people when Amazon suffered what&#8217;s now revealed to be a cascade of cock-ups.</p>
<p>On Sunday, Amazon Web Services (AWS), which powers a good chunk of the internet, broke down and cut off websites from people eager to stream TV, or hookup with strangers; thousands complained they couldn&#8217;t watch Netflix, chat up potential partners, find a place to crash via Airbnb, memorize trivia on IMDb, and so on.</p>
<p>Today, it&#8217;s emerged the mega-outage was caused by vital systems in one part of AWS taking too long to send information to another part that was needed by customers.</p>
<p>In technical terms, the internal metadata servers in AWS&#8217;s DynamoDB database service were not answering queries from the storage systems within a particular time limit.</p>
<p>DynamoDB tables can be split into partitions scattered over many servers.</p>
<p>At about 0220 PT on Sunday, the metadata service was taking too long sending back answers to the storage servers. </p>
<p>At that moment on Sunday, the levee broke: too many taxing requests hit the metadata servers simultaneously, causing them to slow down and not respond to the storage systems in time. This forced the storage systems to stop handling requests for data from customers, and instead retry their membership queries to the metadata service – putting further strain on the cloud.</p>
<p>It got so bad AWS engineers were unable to send administrative commands to the metadata systems.</p>
<p>Other services were hit by the outage: EC2 Auto Scaling, the Simple Queue Service, CloudWatch, and the AWS Console all suffered problems.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/03/16/amazon-cloud-size-and-details/comment-page-1/#comment-1376732</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 16 Apr 2015 13:49:12 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=9549#comment-1376732</guid>
		<description><![CDATA[Amazon CTO destealths to throw light on AWS data centre design
All-black outfit to explain it ain&#039;t just about the white boxen
http://www.theregister.co.uk/2015/04/16/aws_data_centre_architecture_amazon_cto_werner_vogels/

Ask Amazon about its AWS data centres and you’ll get this response: Amazon doesn’t talk about its data centres. Until its chief technology officer pitches in, that is.

AWS is becoming to many what Windows once was: a platform for doing business. It started as something that let enterprises free themselves from the yoke of owning their own servers. Now it’s letting them deliver new services.

Another important group of customers are internet pure plays who, again, don’t need to set up and run their own servers and infrastructure. These include everything from fundraising efforts such as Just Giving, a service for individuals and groups to raise funds online, to Omnifore – music streaming infrastructure employed by SiriusXM and Sony Music Unlimited.

Just Giving and Omnifore sit between their customers and the raw AWS infrastructure that for the non-techie is still difficult to knit together. What they rely on are hundreds of thousands of servers and network switches that Amazon has custom designed and built, working with Intel and others. Servers are grouped into, yes, data centres, which comprise Amazon’s Availability Zones, which themselves in turn make up regions – there are 10 regions and 28 zones.

Each region comprises two or more Availability Zones and each zone has at least one data centre. No one data centre serves two Availability Zones, while some Zones are served by up to six data centres. Data centres must also be on different power grids, so no one power outage can take down a Zone.

Availability Zones are AWS’s way to circumvent the problems of back-up and latency that traditionally dog wide-area computing. Traditionally, a company in, say, New York might have disaster back-up in New Jersey, with data also replicated across the US in Los Angeles.

However, according to Vogels: “This old replication was deemed not fit for scale.  One transaction is 1-2 milliseconds and replicating that will cost you 100 milliseconds. Then if you have to do fail over from New York to LA it’s a nightmare – failing back is even worse. Integrating a failed system into a live system is a nightmare.&quot;

To solve latency, Amazon built Availability Zones on groups of tightly coupled data centres. Each data centre in a Zone is less than 25 microseconds away from its sibling and packs 102Tbps of networking.
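The latency figures above are consistent with simple propagation-delay arithmetic; a sketch (the distances and fibre speed are rough illustrative assumptions, not numbers from the talk):

```python
# Propagation delay alone explains the gap between cross-country
# replication and tightly coupled in-zone data centres.
FIBER_KM_PER_S = 200_000    # light in optical fibre, roughly 2/3 of c

def rtt_ms(distance_km):
    """Round-trip propagation delay in milliseconds (fibre only,
    ignoring switching, queuing and protocol overhead)."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

ny_la = rtt_ms(4_000)   # New York to Los Angeles, ~4,000 km: ~40 ms round trip
in_zone = rtt_ms(2)     # sibling data centres ~2 km apart: ~20 microseconds
```

With protocol handshakes and queuing stacked on a ~40 ms physical floor, a 100 ms replication cost for a 1-2 ms transaction is plausible, while a couple of kilometres of separation keeps sibling data centres under the 25 microseconds Vogels cites.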

As for those data centres, each is capped at 80,000 servers – determined to be the upper optimum limit – but contains at least 50,000. Servers are built by Amazon, working with Intel and other manufacturers. These aren’t cheap-o boxen, according to Vogels.

“Don’t think these are white-box servers,” 

Amazon has also stripped out unwanted features that come with standard, off-the-shelf servers. Gone are audio chips and power transformers]]></description>
		<content:encoded><![CDATA[<p>Amazon CTO destealths to throw light on AWS data centre design<br />
All-black outfit to explain it ain&#8217;t just about the white boxen<br />
<a href="http://www.theregister.co.uk/2015/04/16/aws_data_centre_architecture_amazon_cto_werner_vogels/" rel="nofollow">http://www.theregister.co.uk/2015/04/16/aws_data_centre_architecture_amazon_cto_werner_vogels/</a></p>
<p>Ask Amazon about its AWS data centres and you’ll get this response: Amazon doesn’t talk about its data centres. Until its chief technology officer pitches in, that is.</p>
<p>AWS is becoming to many what Windows once was: a platform for doing business. It started as something that let enterprises free themselves from the yoke of owning their own servers. Now it’s letting them deliver new services.</p>
<p>Another important group of customers are internet pure plays who, again, don’t need to set up and run their own servers and infrastructure. These include everything from fundraising efforts such as Just Giving, a service for individuals and groups to raise funds online, to Omnifore – music streaming infrastructure employed by SiriusXM and Sony Music Unlimited.</p>
<p>Just Giving and Omnifore sit between their customers and the raw AWS infrastructure that for the non-techie is still difficult to knit together. What they rely on are hundreds of thousands of servers and network switches that Amazon has custom designed and built, working with Intel and others. Servers are grouped into, yes, data centres, which comprise Amazon’s Availability Zones, which themselves in turn make up regions – there are 10 regions and 28 zones.</p>
<p>Each region comprises two or more Availability Zones and each zone has at least one data centre. No one data centre serves two Availability Zones, while some Zones are served by up to six data centres. Data centres must also be on different power grids, so no one power outage can take down a Zone.</p>
<p>Availability Zones are AWS’s way to circumvent the problems of back-up and latency that traditionally dog wide-area computing. Traditionally, a company in, say, New York might have disaster back-up in New Jersey, with data also replicated across the US in Los Angeles.</p>
<p>However, according to Vogels: “This old replication was deemed not fit for scale.  One transaction is 1-2 milliseconds and replicating that will cost you 100 milliseconds. Then if you have to do fail over from New York to LA it’s a nightmare – failing back is even worse. Integrating a failed system into a live system is a nightmare.&#8221;</p>
<p>To solve latency, Amazon built Availability Zones on groups of tightly coupled data centres. Each data centre in a Zone is less than 25 microseconds away from its sibling and packs 102Tbps of networking.</p>
<p>As for those data centres, each is capped at 80,000 servers – determined to be the upper optimum limit – but contains at least 50,000. Servers are built by Amazon, working with Intel and other manufacturers. These aren’t cheap-o boxen, according to Vogels.</p>
<p>“Don’t think these are white-box servers,” </p>
<p>Amazon has also stripped out unwanted features that come with standard, off-the-shelf servers. Gone are audio chips and power transformers</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/03/16/amazon-cloud-size-and-details/comment-page-1/#comment-334041</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 17 Apr 2014 07:57:41 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=9549#comment-334041</guid>
		<description><![CDATA[AWS bins elastic compute units, adopts virtual CPUs
Customers tired of wrapping their heads around odd computing power metric
http://www.theregister.co.uk/2014/04/17/aws_bins_elastic_compute_units_adopts_virtual_cpus/

Gartner analyst Kyle Hilgendorf has spotted something very interesting: Amazon Web Services seems to have stopped rating cloud servers based on EC2 compute units (ECUs), its proprietary metric of computing power.

ECUs were an odd metric, as they were based on “... the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor … equivalent to an early-2006 1.7 GHz Xeon”.

Elastic Compute Units have been replaced with processor information and clock speed.]]></description>
		<content:encoded><![CDATA[<p>AWS bins elastic compute units, adopts virtual CPUs<br />
Customers tired of wrapping their heads around odd computing power metric<br />
<a href="http://www.theregister.co.uk/2014/04/17/aws_bins_elastic_compute_units_adopts_virtual_cpus/" rel="nofollow">http://www.theregister.co.uk/2014/04/17/aws_bins_elastic_compute_units_adopts_virtual_cpus/</a></p>
<p>Gartner analyst Kyle Hilgendorf has spotted something very interesting: Amazon Web Services seems to have stopped rating cloud servers based on EC2 compute units (ECUs), its proprietary metric of computing power.</p>
<p>ECUs were an odd metric, as they were based on “&#8230; the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor … equivalent to an early-2006 1.7 GHz Xeon”.</p>
<p>Elastic Compute Units have been replaced with processor information and clock speed.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/03/16/amazon-cloud-size-and-details/comment-page-1/#comment-333632</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 10 Apr 2014 13:02:22 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=9549#comment-333632</guid>
		<description><![CDATA[AWS could &#039;consider&#039; ARM CPUs, RISC-as-a-service
CTO Vogels says &#039;power management for ARM is considered state of the art&#039;
http://www.theregister.co.uk/2014/04/10/aws_could_consider_arm_cpus_riscasaservice/

Amazon Web Services (AWS) chief technology officer Werner Vogels believes the cloudy colossus could, in the future, consider using ARM CPUs, or even offering RISC-as-a-service to help those on legacy platforms enjoy cloud elasticity.

AWS, he added, is “always looking for efficiency” and as “power management for ARM is considered state of the art” it makes sense to consider it.]]></description>
		<content:encoded><![CDATA[<p>AWS could &#8216;consider&#8217; ARM CPUs, RISC-as-a-service<br />
CTO Vogels says &#8216;power management for ARM is considered state of the art&#8217;<br />
<a href="http://www.theregister.co.uk/2014/04/10/aws_could_consider_arm_cpus_riscasaservice/" rel="nofollow">http://www.theregister.co.uk/2014/04/10/aws_could_consider_arm_cpus_riscasaservice/</a></p>
<p>Amazon Web Services (AWS) chief technology officer Werner Vogels believes the cloudy colossus could, in the future, consider using ARM CPUs, or even offering RISC-as-a-service to help those on legacy platforms enjoy cloud elasticity.</p>
<p>AWS, he added, is “always looking for efficiency” and as “power management for ARM is considered state of the art” it makes sense to consider it.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/03/16/amazon-cloud-size-and-details/comment-page-1/#comment-56407</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Fri, 24 Jan 2014 11:45:58 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=9549#comment-56407</guid>
		<description><![CDATA[Amazon&#039;s &#039;schizophrenic&#039; open source selfishness scares off potential talent, say insiders
Moles blame Bezos for paltry code sharing
http://www.theregister.co.uk/2014/01/22/amazon_open_source_investigation/

Amazon is one of the most technically influential companies operating today – but you wouldn&#039;t know it, thanks to a dearth of published research papers and negligible code contributions to the open-source projects it relies on.

This, according to multiple insiders, is becoming a problem. The corporation is described as a &quot;black hole&quot; because improvements and fixes for the open-source software it uses rarely see the light of day. And, we&#039;re told, that policy of secrecy comes right from the top – and it&#039;s driving talent into the arms of its rivals.

This secretiveness, &quot;comes from Jeff,&quot; claimed another source. &quot;It&#039;s passed down in HR training and policy. It&#039;s all very clear.&quot;

Though a select few are permitted to give public talks, when they do, they disclose far less information about their company&#039;s technology than their peers.

&quot;Amazon behaves a lot like a classified military agency,&quot; explained another ex-Amazonian.

Multiple sources have speculated to us that Amazon&#039;s secrecy comes from Jeff Bezos&#039; professional grounding in the financial industry, where he worked in trading systems. This field is notoriously competitive and very, very hush-hush. That may have influenced his thoughts about how open Amazon should operate, as does his role in a market where he competes with retail giants such as Walmart.

But one contact argued that a taciturn approach may not be appropriate for the advanced technology Amazon has developed for its large-scale cloud computing business division, Amazon Web Services.

&quot;In the Amazon case, there is a particular schizophrenia between retail and technology, and the retail culture dominates,&quot; explained the source. &quot;Retail frugality is all about secrecy because margins are so small so you can&#039;t betray anything – secrecy is a dominant factor in the Amazon culture.

&quot;It&#039;s a huge cost to the company.&quot;]]></description>
		<content:encoded><![CDATA[<p>Amazon&#8217;s &#8216;schizophrenic&#8217; open source selfishness scares off potential talent, say insiders<br />
Moles blame Bezos for paltry code sharing<br />
<a href="http://www.theregister.co.uk/2014/01/22/amazon_open_source_investigation/" rel="nofollow">http://www.theregister.co.uk/2014/01/22/amazon_open_source_investigation/</a></p>
<p>Amazon is one of the most technically influential companies operating today – but you wouldn&#8217;t know it, thanks to a dearth of published research papers and negligible code contributions to the open-source projects it relies on.</p>
<p>This, according to multiple insiders, is becoming a problem. The corporation is described as a &#8220;black hole&#8221; because improvements and fixes for the open-source software it uses rarely see the light of day. And, we&#8217;re told, that policy of secrecy comes right from the top – and it&#8217;s driving talent into the arms of its rivals.</p>
<p>This secretiveness, &#8220;comes from Jeff,&#8221; claimed another source. &#8220;It&#8217;s passed down in HR training and policy. It&#8217;s all very clear.&#8221;</p>
<p>Though a select few are permitted to give public talks, when they do, they disclose far less information about their company&#8217;s technology than their peers.</p>
<p>&#8220;Amazon behaves a lot like a classified military agency,&#8221; explained another ex-Amazonian.</p>
<p>Multiple sources have speculated to us that Amazon&#8217;s secrecy comes from Jeff Bezos&#8217; professional grounding in the financial industry, where he worked in trading systems. This field is notoriously competitive and very, very hush-hush. That may have influenced his thoughts about how open Amazon should operate, as does his role in a market where he competes with retail giants such as Walmart.</p>
<p>But one contact argued that a taciturn approach may not be appropriate for the advanced technology Amazon has developed for its large-scale cloud computing business division, Amazon Web Services.</p>
<p>&#8220;In the Amazon case, there is a particular schizophrenia between retail and technology, and the retail culture dominates,&#8221; explained the source. &#8220;Retail frugality is all about secrecy because margins are so small so you can&#8217;t betray anything – secrecy is a dominant factor in the Amazon culture.</p>
<p>&#8220;It&#8217;s a huge cost to the company.&#8221;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/03/16/amazon-cloud-size-and-details/comment-page-1/#comment-22374</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Fri, 20 Dec 2013 11:39:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=9549#comment-22374</guid>
		<description><![CDATA[AWS imposes national borders on Cloudland
&#039;Geo Restriction&#039; feature keeps foreign undesirables away from your content
http://www.theregister.co.uk/2013/12/20/aws_imposes_national_borders_on_cloudland/

Amazon Web Services (AWS) has drawn up borders within its cloud with a new &#039;Geo Restriction&#039; feature for its CloudFront service.

CloudFront is AWS&#039; content distribution offering and speeds downloads for all manner of media, often by locating it closer to users.

AWS can now ensure that only the users you want – or at least those within borders you desire – can access content served from CloudFront thanks to the Geo Restriction feature. Amazon says the new feature means “you can choose the countries where you want Amazon CloudFront to deliver your content.”

You might wish to do that, AWS says, because “licensing requirements restrict some media customers from delivering movies outside a single country.”]]></description>
		<content:encoded><![CDATA[<p>AWS imposes national borders on Cloudland<br />
&#8216;Geo Restriction&#8217; feature keeps foreign undesirables away from your content<br />
<a href="http://www.theregister.co.uk/2013/12/20/aws_imposes_national_borders_on_cloudland/" rel="nofollow">http://www.theregister.co.uk/2013/12/20/aws_imposes_national_borders_on_cloudland/</a></p>
<p>Amazon Web Services (AWS) has drawn up borders within its cloud with a new &#8216;Geo Restriction&#8217; feature for its CloudFront service.</p>
<p>CloudFront is AWS&#8217; content distribution offering and speeds downloads for all manner of media, often by locating it closer to users.</p>
<p>AWS can now ensure that only the users you want – or at least those within borders you desire – can access content served from CloudFront thanks to the Geo Restriction feature. Amazon says the new feature means “you can choose the countries where you want Amazon CloudFront to deliver your content.”</p>
<p>You might wish to do that, AWS says, because “licensing requirements restrict some media customers from delivering movies outside a single country.”</p>
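<p>A minimal sketch of what the Geo Restriction setting described above looks like in practice: CloudFront distributions carry a Restrictions block in their distribution config, which tools such as boto3 submit with <code>update_distribution</code>. The helper function, country codes, and whitelist choice below are illustrative assumptions, not taken from the article.</p>

```python
def geo_restriction(restriction_type, countries):
    """Build a CloudFront-style Restrictions block.

    restriction_type: 'whitelist' (serve only these countries),
                      'blacklist' (serve everywhere except these),
                      or 'none' (no geographic restriction).
    countries: ISO 3166-1 alpha-2 country codes.
    """
    if restriction_type == "none":
        # No restriction: CloudFront expects an empty country list.
        countries = []
    return {
        "GeoRestriction": {
            "RestrictionType": restriction_type,
            "Quantity": len(countries),
            "Items": list(countries),
        }
    }

# Example: a media customer licensed to deliver movies only in Finland and Sweden.
print(geo_restriction("whitelist", ["FI", "SE"]))
```

<p>In a real deployment this dict is embedded in the full DistributionConfig rather than sent on its own.</p>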
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/03/16/amazon-cloud-size-and-details/comment-page-1/#comment-22373</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 17 Dec 2013 10:07:08 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=9549#comment-22373</guid>
		<description><![CDATA[Amazon Web Services Blog
VM Import / Export for Linux
http://aws.typepad.com/aws/2013/12/vm-import-export-for-linux.html

If you have invested in the creation of &quot;golden&quot; Linux images suitable for your on-premises environment, I have some good news for you.]]></description>
		<content:encoded><![CDATA[<p>Amazon Web Services Blog<br />
VM Import / Export for Linux<br />
<a href="http://aws.typepad.com/aws/2013/12/vm-import-export-for-linux.html" rel="nofollow">http://aws.typepad.com/aws/2013/12/vm-import-export-for-linux.html</a></p>
<p>If you have invested in the creation of &#8220;golden&#8221; Linux images suitable for your on-premises environment, I have some good news for you.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
