Amazon Cloud size and details

How many servers does it take to power Amazon’s huge cloud computing operation? Like many large Internet companies, Amazon doesn’t disclose details of its infrastructure.

Estimate: the article Amazon Cloud Backed by 450,000 Servers reports that a researcher from Accenture Technology Labs estimates that Amazon Web Services runs at least 454,400 servers in seven data center hubs around the globe. Huan Liu analyzed Amazon’s EC2 compute service using internal and external IP addresses and published the results in the blog article Amazon EC2 has 454,400 servers, read on to find out more….

Liu then applied an assumption of 64 blade servers per rack – four 10U chassis, each holding eight blades – to arrive at the estimate. He estimates that Amazon has 5,030 racks in northern Virginia, or about 70 percent of the estimated total of 7,100 racks for AWS.
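Liu’s arithmetic can be reproduced directly. A short sketch using the figures quoted above (the 64-servers-per-rack count is Liu’s assumption, not an Amazon disclosure):

```python
# Reproduce Liu's back-of-the-envelope server estimate (figures from the
# article above; 64 servers per rack is his assumption, not an AWS number).
SERVERS_PER_RACK = 64
TOTAL_RACKS = 7_100       # estimated racks across all AWS locations
VIRGINIA_RACKS = 5_030    # estimated racks in northern Virginia

total_servers = TOTAL_RACKS * SERVERS_PER_RACK
virginia_share = VIRGINIA_RACKS / TOTAL_RACKS

print(total_servers)                 # 454400
print(round(virginia_share * 100))   # 71 -> "about 70 percent"
```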

Photos from a 2011 presentation by AWS Distinguished Engineer James Hamilton (covered in A Look Inside Amazon’s Data Centers) show 1U “pizza box” rackmount servers rather than blades, but it’s not known if that was a recent depiction of Amazon’s infrastructure.

This is not the first analysis of Amazon’s scale. Also take a look at the analyses from Randy Bias and Guy Rosen. The estimate clearly places the size of Amazon’s infrastructure well above the hosting providers that have publicly disclosed their server counts, but still well below the estimated 900,000 servers in Google’s data center network.


One potential benefit of using a public cloud, such as Amazon EC2, is that a cloud could be more efficient. In theory, a cloud can support many users, and it can potentially achieve a much higher server utilization by aggregating a large number of demands. But is that really the case in practice? If you ask a cloud provider, they most likely will not tell you their CPU utilization. The article Host server CPU utilization in Amazon EC2 cloud tells one story of CPU utilization in the Amazon EC2 cloud and how it was measured. The research used a technique that makes it possible to measure CPU utilization in public clouds by measuring how hot the CPU gets. Most modern Intel and AMD CPUs are already equipped with an on-board thermal sensor (one per core), and inside Amazon EC2 the researcher was able to successfully read these temperature sensors.
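The research read the CPUs’ on-die digital thermal sensors from inside EC2 guests. On a modern Linux machine the same sensors are typically exposed by the coretemp (Intel) or k10temp (AMD) drivers under sysfs; a minimal sketch of reading them (assumes Linux; returns an empty dict where the kernel or hypervisor hides the sensors):

```python
from pathlib import Path

def read_core_temps(hwmon_root="/sys/class/hwmon"):
    """Read per-core temperatures (in degrees C) from Linux hwmon sensors
    such as coretemp/k10temp. Returns {} if no sensors are exposed
    (e.g. inside a container or a VM that hides the thermal MSRs)."""
    temps = {}
    root = Path(hwmon_root)
    if not root.exists():
        return temps
    for hwmon in root.iterdir():
        for temp_input in hwmon.glob("temp*_input"):
            label_file = temp_input.with_name(
                temp_input.name.replace("_input", "_label"))
            try:
                label = (label_file.read_text().strip()
                         if label_file.exists() else temp_input.name)
                millideg = int(temp_input.read_text().strip())
            except (OSError, ValueError):
                continue  # sensor unreadable; skip it
            temps[f"{hwmon.name}/{label}"] = millideg / 1000.0
    return temps

print(read_core_temps())
```

The paper’s insight is that sustained temperature correlates with load, so trends in these readings let a guest infer how busy the underlying host is.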

The Host server CPU utilization in Amazon EC2 cloud article reports that among the servers measured, the average CPU utilization in EC2 over the whole week was 7.3%. The utilization is so low because an instance is so cheap that people never turn it off.


  1. Tomi Engdahl says:

    Inside Amazon’s Cloud Computing Infrastructure

    As Sunday’s outage demonstrates, the Amazon Web Services cloud is critical to many of its more than 1 million customers. Data Center Frontier looks at Amazon’s cloud infrastructure, and how it builds its data centers. The company’s global network includes at least 30 data centers, each typically housing 50,000 to 80,000 servers. “We really like to keep the size to less than 100,000 servers per data center,”

    Like Google and Facebook, Amazon also builds its own custom server, storage and networking hardware, working with Intel to produce processors.

    Inside Amazon’s Cloud Computing Infrastructure

    This week we’ll look at Amazon’s mighty cloud infrastructure, including how it builds its data centers and where they live (and why).

    Lifting the Veil of Secrecy … A Bit

    Amazon has historically been secretive about its data center operations, disclosing far less about its infrastructure than other hyperscale computing leaders such as Google, Facebook and Microsoft. That has begun to change in the last several years, as Amazon executives Werner Vogels and James Hamilton have opened up about the company’s data center operations at events for the developer community.

    “There’s been quite a few requests from customers asking us to talk a bit about the physical layout of our data centers,” said Werner Vogels, VP and Chief Technology Officer for Amazon, in a presentation at the AWS Summit Tel Aviv in July. “We never talk that much about it. So we wanted to lift up the secrecy around our networking and data centers.”

    A key goal of these sessions is to help developers understand Amazon’s philosophy on redundancy and uptime. The company organizes its infrastructure into 11 regions, each containing a cluster of data centers. Each region contains multiple Availability Zones, providing customers with the option to mirror or back up key IT assets to avoid downtime. The “ripple effect” of outages whenever AWS experiences problems indicates that this feature remains underutilized.

    Scale Drives Platform Investment

    In its most recent quarter, the revenue for Amazon Web Services was growing at an 81 percent annual rate. That may not translate directly into a similar rate of infrastructure growth, but one thing is certain: Amazon is adding servers, storage and new data centers at an insane pace.

    “Every day, Amazon adds enough new server capacity to support all of Amazon’s global infrastructure when it was a $7 billion annual revenue enterprise,”

    Amazon’s data center strategy is relentlessly focused on reducing cost, according to Vogels, who noted that the company has reduced prices 49 times since launching Amazon Web Services in 2006.

    “We do a lot of infrastructure innovation in our data centers to drive cost down,” Vogels said. “We see this as a high-volume, low-margin business, and we’re more than happy to keep the margins where they are. And then if we have a lower cost base, we’ll hand money back to you.”

    A key decision in planning and deploying cloud capacity is how large a data center to build. Amazon’s huge scale offers advantages in both cost and operations. Hamilton said most Amazon data centers house between 50,000 and 80,000 servers, with a power capacity of between 25 and 30 megawatts.
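    Hamilton’s figures imply a rough per-server power budget, which is worth working out. A sketch using the quoted ranges (the simple division ignores cooling and power-distribution overhead, so the real IT power per server is somewhat lower):

```python
# Per-server power implied by Hamilton's figures (50,000-80,000 servers
# drawing 25-30 MW per data center; ignores PUE overhead such as cooling).
servers_low, servers_high = 50_000, 80_000
power_low_w, power_high_w = 25e6, 30e6

w_per_server_min = power_low_w / servers_high   # densest packing
w_per_server_max = power_high_w / servers_low   # sparsest packing
print(w_per_server_min, w_per_server_max)       # 312.5 600.0
```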

    “It’s undesirable to have data centers that are larger than that due to what we call the ‘blast radius’,” said Vogels, noting the industry term for assessing risk based on a single destructive regional event. “A data center is still a unit of failure. The larger you built your data centers, the larger the impact such a failure could have. We really like to keep the size of data centers to less than 100,000 servers per data center.”

    So how many servers does Amazon Web Services run? The descriptions by Hamilton and Vogels suggest the number is at least 1.5 million. Figuring out the upper end of the range is more difficult, but it could range as high as 5.6 million, according to calculations by Timothy Prickett Morgan at The Platform.
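    The 1.5 million lower bound follows directly from the figures quoted above (a sketch; the 30-data-center and 50,000-servers-per-site numbers are the minimums given by Hamilton and Vogels):

```python
# Lower bound on the AWS fleet from the figures quoted in the article.
MIN_DATA_CENTERS = 30
MIN_SERVERS_PER_DC = 50_000

lower_bound = MIN_DATA_CENTERS * MIN_SERVERS_PER_DC
print(lower_bound)  # 1500000 -> "at least 1.5 million"
```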

    Amazon leases buildings from a number of wholesale data center providers

    An interesting element of Amazon’s approach to data center development is that it has the ability to design and build its own power substations. That specialization is driven by the need for speed, rather than cost management.

    “You save a tiny amount,” said Hamilton. “What’s useful is that we can build them much more quickly. Our growth rate is not a normal rate for utility companies. We did this because we had to. But it’s cool that we can do it.”

    But as its operations grew, Amazon followed the lead of Google and began creating custom hardware for its data centers. This allows Amazon to fine-tune its servers, storage and networking gear to get the best bang for its buck, offering greater control over both performance and cost.

    “Yes, we build our own servers,” said Vogels. “We could buy off the shelf, but they’re very expensive and very general purpose. So we’re building custom storage and servers to address these workloads. We’ve worked together with Intel to make custom processors available that run at much higher clockrates. It allows us to build custom server types to support very specific workloads.”

    Amazon offers several EC2 instance types featuring these custom chips, a souped-up version of the Xeon E5 processor based on Intel’s Haswell architecture and 22-nanometer process technology.

    AWS designs its own software and hardware for its networking, which is perhaps the most challenging component of its infrastructure. Vogels said servers still account for the bulk of data center spending, but while servers and storage are getting cheaper, the cost of networking has gone up.

    “The way most customers work is that an application runs in a single data center, and you work as hard as you can to make the data center as reliable as you can, and in the end you realize that about three nines (99.9 percent uptime) is all you’re going to get,”

    “Building distributed development across multiple data centers, especially if they’re geographically further away, becomes really hard,”

  2. Tomi Engdahl says:

    How will global cloud platforms offer 1 ms latency for 5G? AWS Wavelength promises to do it by extending your VPCs into Wavelength Zones, where you can run local EC2 instances and EBS volumes at the edge.

    Announcing AWS Wavelength for delivering ultra-low latency applications for 5G

    AWS Wavelength embeds AWS compute and storage services at the edge of telecommunications providers’ 5G networks, and provides seamless access to the breadth of AWS services in the region. AWS Wavelength enables you to build applications that serve mobile end-users and devices with single-digit millisecond latencies over 5G networks, like game and live video streaming, machine learning inference at the edge, and augmented and virtual reality.

    AWS Wavelength brings AWS services to the edge of the 5G network, minimizing the network hops and latency to connect to an application from a 5G device. Wavelength delivers a consistent developer experience across multiple 5G networks around the world.
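    Wavelength Zones show up through the ordinary EC2 APIs. A hedged boto3 sketch for discovering them (assumes boto3 is installed and AWS credentials are configured; the `describe_availability_zones` call and `zone-type` filter are standard EC2 API, but the region name is just an example):

```python
def wavelength_zone_filters():
    """EC2 filter selecting Wavelength Zones (vs. regular AZs and Local Zones)."""
    return [{"Name": "zone-type", "Values": ["wavelength-zone"]}]

def list_wavelength_zones(region="us-east-1"):
    """List Wavelength Zone names visible to this account in a region.
    Requires boto3 and AWS credentials; zones must be opted in to be usable."""
    import boto3  # imported lazily so the filter helper works without boto3
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_availability_zones(
        AllAvailabilityZones=True,
        Filters=wavelength_zone_filters(),
    )
    return [z["ZoneName"] for z in resp["AvailabilityZones"]]
```

    Once a zone is opted in, launching an instance at the edge is the same run_instances call as anywhere else, pointed at a subnet whose availability zone is the Wavelength Zone.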

