Amazon Cloud size and details

How many servers does it take to power Amazon’s huge cloud computing operation? Like many large Internet companies, Amazon doesn’t disclose details of its infrastructure.

Estimate: the article Amazon Cloud Backed by 450,000 Servers reports that a researcher from Accenture Technology Labs estimates that Amazon Web Services uses at least 454,400 servers in seven data center hubs around the globe. Huan Liu analyzed Amazon’s EC2 compute service using internal and external IP addresses and published the results in his blog article Amazon EC2 has 454,400 servers.

Liu then applied an assumption of 64 blade servers per rack – four 10U chassis, each holding 16 blades – to arrive at the estimate. He estimates that Amazon has 5,030 racks in northern Virginia, or about 70 percent of the estimated total of 7,100 racks for AWS.
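Liu’s back-of-the-envelope arithmetic is easy to reproduce; the rack counts and the 64-servers-per-rack figure below are his published estimates:

```python
# Reproduce Huan Liu's server-count estimate from his published figures.
SERVERS_PER_RACK = 64      # Liu's assumption: 64 servers per rack (four 10U blade chassis)
TOTAL_RACKS = 7_100        # his estimated rack total across all AWS regions
US_EAST_RACKS = 5_030      # racks attributed to northern Virginia

total_servers = SERVERS_PER_RACK * TOTAL_RACKS
us_east_share = US_EAST_RACKS / TOTAL_RACKS

print(total_servers)              # 454400
print(f"{us_east_share:.0%}")     # 71%, i.e. "about 70 percent"
```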

Photos from a 2011 presentation by AWS Distinguished Engineer James Hamilton (covered in A Look Inside Amazon’s Data Centers) show 1U “pizza box” rackmount servers rather than blades, but it’s not known if that was a recent depiction of Amazon’s infrastructure.

This is not the first analysis of Amazon’s scale. Also take a look at analyses from Randy Bias and Guy Rosen. This estimate clearly places the size of Amazon’s infrastructure well above the hosting providers that have publicly disclosed their server counts, but still well below the estimated 900,000 servers in Google’s data center network.


One potential benefit of using a public cloud, such as Amazon EC2, is that a cloud could be more efficient. In theory, a cloud can support many users, and it can potentially achieve a much higher server utilization by aggregating a large number of demands. But is that really the case in practice? If you ask a cloud provider, they most likely would not tell you their CPU utilization. Host server CPU utilization in Amazon EC2 cloud tells one story of CPU utilization in the Amazon EC2 cloud and how it was measured. The research used a technique that measures CPU utilization in public clouds by observing how hot the CPU gets. Most modern Intel and AMD CPUs are already equipped with an on-board thermal sensor (one per core), and in Amazon EC2 the researcher was able to successfully read these temperature sensors.
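On a Linux host these per-core sensors are exposed through the standard hwmon sysfs interface; here is a minimal sketch of reading them. The paths are the usual kernel ones, but whether a given EC2 guest can actually see the host’s sensors depends on the virtualization setup, so inside a VM this may return nothing:

```python
# Sketch of the temperature-based measurement idea: read the per-core
# thermal sensors that modern Intel/AMD CPUs expose via Linux hwmon.
import glob

def core_temperatures():
    """Map each hwmon temperature input to a reading in degrees Celsius."""
    temps = {}
    for path in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        try:
            with open(path) as f:
                temps[path] = int(f.read()) / 1000.0  # millidegrees C -> degrees C
        except OSError:
            pass  # sensor present but unreadable from this context
    return temps

if __name__ == "__main__":
    for sensor, celsius in sorted(core_temperatures().items()):
        print(f"{sensor}: {celsius:.1f} C")
```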

The Host server CPU utilization in Amazon EC2 cloud article reports that, among the servers measured, the average CPU utilization in EC2 over the whole week was 7.3%. One reason utilization is so low is that, because an instance is so cheap, people never turn it off.


  1. dog says:

    This info is priceless. When can I find out more?

  2. Vicenta Ground says:

    It is in reality a nice and useful piece of info. I am happy that you shared this helpful info with us. Please keep us up to date like this. Thank you for sharing.

  3. web hosting says:

    Can I host our facebook fan page also?

  4. Tomi Engdahl says:

    Adrian Cockcroft reveals how Netflix is driving Amazon to the limit

    Interview Moving pictures to the cloud

    The plan required changes to Netflix’s underpinnings, which are all based on Amazon Web Services, where Netflix has some 10,000 machines – the company is, he thinks, one of Amazon’s top 10 cloud customers.

    “We are driving Amazon’s technology to the limit,” he says.

    One of the technical challenges in developing the UK service was geolocation, to ensure that the content offered to subscribers was licensed for the territory they logged in from. The Canadian service Netflix began with could be built as an extension of the US service. Its Latin American offering, which was the first the company calls truly international, couldn’t; besides geolocation it also needed a Spanish language interface and subtitles.

    All of that, says Cockcroft, is running out of one Amazon region, the eastern US. Europe required a slightly different approach. The new service being offered in the UK is based in Ireland.

    “We figured out how to split it so we could run out of more than one region,” says Cockcroft. “So we could launch in AWS Ireland, which runs the UK and Ireland. Some services we run as islands, making separate copies, and others are global.”

  5. Tomi Engdahl says:

    Amazon’s Secretive Cloud Carries 1 Percent of the Internet

    Amazon’s cloud computing infrastructure is growing so fast that it’s silently becoming a core piece of the internet.

    The researchers found that one-third of the several million users in the study visited a website that uses Amazon’s infrastructure each day.

    Most people still think of Amazon as the internet’s giant shopping mall — a purveyor of gadgets, books and movies — but it’s quietly become “a massive utility” that is either on the sending or receiving end of 1 percent of all of the internet traffic in North America, says Craig Labovitz, a well-known internet researcher and co-founder of DeepField.

    “The number of websites that would now break if Amazon were to go down, and the growing pervasiveness of Amazon behind the scenes, is really quite impressive.”

    Amazon introduced its first cloud service, the Elastic Compute Cloud, in 2006, basing it on the technology it had developed in-house while building up its online retail operation. It’s now caught on as a quick way for companies to spin up servers without actually having to set up their own computers. Amazon now sells even more data center resources — storage, databases and search-indexing, for example — as cloud services.

    It’s popular with companies that see big spikes and drops in computing demand.

    But a big question remains: How big is Amazon’s cloud? How many servers power its data centers? The company didn’t respond to a request for comment on this story — like many cloud providers it considers this type of information a proprietary secret. But there are a few clues out there.

    Last month Accenture’s Huan Liu did a bit of internet sleuthing and came up with a guess: 445,000. That number could be high. Gartner researcher Lydia Leong estimates that Amazon’s cloud business was $1 billion in 2011, more than five times the size of its closest competitor, Rackspace. Last week Rackspace Chief Technology Officer John Engates was happy to tell us how many servers he has in his data centers: 80,000. But only 23 percent ($189 million) of Rackspace’s 2011 business was in the cloud. That implies that Rackspace could do the same amount of cloud business as Amazon with maybe 100,000 servers.

    Amazon itself has lobbed out some impressive-sounding tidbits about its cloud.

    For example, the company stored 762 billion objects in its S3 storage cloud last year

    Amazon says that “Every day Amazon Web Services adds enough new capacity to support all of Amazon.com’s global infrastructure through the company’s first 5 years, when it was a $2.76 billion annual revenue enterprise.”

    The company operates several data centers — it calls them “availability zones” — in Virginia, the West Coast, Singapore, Tokyo and Europe and, clearly, they have been growing fast in the past few years.

    Amazon’s business is growing even faster than most people realize

    Amazon has increased the number of IP addresses assigned to servers in those data centers more than fivefold in the past two years — from just over a quarter-million IP addresses in February 2010 to more than 1.7 million last month.

  6. Cloud Computing Trends says:

    Steve Jobs knew that Cloud Computing Trends would prevail -

  7. Tomi Engdahl says:

    Revealed: Inside super-soaraway Pinterest’s virtual data centre
    How to manage a cloud with 410TB of cupcake pictures

    It’s every startup’s dream: to be growing faster than Facebook without having to build a Facebook-sized server farm.

    Pinterest is an online picture pinboard for organising your favourite snaps and sharing them.

    Speaking at the AWS Summit in New York earlier this month, Ryan Park, operations and infrastructure leader at Pinterest, gave a sneak peek into the Pinterest data centre, which runs on the AWS cloud.

    According to ComScore data cited by Park in his presentation, Pinterest had 17.8 million monthly unique visitors as February came to a close.

    Among other things, the Pinterest pinboard uses Amazon’s S3 object storage to keep the photos and videos that its millions of users have uploaded. Between August last year and February this year, Pinterest has grown its capacity on S3 by a factor of 10, and server capacity on the EC2 compute cloud is up by nearly a factor of three, according to Park, from about 75,000 instance-hours to around 220,000.

    There are 150 high-core EC2 instances that run the Python web application servers that power Pinterest, which has deployed the Django framework for its web app. Traffic is balanced across these 150 instances using Amazon’s Elastic Load Balancer service. Park says that the ELB service has a “great API” that allows Pinterest to programmatically add capacity to the Python-Django cluster and also take virtual machines offline that way if they are not behaving or need to be tweaked.
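    The sort of automation Park describes can be sketched without the SDK details. The helper below only computes the scaling decision; in a real script the result would drive ELB’s register/deregister instance calls (via boto in that era), which are deliberately omitted here:

```python
# Decide how to adjust an ELB-fronted web cluster toward a target size,
# pulling known-unhealthy instances first. The actual ELB API calls that
# would act on this plan are left out of this sketch.
def plan_scaling(active, unhealthy, target):
    """Return (instances_to_remove, count_to_add)."""
    to_remove = [i for i in active if i in unhealthy]
    remaining = len(active) - len(to_remove)
    to_add = max(0, target - remaining)
    return to_remove, to_add

remove, add = plan_scaling(
    active=["i-01", "i-02", "i-03"], unhealthy={"i-02"}, target=4
)
print(remove, add)   # ['i-02'] 2
```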

    The Pinterest data centre on the AWS cloud also has 35 other EC2 instances running various other web services that are part of the pinboard site, and it also has another 90 high-memory EC2 instances that are used for memcached and Redis key-value stores for hot data, thereby lightening the load on the backend database.

    There are another 60 EC2 instances running various Pinterest auxiliary services, including logging, data analysis, operational tools, application development, search, and other tasks. For data analysis, Pinterest is using the Elastic MapReduce Hadoop cluster service from Amazon. This costs a few hundred dollars a month, which is cheaper than having two engineers babysit a real Hadoop cluster, explained Park.

    The Pinterest setup has a MySQL database cluster that runs on 70 master nodes on 70 standard EC2 instances plus another 70 slave database instances in different AWS availability zones for redundancy.

    The S3 file storage currently has 8 billion objects in it, which weigh in at 410TB.

    Initially, like any other data centre manager, Pinterest went out and provisioned its web server farm to be able to meet peak capacity and then have 25 per cent or so headroom on top of that for crazy spikes.

    Pinterest turned on the autoscaling feature of EC2, allowing AWS to automatically dial up and down instances with some headroom built in.

    The average reduction in web server instances using autoscaling was 40 per cent over the course of a single day, and because CPU-time is money on AWS, it saves about 40 per cent for the web server farm.

    At the peak, Pinterest is spending $52 per hour to support its web farm, and late at night when no one is using the site too much, they are spending around $15 an hour for the web farm, said Park.

    Pinterest has created a watchdog service to work with Elastic Load Balancer to make sure it is never more than a few EC2 instances shy of safe capacity for reserved and on-demand instances.
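    Park’s cost figures above are consistent with straightforward per-hour billing arithmetic; the dollar rates are the ones quoted in the talk, and the peak-provisioned baseline is an illustrative simplification:

```python
# Sanity-check Park's autoscaling numbers: EC2 bills by instance-hour,
# so a 40 per cent average reduction in running web instances is a
# 40 per cent cut in the web farm's bill.
PEAK_RATE = 52.0      # $/hour at peak, per the talk
NIGHT_RATE = 15.0     # $/hour late at night, per the talk
REDUCTION = 0.40      # average instance reduction from autoscaling

peak_provisioned_daily = PEAK_RATE * 24          # always sized for peak
autoscaled_daily = peak_provisioned_daily * (1 - REDUCTION)

print(f"${peak_provisioned_daily:.0f}/day if sized for peak")   # $1248/day
print(f"~${autoscaled_daily:.0f}/day with autoscaling")         # ~$749/day
print(f"night rate is {NIGHT_RATE / PEAK_RATE:.0%} of peak")    # 29% of peak
```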

  8. Tomi Engdahl says:

    Can Amazon become the biggest platform peddler in the world?
    Crazy, but not impossible

    Comment How long will it be before Amazon Web Services is the largest provider of raw server capacity in the world, outselling the tier one server makers who actually peddle boxes directly or through the channel to customers?

    Retail giant and cloud computing juggernaut Amazon is cagey about exactly how well or poorly its Amazon Web Services cloud is doing financially, and it says even less about how much server, storage, and switching infrastructure it has dedicated to the task of supporting the hundreds of thousands of customers that are running development and production applications on the AWS cloud.

    But the company does plunk AWS into an “Other” category in its quarterly financials, so we can all sit around and make our estimates of how large or small the revenue stream from the 30 services that comprise the AWS stack is.

    In the quarter ended in March, Amazon had $13.2bn in revenues, up 33.8 per cent, but the Other category, which includes the AWS cloud, marketing and promotion, sales generated through co-marketer affiliate sites, and interest and other fees from Amazon-branded credit cards, posted sales of $550m, up 60.8 per cent.

    While that is a tiny portion of Amazon’s overall sales, if the AWS cloud is the bulk of that revenue – and everyone thinks that it is – then it is a very large converged platform reseller in its own right. However, in typical retail fashion, Amazon probably doesn’t make very much money on the EC2 compute, S3 and EBS storage, and other database, middleware, and networking services in the aggregate.

    If you think of AWS as a startup, this makes sense. And if you also see that Amazon founder Jeff Bezos seems to take great pleasure disrupting markets and doing so without making very much money, you will then realize that Amazon may not care if it only makes pennies on the dollar with AWS.

    If AWS is like any other popular service or server vendor out there, its growth is limited by its addressable market, and the rate of increase would have downshifted again in 2010 and 2011. But even with that assumption, we still think AWS accounted for $695m in sales in 2010 and $1.23bn in 2011. And if the curve follows in 2012, it should break through around $2.1bn this year.

    To put AWS into perspective, HP’s Business Critical Systems division, which peddles Integrity, Superdome, and NonStop machines running HP-UX, NonStop, and OpenVMS, has a run rate of around $2bn in the trailing twelve months. Back in the summer of 2011, IBM reckoned there were only 10,000 HP Integrity server accounts and they drove a total of $2.8bn in spending on servers, systems software, upgrades, and various services such as maintenance.

    Depending on who you ask, the Sparc/Solaris base has around 30,000 customers and generated around $2.3bn in sales in 2011

    IBM’s Power-AIX base, which has been growing at the expense of Sun, might have 100,000 customers at this point (Big Blue has never provided any numbers for this) and accounted for $5.2bn in revenues in 2011, according to Gartner, not including services.

    IBM has something on the order of 4,000 unique mainframe customers (with more than twice that many installations), who spend somewhere between $3bn and $4bn a year, depending on what point in the System z product cycle IBM is at.

    Collectively, Windows platforms and Linux platforms utterly dwarf these platform numbers, but they don’t all come from the same vendor like the AWS platform does.

    And two or three years from now – and perhaps earlier – AWS as a platform could rival any single vendor’s system platform.

  9. Tomi Engdahl says:

    Look out, Amazon Cloud! HP’s on the warpath

    Amazon’s cloud earned the bookseller an estimated $1.08bn in the first nine months of last year – up 70.4 per cent compared to 2010 – while the amount of data Amazon holds hit 762 billion objects, more than doubling last year.

    The HP Cloud Compute free beta ended on Thursday with a rash of 40 partners announcing that they are all now available on or in support of HP’s cloud – Rightscale, ActiveState, CloudBees, Dome9, EnterpriseDB and others. The period of construction and half-priced sign-ups is finished. Now it’s down to business.


    HP Cloud Compute uses the open-source cloud architecture called OpenStack

  10. Berenice Manozca says:

    Hmm it seems like your website ate my first comment (it was super long) so I guess I’ll just sum it up what I wrote and say, I’m thoroughly enjoying your blog. I as well am an aspiring blog blogger but I’m still new to the whole thing. Do you have any tips for beginner blog writers? I’d certainly appreciate it.

  11. Marvis Vanbenthuyse says:

    Good day! I know this is kinda off topic but I’d figured I’d ask. Would you be interested in exchanging links or maybe guest writing a blog post or vice-versa? My website goes over a lot of the same subjects as yours and I feel we could greatly benefit from each other. If you are interested feel free to send me an e-mail. I look forward to hearing from you! Superb blog by the way!

    • tomi says:

      Thank you for your feedback.

      Your blog seems to have only one post at the moment.
      I would like to see more material on your side before starting any plans for exchanging material.

  12. Tomi Engdahl says:

    Amazon offers cut-price support to make sure your Cloud stays up
    Various levels of certainty it won’t rain on your parade

    Cloud gorilla Amazon Web Services has revamped its technical support services for its various heavenly compute and storage infrastructure while at the same time tweaking the packaging of those support services.

    With the price cuts announced by Amazon today, that argument will get that much harder to make, even if it turns out to be true in many cases. In other cases, such as startups with no IT admin staff and no desire to have one, no matter what Amazon is charging for tech support and no matter how poor it might be by comparison to a top-notch internal staff, they have no practical option other than to rely on a hoster or cloud provider and to pay for tech support, so the comparison is moot.

    Under the new support plans, all customers who sign up for AWS services are automatically signed up for basic support, which has been expanded with 24×7 access to customer service (by either email or phone) for billing and account issues as well as tech support when virty systems are misbehaving. The basic support service gives customers best practices guides and technical FAQs as well as access to AWS Developer Forums, which Amazon’s engineers moderate, and the AWS Service Health Dashboard. This basic support tier is free.

    What used to be called the bronze support level on AWS is now known as developer support, and it costs $49 a month

  13. Tomi Engdahl says:

    Power Outage Affects Amazon Customers

    A power outage at an Amazon Web Services data center in northern Virginia last night knocked some customers offline. Among the sites affected were Heroku, Pinterest, Quora and HootSuite, along with a host of smaller sites. Amazon confirmed the power outage on its Service Health Dashboard, but did not offer details on the root cause of the power outage.

    The outage affected only one availability zone in the US-East-1 Region. The downtime led to the usual Twitter trash-talking about how major sites should spread their infrastructure across multiple Amazon availability zones, rather than relying on a single zone. Heroku indicated that its recovery efforts included shifting workloads to other availability zones.

    But the outage was the third significant downtime in the last 14 months for the US-East-1 region, which is Amazon’s oldest availability zone and resides in a data center in Ashburn, Virginia.

    While Amazon has multiple availability zones, IP address research by Huan Liu suggests that the majority of Amazon Web Services customers are concentrated in the US East region. Liu estimates that Amazon has 5,030 racks in northern Virginia, or about 70 percent of the estimated total of 7,100 racks for AWS.

  14. Tomi Engdahl says:

    Amazon cloud outage takes down Netflix, Instagram, Pinterest, & more

    An outage of Amazon’s Elastic Compute Cloud in North Virginia has taken down Netflix, Pinterest, Instagram, and other services. According to numerous Twitter updates and our own checks, all three services are unavailable as of Friday evening at 9:10 p.m. PT.

    With the critical Amazon outage, which is the second this month, we wouldn’t be surprised if these popular services started looking at other options, including Rackspace, SoftLayer, Microsoft’s Azure, and Google’s just-introduced Compute Engine. Some of Amazon’s biggest EC2 outages occurred in April and August of last year.

  15. tomi says:

    An Amazon Web Services data center in northern Virginia lost power Friday night during an electrical storm, causing downtime for numerous customers — including Netflix, which uses an architecture designed to route around problems at a single availability zone. The same data center suffered a power outage two weeks ago and had connectivity problems earlier on Friday.


  16. Lauretta Skar says:

    After research a couple of of the weblog posts on your website now, and I really like your manner of blogging. I bookmarked it to my bookmark web site list and shall be checking again soon. Pls check out my site as well and let me know what you think.

  17. Tomi says:

    IaaS providers: how to select the right company for your cloud

    There’s more choice than ever when it comes to IaaS providers, so how does a company negotiate the minefield?

    While Amazon isn’t the only player when it comes to IaaS, it is by far the biggest, and this gives it a sense of gravity, pulling more customers into it. But there are other vendors too. In the last few months Google and Microsoft have joined the IaaS party, with Windows Azure and Google’s Compute Engine offering organisations similar products to Amazon’s.

    And that’s not forgetting that other hosting providers (such as Rackspace) and telcos (BT, Telefonica, et al) are itching to sweep up customers with their public cloud offerings. With research firm Gartner predicting that the global IaaS market will grow to $24.4 billion by 2016, many more vendors will join, creating such an array of choice that it can be hard to know where to start.

  18. Tomi says:

    Netflix revealed today that it has released Chaos Monkey, an open source Amazon Web Services testing tool that will randomly turn off instances in Auto Scaling Groups. ‘We have found that the best defense against major unexpected failures is to fail often. By frequently causing failures, we force our services to be built in a way that is more resilient.’
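    Chaos Monkey’s core behaviour can be sketched in a few lines. Here terminate() is only a stand-in for the real EC2 TerminateInstances call that the actual open source tool performs through the AWS API:

```python
# Bare-bones sketch of Chaos Monkey: pick one random instance from an
# Auto Scaling Group and kill it. terminate() is a placeholder for the
# real AWS API call.
import random

def pick_victim(asg_instances, rng=random):
    """Choose one instance id from the group, or None if it is empty."""
    return rng.choice(asg_instances) if asg_instances else None

def terminate(instance_id):
    # placeholder: a real implementation would call the EC2 API here
    print(f"terminating {instance_id}")

group = ["i-0a1b2c", "i-3d4e5f", "i-6a7b8c"]
victim = pick_victim(group)
if victim is not None:
    terminate(victim)
```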


  19. Cyril Ryle says:

    Thank you, I’ve just been searching for info about this subject for a long time and yours is the greatest I’ve discovered till now. However, what about the conclusion? Are you certain in regards to the supply?

  20. Tomi Engdahl says:

    Microsoft Azure vs. Amazon Web Services: Which is Best For You?

    Microsoft Azure and Amazon Web Services offer programmers a lot of opportunities, but there are issues that need to be considered first

    How often do programmers get to choose the technology underlying their work? Oftentimes, an executive or committee higher-up does the selecting, and we’re forced to live with it.

    One such example is cloud hosting. In a large company, an executive committee might choose from any number of offerings, including Amazon Web Services or Microsoft Azure. With a small startup, the ultimate decision might involve fewer people, but the programmer still isn’t the only voice in the room.

    I’ve always stayed as far from Azure as possible, because I didn’t want the cloud choice to have an impact on my programming.

    My goal was to determine two things. First, how much would vendor choice impact my programming? Second, how difficult would my life become if my employer decided to stop using that particular cloud vendor in midstream and switch to a different one altogether?

    For the first test, I’m building an application that will run on Amazon EC2. I’m using Eclipse with Amazon’s own AWS Toolkit for Eclipse. This is a plugin for Eclipse that lets you create an EC2 instance right from within Eclipse, which I did.

    Several companies out there provide vendor-agnostic management tools for managing your clouds. One of their claims to fame is that you can use their management console to launch instances on different platforms (Amazon, Rackspace, and others) without having to make adjustments for the particular platform.

    That’s a big sell to the business managers, because it gives them a warm squishy feeling that they’re avoiding vendor lock-in.

    However, in that situation we’re not talking software and Web applications: we’re talking management scripts for deploying instances. And that’s a huge difference.

    But there’s hope: new cloud standards also include developer-oriented APIs. For example, OpenStack (which was created in part by Rackspace) includes APIs for things like uploading objects (such as images) into containers. Even so, that’s of little help to us for our current project on Amazon.

    Clearly Amazon’s API forces us into a vendor-lock-in situation. But if we recognize that fact going in, and if we’re okay with it (which you might not be), then the APIs are available.

    We haven’t even gotten to Microsoft’s Azure yet, but Amazon’s API hasn’t proven too difficult to use. On top of that, there’s a chance you might not end up fully locked into Amazon, thanks to Google.

    Taming the Microsoft Beast

    Now that we’ve dug down into Amazon’s API and determined some of the implications, let’s head over to Microsoft. When you sign up with Azure, you get an interface that has similar functionality as Amazon’s platform. You can allocate new instances of servers, and you can even choose from some Linux varieties to run on these servers.

    If you look at the Azure API docs for different languages, you’ll see many mentions of messaging (say that five times fast), whereby your different server instances can communicate with each other.

    Also, so we don’t go comparing apples to oranges, I’ll point out that Amazon also has a Queue service (you can read about it here). They also have a similar service called Simple Notification Service, which you can read about here. Note that Amazon’s messaging services all use the same RESTful interface, as I already described.
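    That shared RESTful shape is the Query API: every action is an HTTP request whose parameters travel in the query string. The sketch below builds (but does not sign or send) an SQS SendMessage request; the queue URL is made up, and real requests additionally carry AWS signature parameters, omitted here:

```python
# Illustration of the Query API shape shared by Amazon's messaging
# services. Builds a request URL only; no signing, no network I/O.
from urllib.parse import urlencode

def sqs_send_message_url(queue_url, body):
    """Build (but do not sign or send) an SQS SendMessage request URL."""
    params = {
        "Action": "SendMessage",
        "MessageBody": body,
        "Version": "2012-11-05",   # SQS Query API version of this era
    }
    return queue_url + "?" + urlencode(params)

url = sqs_send_message_url(
    "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue", "hello"
)
print(url)
```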

    In general, the API for using Azure from Java isn’t too complicated. Like Amazon, Microsoft offers a set of tools that help write Java code for Azure;
    The APIs for working with Azure aren’t difficult to use.

    But what about standards? Can you port them? Underneath those Java calls are REST calls to the Azure servers. Those REST calls are completely different from Amazon’s, and, for that matter, anyone else’s. Just like Amazon, Microsoft has created its own API. The difference with Amazon is that others have attempted to implement the same API, treating Amazon as a standard.

    So right now it’s sounding like the score is tied: from a strict programming perspective, both companies have their own RESTful API, and their own libraries for using the API. The moment you start using either, you’re locked in for the most part.

  21. Tomi Engdahl says:

    Python slithers up Amazon’s Beanstalk

    Python has become the newest language welcomed into Amazon’s cloud fold, through Amazon Web Services’ Elastic Beanstalk.

    The cloud giant today announced that Python applications are now supported on Elastic Beanstalk – along with PHP, Java and Microsoft’s .NET family.

    The news smooths the way for Django and Flask, the rapid and lightweight application development frameworks for Python apps, to get an easier Amazon fluffing.

    Elastic Beanstalk automatically deploys applications by taking care of capacity provisioning, load balancing, auto-scaling and health monitoring.

    The command line tool manages all of this by deploying your applications across Amazon components such as the Simple Storage Service and Elastic Load Balancing. Of course, that also means you become more dependent on the Amazon fabric.

  22. Tomi Engdahl says:

    Amazon tries to freeze out tape with cheap ‘n’ cloudy Glacier
    Cloud giant rolls over earthly archives

    Amazon is digging deeper into the enterprise with a data back-up and archival service designed to help kill off tape.

    The cloud provider has just launched Glacier, which it says takes the headache out of digital archiving and delivers “extremely low” cost storage.

    Glacier has been built on the Amazon storage, management and security infrastructure and is being offered as a low-cost cloudy alternative to building or paying for expensive services using traditional storage technologies – particularly tape.

    Enter Amazon, with its disk and server-based system and pay-as-you-go consumption. Glacier starts at $0.01 per gigabyte for a month, with further charges for data requests and transfers.

  23. Tomi Engdahl says:

    Amazon launches Glacier cloud storage, hopes enterprise will go cold on tape use

    The tapeless Glacier service sees Amazon Web Services target on-premise tape systems with a redundant cloud storage technology, though to win business it will have to battle enterprise concerns about the stability of its cloud.

  24. Tomi Engdahl says:

    Amazon Glacier: Archival Storage for One Penny Per GB Per Month

    I’m going to bet that you (or your organization) spend a lot of time and a lot of money archiving mission-critical data. No matter whether you’re currently using disk, optical media or tape-based storage, it’s probably a more complicated and expensive process than you’d like, one that has you spending time maintaining hardware, planning capacity, negotiating with vendors and managing facilities.


    If so, then you are going to find our newest service, Amazon Glacier, very interesting. With Glacier, you can store any amount of data with high durability at a cost that will allow you to get rid of your tape libraries and robots and all the operational complexity and overhead that have been part and parcel of data archiving for decades.

    Glacier will store your data with high durability (the service is designed to provide average annual durability of 99.999999999% per archive). Behind the scenes, Glacier performs systematic data integrity checks and heals itself as necessary with no intervention on your part. There’s plenty of redundancy and Glacier can sustain the concurrent loss of data in two facilities.

    At this point you may be thinking that this sounds just like Amazon S3, but Amazon Glacier differs from S3 in two crucial ways.

    First, S3 is optimized for rapid retrieval (generally tens to hundreds of milliseconds per request). Glacier is not (we didn’t call it Glacier for nothing). With Glacier, your retrieval requests are queued up and honored at a somewhat leisurely pace. Your archive will be available for downloading in 3 to 5 hours.

    Each retrieval request that you make to Glacier is called a job. You can poll Glacier to see if your data is available, or you can ask it to send a notification to the Amazon SNS topic of your choice when the data is available. You can then access the data via HTTP GET requests, including byte range requests. The data will remain available to you for 24 hours.

    Retrieval requests are priced differently, too. You can retrieve up to 5% of your average monthly storage, pro-rated daily, for free each month. Beyond that, you are charged a retrieval fee starting at $0.01 per Gigabyte (see the pricing page for details). So for data that you’ll need to retrieve in greater volume more frequently, S3 may be a more cost-effective service.
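    The pricing rules just described are simple enough to sketch. The $0.01 per GB-month storage price and the 5 per cent monthly free-retrieval allowance come from the article; the 30-day month is an illustrative simplification:

```python
# Rough model of Glacier economics as described above.
STORAGE_PRICE = 0.01          # $ per GB-month

def monthly_storage_cost(stored_gb):
    return stored_gb * STORAGE_PRICE

def free_retrieval_gb_per_day(avg_stored_gb, days_in_month=30):
    # 5% of average monthly storage, pro-rated daily, is free to retrieve
    return avg_stored_gb * 0.05 / days_in_month

archive_gb = 12_000
print(f"storage: ${monthly_storage_cost(archive_gb):.2f}/month")              # $120.00/month
print(f"free retrieval: {free_retrieval_gb_per_day(archive_gb):.0f} GB/day")  # 20 GB/day
```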

  25. Tomi Engdahl says:

    Is There a Landmine Hidden in Amazon’s Glacier?

    On Tuesday, Amazon unveiled a new online storage service known as Glacier. It’s called Glacier because it deals in “cold storage” — i.e., the long-term storage of things like medical records or financial documents that you may need to archive for regulatory purposes.

    This storage is “cold” because you don’t access it very often — or very quickly. It’s the stuff you might normally put on tape in a vault somewhere.

    It’s much cheaper than other storage services, but it’s also much slower.

    How much cheaper? That’s a very good question.

    Glacier’s pricing model has some people worrying. The cost of retrieving data is quite different from the cost of storing it.

    It will also take three to five hours to prepare an archive for downloading, which will also deter misuse of the service. Presumably, Amazon powers off the hardware until it’s needed.

  26. Tomi Engdahl says:

    Demystifying Amazon’s Cloud Player

    Moving your digital music files from your old computer to your phone to your laptop to your new computer used to be a lengthy and annoying process, especially for consumers with thousands of tracks from different music sources. Now, with tech companies offering “cloud,” or Web server-based, storage solutions for music, you can theoretically access files from any device with an Internet connection.

    But for most consumers, the concept of cloud storage and music “matching” services are still confusing, even as these services aim to streamline your music-listening experience.

    The service is now more comparable to iTunes Match, Apple’s cloud-based service, which similarly scans and matches non-iTunes music files on up to 10 devices.

    The Cloud Player will also store and play the music you’ve purchased via Amazon.

    I found Amazon’s Cloud Player easy to use, despite the fact that I admittedly didn’t “get” scan-and-match services before.

    Amazon’s Cloud Player app is free to download, and users can buy or upload up to 250 songs from their computers to the Cloud Player at no charge. After that, the service costs $25 a year, the same price Apple charges for iTunes Match.

    The Amazon MP3 store sells more than 20 million songs, compared with the iTunes catalog of 28 million songs.

    I also downloaded the Cloud Player app onto Amazon’s Kindle Fire tablet, and from there was able to wirelessly stream music to a Sonos speaker.

    I also experienced some delays in song starts when trying to play matched iTunes songs from the Cloud side of the Amazon app.

    But once you’ve downloaded the purchased songs into your Cloud Player app, you can listen to them later, even if you don’t have a network connection.

    A third player worth noting here is Google Play, which lets you keep up to 20,000 songs in a Google cloud “locker” for no charge. But Google Play requires you to upload all of the songs to this digital locker yourself, since Google doesn’t yet have the rights to scan and create a match of your library for you

  27. Tomi Engdahl says:

    Amazon Opens Marketplace for … Virtual Computers

    Amazon has a split personality.

    On the one hand, it’s an online retailer and marketplace where you can buy stuff like books, DVDs, games, and gardening tools. On the other, it’s a massive “cloud” service where you can rent virtual servers and storage and other tools for building and hosting your own online software applications.

    That may seem like an odd mix. But it works out quite well for the company. Each personality dominates its particular market, and the two have a conveniently symbiotic relationship.

    Now, as if to highlight this split personality, Amazon has unveiled a new service that combines its two halves in a new way. On Wednesday, the company introduced an online marketplace where you can buy and sell virtual servers. It’s called the Amazon EC2 Reserved Instance Marketplace.

    If you’ve bought some virtual servers on Amazon’s EC2 service that you don’t need, you can offload them. And if you need extra servers, you can buy them on the cheap.

    “I often tell people that cloud computing is equal parts technology and business model,” reads a blog post from Amazon man Jeff Barr. “If you have excess capacity, you can list it on the marketplace and sell it to someone who needs additional capacity.”

  28. Tomi Engdahl says:

    Amazon launches its own mobile Maps API
    Available in beta for developers

    ONLINE RETAILER Amazon is preparing to move away from Google Maps by launching its own mobile Maps API for developers.

    “The Amazon Maps API makes it easy for you to integrate mapping functionality into apps that run on the all-new Kindle Fire and Kindle Fire HD,” Amazon said in its blog post. “These new devices will also support location-based services through the android.location API.”

    Because the API is not based on a proprietary mapping service provided by Amazon, the retailer has partnered with Nokia Maps to power the service on the Kindle Fire and Kindle Fire HD tablets that it announced earlier this month.

    Amazon said its Maps API is available now in beta

  29. Tomi Engdahl says:

    Nasdaq takes stock in AWS cloud

    The Nasdaq OMX stock exchange has struck a deal with Amazon Web Services (AWS) that will allow traders to store records and customer emails in the cloud.

    The information will be stored using Nasdaq’s FinQloud platform, which was built by AWS, and kept at the firm’s datacentre in Virginia, United States.

    Companies in the financial services industry are typically required to retain trading information for seven years, which can result in huge storage bills.

    AWS has also stressed that all of its traders’ data will be passed through an encryption system to ensure the FinQloud platform meets financial regulators’ stringent requirements.

  32. Tomi Engdahl says:

    Amazon EC2 cloud does Windows Server 2012
    Cloudy OS-on-Elastic Beanstalk action

    Microsoft wants you to build your clouds out of the new Windows Server 2012 operating system, and it wants you to run applications on its Windows Azure cloud, too. But if it can’t get you to go all-Redmond, then it will settle for you running Windows Server 2012 on Amazon’s cloudy competition, the EC2 compute cloud and the Elastic Beanstalk autoscaling feature for it.

    For its part, Amazon just wants you to run any and all operating systems and applications on its cloud, and it particularly likes Windows Server because it charges a hefty premium for EC2 images that run it. The premium is nearly a factor of two for most instance types.

    Amazon was touting the fact that if you are new to this whole cloud computing thing, or to AWS in particular, you could take Windows Server 2012 out for a spin on the EC2 “micro instances” that are free to use for a year – provided you are a new customer. These freebie cloudy server slices can run Windows or Linux.

    As Amazon explained in its announcement, the new Windows Server 2012, which launched in September as Microsoft’s “cloud OS,” runs on any EC2 virtual machine instance and when it does, it is the AWS stack that is the cloud OS and Windows is relegated to being a runtime environment for applications.

    Amazon has ginned up a slew of Amazon Machine Images (AMIs) based on Windows Server 2012 Standard Edition in nineteen different languages. Amazon has also created Windows Server 2012 AMI variants tweaked for supporting SQL Server 2008 in its R1 and R2 releases and SQL Server 2012 in its Express, Web, and Standard editions. All regions of the AWS cloud can run Windows Server 2012.

    Amazon has also announced that its Elastic Beanstalk autoscaling function for the EC2 compute cloud can automagically scale up and down Windows Server 2012 images running the .NET framework and supporting .NET runtimes.

    With the combination of EC2 and Elastic Beanstalk, you can create applications in Microsoft’s Visual Studio 2012 and .NET 4.5 framework and dispatch them to the AWS cloud from Visual Studio. You can also deploy those apps using the AWS Management Console.

  33. Tomi Engdahl says:

    Amazon cloud spin-off ‘inevitable,’ says Oppenheimer
    You’ve got to segregate to accumulate – or do you?

    Analysis Amazon Web Services must be spun-off from its mothership to prevent it losing out on cloud customers, one analyst has argued – but a break-up could render AWS toothless, says The Reg.

    The spin-off was recommended by Oppenheimer analyst Tim Horan in a report published on Monday.

    “In our view, we believe an ultimate spin-off of AWS is inevitable due to its channel conflicts and the need to gain scale,” Horan wrote. “We see the business as extremely valuable on a standalone basis, possibly even operating as a REIT,” the insider’s acronym for a real estate investment trust.

    The crack in this bout of crystal-ball gazing is that Oppenheimer is an investment firm that by nature likes predictable cash above everything else, and Amazon’s leader Jeff Bezos is a mercurial, ambitious figure who has demonstrated time and time again a love for risky, long-term projects*.

    For Amazon, a spin-off would break its close links with its only major technology supplier and could lead to unpredictable rises in its own costs – a nasty proposition when the company has a penchant for running at 1 or 2 per cent margins.

  34. Tomi Engdahl says:

    AWS OpsWorks – Flexible Application Management in the Cloud Using Chef

    AWS OpsWorks features an integrated management experience for the entire application lifecycle including resource provisioning, configuration management, application deployment, monitoring, and access control. It will work with applications of any level of complexity and is independent of any particular architectural pattern.

    AWS OpsWorks was designed to simplify the process of managing the application lifecycle without imposing arbitrary limits or forcing you to work within an overly constrained model. You have the freedom to design your application stack as you see fit.

  35. Tomi Engdahl says:

    Linux Foundation takes over Xen, enlists Amazon in war to rule the cloud
    Xen virtualization gains support from Amazon, Cisco, Google, Intel, and more.

    The Linux Foundation has taken control of the open source Xen virtualization platform and enlisted a dozen industry giants in a quest to be the leading software for building cloud networks.

    The 10-year-old Xen hypervisor was formerly a community project sponsored by Citrix, much as the Fedora operating system is a community project sponsored by Red Hat.

    Amazon is perhaps the most significant name on that list in regard to Xen. The Amazon Elastic Compute Cloud is likely the most widely used public infrastructure-as-a-service (IaaS) cloud, and it is built on Xen virtualization. Rackspace’s public cloud also uses Xen.

  39. Tomi Engdahl says:

    Amazon lashes Nvidia’s GRID GPU to its cloud: But can it run Crysis?
    It’ll certainly cost you top dollar at Jeff’s Virtualization Palace

    Amazon has chugged Nvidia’s new virtualized GPU technology to spin-up a new class of rentable instances for 3D visualizations and other graphics-heavy applications.

    The “G2” instances, announced by Amazon on Tuesday, are another nod by a major provider to the value of Nvidia’s GRID GPU adapters, which launched in 2012.

    These GRID boards provide hardware virtualization of Nvidia’s Kepler architecture GPUs, which include an H.264 video encoding engine. The instances will give developers 1,536 parallel processing cores to play with for video creation, graphics-intensive streaming, and “other server-side graphics workloads requiring massive parallel processing power,” according to the PR bumf.

    It also supports DirectX, OpenGL, Cuda, and OpenCL applications, demonstrating Amazon’s lack of allegiance to any particular technology in its quest to tear as much money away from on-prem spend as possible.

  40. Tomi Engdahl says:

    How Amazon is building substations, laying fiber and generally doing everything to keep cloud costs down

    Amazon Web Services VP and Distinguished Engineer James Hamilton explained during a session at the AWS re:Invent conference how the cloud provider keeps costs as low as possible and innovation as high as possible. It’s all about being the master of your infrastructure.

    If there’s anyone still left wondering how it is that large cloud providers can keep on rolling out new features and lowering their prices even when no one is complaining about them, Amazon Web Services Vice President and Distinguished Engineer James Hamilton spelled out the answer in one word during a presentation Thursday at the company’s re:Invent conference: Scale.

    Scale is the enabler of everything at AWS. To express the type of scale he’s talking about, Hamilton noted an oft-cited statistic — that AWS adds enough capacity every day to power the entirety of Amazon.com back when it was a $7 billion business. “In fact, it’s way bigger than that,” he added. “It’s way bigger than that every day.”

    Seven days a week, the global cycle of building, testing, shipping, racking and deploying AWS’s computing gear “just keeps cranking,” Hamilton said.

    “the best thing you can do for innovation is drive the risk of failure down and make the cycle quicker.”

    The cost of delivering a service at scale is all in the infrastructure. The software engineering costs “round to zero,” Hamilton said.

    That’s why he thinks he’s seen more innovation in the world of computing in the past 5 years than in the previous 20 years — because companies like Amazon, Facebook, Google and Microsoft have gotten so good at scaling their infrastructure.

    Like Google and Facebook, Amazon is designing its own servers, and they’re all specialized for the particular service they’re running. Back in the day, Hamilton used to lobby for just having one or two SKUs from a server vendor in order to minimize complexity, but times have changed. Once you master the process, going straight to server manufacturers with custom designs can lop 30 percent off the price right away, not to mention the improved performance and faster turnaround time.

    Today, “You’d be stealing from your customers not to optimize your hardware,” he said.

    The densest storage servers you can buy commercially today come from Quanta, and a rack full of them would weigh in at about three-quarters of a ton. “We have a far denser design — it is more than a ton,” Hamilton said.

    Networking is a huge problem today as prices keep rising and force many companies to oversubscribe their data center bandwidth, Hamilton said. In many typical scenarios, only 1 out of every 60 servers could transmit at full bandwidth at one time, and that works fine because they’re usually not doing much.
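    The 1-in-60 figure is an oversubscription ratio: the total line rate of the servers sharing an uplink divided by the uplink's capacity. A trivial sketch (the NIC and uplink speeds below are illustrative assumptions, not numbers from the talk):

```python
def oversubscription(servers_per_uplink, nic_gbps, uplink_gbps):
    """Ratio of aggregate server bandwidth to the shared uplink's
    bandwidth. A ratio of 60 matches the talk's example: only 1 in 60
    servers can transmit at full line rate at any one time."""
    return servers_per_uplink * nic_gbps / uplink_gbps

# 60 servers with 10 Gbps NICs behind one 10 Gbps uplink -> ratio 60.0
ratio = oversubscription(60, 10, 10)
```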

    So, like Google and, soon, Facebook, AWS is building its own networking gear and its own protocol stack. “We’ve taken over the network,” Hamilton said. “… Suddenly we can do what we normally do.”

    AWS also builds its own electric substations, which is not a minor undertaking considering that each one requires between 50 and 100 megawatts to really be efficient, Hamilton explained. “Fifty megawatts — that’s a lot of servers,” he added “… [M]any tens of thousands.”

    The company even has firmware engineers whose job it is to rewrite the archaic code that normally runs on the switchgear designed to control the flow of power to electricity infrastructure.

    Rather than protecting a generator, Hamilton said, “Our goal is to keep the servers running.”

    Companies of all types have been struggling with the issue of efficiently using their resources for years, because they buy enough servers to ensure they can handle peak workloads and then keep them idle the rest of the time.

    Luckily, being a cloud provider lets you get well above the usual 20 percent utilization number just by nature. For starters, because AWS is constantly running “a combination of non-correlated workloads,” Hamilton explained, resource utilization just naturally levels itself out.
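    The utilization gain from pooling non-correlated workloads can be sketched with a toy simulation. A provider must size capacity for peak load, so the interesting number is the peak-to-mean ratio, which shrinks as independent tenants are aggregated because their peaks rarely coincide. (The uniform-random load model is purely an illustrative assumption.)

```python
import random
import statistics

def peak_to_mean(series):
    """Peak-to-mean ratio of a load series: capacity sized for the
    peak runs at mean/peak average utilization."""
    return max(series) / statistics.mean(series)

random.seed(42)
steps = 1000
# One bursty tenant with load uniform in [0, 100): peak is ~2x its mean,
# so peak-sized capacity idles at roughly 50% utilization.
single = [random.uniform(0, 100) for _ in range(steps)]
# 100 uncorrelated tenants summed: the aggregate's peak sits much
# closer to its mean, so the same peak-sizing wastes far less.
pooled = [sum(random.uniform(0, 100) for _ in range(100))
          for _ in range(steps)]
```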

  41. Tomi Engdahl says:

    Why the Cloud Requires a Totally Different Data Center
    If there’s one mind-blowing statistic about Amazon Web Services, it’s the company’s scale.


    The cloud is a nascent technology, but AWS is already a multi-billion-dollar business and its cloud is reportedly five times bigger than its 14 top competitors combined, according to Gartner. Amazon’s Simple Storage Service (S3) stores more than a trillion files and processes 1.5 million requests per second. DynamoDB, the AWS-designed NoSQL database, is less than a year old and last month it already had more than 2 trillion input or output requests.

    Supplying all those services at that scale requires a lot of hardware. The cloud division is growing fast, though, which means that AWS is continually adding more hardware to its data centers.

    How does AWS keep up with all that? The man who directs the strategy behind it, AWS Vice President and Distinguished Engineer James Hamilton, shared insights into this at the company’s re:Invent customer conference in Las Vegas last week. In a nutshell, “Scale is the enabler of everything,” he says.

    AWS has optimized its hardware for its specific use cases, he says. AWS has built custom compute, storage and networking servers, which allow the company to customize down to a granular level. Its storage servers are “far denser” than anything on the market and each weighs more than a ton, Hamilton says. Most recently AWS customized its networking gear to create routers and protocol stacks that provision high performance workloads.

    AWS even customizes its power consumption processes. The company has negotiated bulk power purchase agreements with suppliers to get the energy needed to power its dozens of data centers across nine regions of the globe

    Even with all the customization, AWS can’t always predict exactly how much of its resources will be used. If AWS can increase its utilization, its costs will be lower because it will get more bang for its buck from the hardware.

    There will still be under-utilization, but AWS has tried to turn that into an advantage. The introduction of spot-instances, which allow customers to place bids on excess instances, enables this.

  42. Tomi Engdahl says:

    Amazon Web Services runs out of (some) servers
    Cloudy concern also reveals new Linux-slurping plans

    Amazon Web Services has run out of servers. Or at least the special type of server it uses to power the new C3 instance type.

    C3 instances are “compute optimised” thanks to the presence of an Ivy Bridge Intel Xeon running at 2.8 GHz, along with a solid state disk. Launched at AWS’s desert talkfest AWS re:invent back in November, C3 instances have proved so popular AWS has admitted that “some of you are not getting the C3 capacity you’re asking for when you request it.”

    In other words, it’s out of servers

    AWS may well be ramping up its server population for another reason, because it’s just announced an expanded Linux-server-cloudification offering. Those of you running a variety of 64-bit Linux virtual machines in VMware, Xen or Hyper-V formats can now import them to AWS’ cloud, where you could either put them into production or leave them ready to fire up as a disaster recovery resource

  43. Tomi Engdahl says:

    Amazon Web Services Blog
    VM Import / Export for Linux

    If you have invested in the creation of “golden” Linux images suitable for your on-premises environment, I have some good news for you.

  44. Tomi Engdahl says:

    AWS imposes national borders on Cloudland
    ‘Geo Restriction’ feature keeps foreign undesirables away from your content

    Amazon Web Services (AWS) has drawn up borders within its cloud with a new ‘Geo Restriction’ feature for its CloudFront service.

    CloudFront is AWS’ content distribution offering and speeds downloads for all manner of media, often by locating it closer to users.

    AWS can now ensure that only the users you want – or at least those within borders you desire – can access content served from CloudFront thanks to the Geo Restriction feature. Amazon says the new feature means “you can choose the countries where you want Amazon CloudFront to deliver your content.”

    You might wish to do that, AWS says, because “licensing requirements restrict some media customers from delivering movies outside a single country.”
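    In the distribution configuration, the feature surfaces as a country whitelist or blacklist. A sketch of the relevant fragment (field names follow CloudFront's GeoRestriction element; the country list is a made-up example, not from the article):

```python
# Fragment of a CloudFront distribution config enabling Geo Restriction.
# "whitelist" allows only the listed countries; "blacklist" would block
# them instead. Country codes are ISO 3166-1 alpha-2.
geo_restriction = {
    "Restrictions": {
        "GeoRestriction": {
            "RestrictionType": "whitelist",
            "Quantity": 2,               # must equal len(Items)
            "Items": ["US", "CA"],       # example countries
        }
    }
}
```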

  45. Tomi Engdahl says:

    Amazon’s ‘schizophrenic’ open source selfishness scares off potential talent, say insiders
    Moles blame Bezos for paltry code sharing

    Amazon is one of the most technically influential companies operating today – but you wouldn’t know it, thanks to a dearth of published research papers and negligible code contributions to the open-source projects it relies on.

    This, according to multiple insiders, is becoming a problem. The corporation is described as a “black hole” because improvements and fixes for the open-source software it uses rarely see the light of day. And, we’re told, that policy of secrecy comes right from the top – and it’s driving talent into the arms of its rivals.

    This secretiveness, “comes from Jeff,” claimed another source. “It’s passed down in HR training and policy. It’s all very clear.”

    Though a select few are permitted to give public talks, when they do, they disclose far less information about their company’s technology than their peers.

    “Amazon behaves a lot like a classified military agency,” explained another ex-Amazonian

    Multiple sources have speculated to us that Amazon’s secrecy comes from Jeff Bezos’ professional grounding in the financial industry, where he worked in trading systems. This field is notoriously competitive and very, very hush-hush. That may have influenced his thoughts about how open Amazon should operate, as does his role in a market where he competes with retail giants such as Walmart.

    But one contact argued that a taciturn approach may not be appropriate for the advanced technology Amazon has developed for its large-scale cloud computing business division, Amazon Web Services.

    “In the Amazon case, there is a particular schizophrenia between retail and technology, and the retail culture dominates,” explained the source. “Retail frugality is all about secrecy because margins are so small so you can’t betray anything – secrecy is a dominant factor in the Amazon culture.

    “It’s a huge cost to the company.”

  46. Tomi Engdahl says:

    AWS could ‘consider’ ARM CPUs, RISC-as-a-service
    CTO Vogels says ‘power management for ARM is considered state of the art’

    Amazon Web Services (AWS) chief technology officer Werner Vogels believes the cloudy colossus could, in the future, consider using ARM CPUs, or even offering RISC-as-a-service to help those on legacy platforms enjoy cloud elasticity.

    AWS, he added, is “always looking for efficiency” and as “power management for ARM is considered state of the art” it makes sense to consider it.

  47. Tomi Engdahl says:

    AWS bins elastic compute units, adopts virtual CPUs
    Customers tired of wrapping their heads around odd computing power metric

    Gartner analyst Kyle Hilgendorf has spotted something very interesting: Amazon Web Services seems to have stopped rating cloud servers based on EC2 compute units (ECUs), its proprietary metric of computing power.

    ECUs were an odd metric, as they were based on “… the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor … equivalent to an early-2006 1.7 GHz Xeon”.

    Elastic Compute Units have been replaced with processor information and clock speed.

  49. Tomi Engdahl says:

    Amazon CTO destealths to throw light on AWS data centre design
    All-black outfit to explain it ain’t just about the white boxen

    Ask Amazon about its AWS data centres and you’ll get this response: Amazon doesn’t talk about its data centres. Until its chief technology officer pitches in, that is.

    AWS is becoming to many what Windows once was: a platform for doing business. It has started as something that let enterprises free themselves from the yoke of owning their own servers. Now it’s letting them deliver new services.

    Another important group of customers are internet pure plays who, again, don’t need to set up and run their own servers and infrastructure. These include everything from fundraising efforts such as Just Giving, a service for individuals and groups to raise funds online, to Omnifone – music streaming infrastructure employed by SiriusXM and Sony Music Unlimited.

    Just Giving and Omnifone sit between their customers and the raw AWS infrastructure that for the non-techie is still difficult to knit together. What they rely on are hundreds of thousands of servers and network switches that Amazon has custom designed and built, working with Intel and others. Servers are grouped into, yes, data centres, which comprise Amazon’s Availability Zones, which themselves in turn make up regions – there are 10 regions and 28 zones.

    Each region is comprised of two or more Availability Zones and each zone has at least one data centre. No one data centre serves two Availability Zones, while some Zones are served by up to six data centres. Data centres must also be on different power grids, so no one power outage can take down a Zone.
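    Those topology rules are simple enough to encode as a validity check. A minimal sketch (the class and field names here are mine, not AWS's, and this ignores the power-grid constraint):

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    power_grid: str

@dataclass
class AvailabilityZone:
    name: str
    data_centers: list

def valid_region(zones):
    """Check the constraints described above: a region has at least
    two Availability Zones, every zone has at least one data centre,
    and no data centre serves two zones."""
    if len(zones) < 2:
        return False
    seen = set()
    for az in zones:
        if not az.data_centers:
            return False
        for dc in az.data_centers:
            if dc.name in seen:
                return False
            seen.add(dc.name)
    return True
```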

    Availability Zones are AWS’s way to circumvent the problems of back-up and latency that traditionally dog wide-area computing. Traditionally, a company in, say, New York might have disaster back-up in New Jersey, with data also replicated across the US in Los Angeles.

    However, according to Vogels: “This old replication was deemed not fit for scale. One transaction is 1-2 milliseconds and replicating that will cost you 100 milliseconds. Then if you have to do fail over from New York to LA it’s a nightmare – failing back is even worse. Integrating a failed system into a live system is a nightmare.”

    To solve latency, Amazon built Availability Zones on groups of tightly coupled data centres. Each data centre in a Zone is less than 25 microseconds away from its sibling and packs 102Tbps of networking.

    As for those data centres, each is capped at 80,000 servers – determined to be the upper optimum limit – but contains at least 50,000. Servers are built by Amazon, working with Intel and other manufacturers. These aren’t cheap-o boxen, according to Vogels.

    “Don’t think these are white-box servers,”

    Amazon has also stripped out unwanted features that come with standard, off-the-shelf servers. Gone are audio chips and power transformers

  50. Tomi Engdahl says:

    Revealed: Why Amazon, Netflix, Tinder, Airbnb and co plunged offline
    And the dodgy database at the heart of the crash is suffering again right now

    Netflix, Tinder, Airbnb and other big names were crippled or thrown offline for millions of people when Amazon suffered what’s now revealed to be a cascade of cock-ups.

    On Sunday, Amazon Web Services (AWS), which powers a good chunk of the internet, broke down and cut off websites from people eager to stream TV or hook up with strangers; thousands complained they couldn’t watch Netflix, chat up potential partners, find a place to crash via Airbnb, memorize trivia on IMDb, and so on.

    Today, it’s emerged the mega-outage was caused by vital systems in one part of AWS taking too long to send information to another part that was needed by customers.

    In technical terms, the internal metadata servers in AWS’s DynamoDB database service were not answering queries from the storage systems within a particular time limit.

    DynamoDB tables can be split into partitions scattered over many servers.

    At about 0220 PT on Sunday, the metadata service was taking too long sending back answers to the storage servers.

    At that moment on Sunday, the levee broke: too many taxing requests hit the metadata servers simultaneously, causing them to slow down and not respond to the storage systems in time. This forced the storage systems to stop handling requests for data from customers, and instead retry their membership queries to the metadata service – putting further strain on the cloud.
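    The feedback loop described here – timeouts triggering retries that add yet more load – is exactly what capped exponential backoff with jitter is meant to damp. A generic sketch of that mitigation (not AWS's actual retry code; the parameter values are arbitrary):

```python
import random

def backoff_delays(attempts, base=0.05, cap=5.0):
    """Retry delays using 'full jitter': each wait is drawn uniformly
    from [0, min(cap, base * 2**attempt)], so a fleet of clients that
    failed together spreads its retries out over time instead of
    retrying in lockstep and re-creating the original spike."""
    return [random.uniform(0, min(cap, base * 2 ** attempt))
            for attempt in range(attempts)]
```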

    It got so bad AWS engineers were unable to send administrative commands to the metadata systems.

    Other services were hit by the outage, too: EC2 Auto Scaling, the Simple Queue Service, CloudWatch, and the AWS Console all suffered problems.

