Who's who of cloud market

Seemingly every tech vendor has a cloud strategy, with new products and services dubbed “cloud” coming out every week. But who are the real market leaders in this business? Research firm Gartner’s answer lies in its Magic Quadrant report for the infrastructure as a service (IaaS) market; the Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article shows Gartner’s Magic Quadrant for IaaS.

It is interesting that missing from the quadrant figure are big-name companies that have invested heavily in the cloud, including Microsoft, HP, IBM and Google. The reason is that the report only includes providers whose IaaS clouds were in general availability as of June 2012 (Microsoft, HP and Google had clouds in beta at the time).

Gartner reinforces what many in the cloud industry believe: Amazon Web Services is the 800-pound gorilla. But Gartner also found one big minus for Amazon Web Services: AWS has a “weak, narrowly defined” service-level agreement (SLA), which requires customers to spread workloads across multiple availability zones. AWS was not the only provider whose SLA details drew criticism.
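In practice, spreading a workload across availability zones means launching identical instances in each zone. A minimal sketch of how that might look with boto3’s EC2 client is below; the AMI ID and zone names are placeholders, not real resources:

```python
# Sketch: spreading identical workloads across availability zones,
# as the AWS SLA effectively requires. "ami-12345678" and the zone
# names are placeholders, not real resources.

def spread_across_zones(image_id, zones, count_per_zone=1):
    """Build one run_instances parameter dict per availability zone.

    Each dict could be passed to boto3's EC2 client, e.g.:
        boto3.client("ec2").run_instances(**params)
    """
    requests = []
    for zone in zones:
        requests.append({
            "ImageId": image_id,
            "MinCount": count_per_zone,
            "MaxCount": count_per_zone,
            "Placement": {"AvailabilityZone": zone},
        })
    return requests

# One request per zone: lose a zone, keep the workload running.
params = spread_across_zones("ami-12345678", ["us-east-1a", "us-east-1b"])
```

The point of the per-zone request list is that losing one zone still leaves identical capacity running in the other.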

Read the whole Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article to see Gartner’s view of the cloud market today.

1,065 Comments

  1. Tomi Engdahl says:

    CoreOS chief decries cloud lock-in
    Adds Kubernetes and etcd as services in Tectonic
    https://www.theregister.co.uk/2017/05/31/coreos_decries_cloud_lockin/

    CoreOS CEO Alex Polvi spent Wednesday morning biting the hands that fed attendees at his company’s conference, CoreOS Fest 2017.

    “Every shift in infrastructure that we’ve seen … has promised more efficiency, reliability and agility,” said Polvi. “But every single one has resulted in a massive proprietary software vendor that has undermined all the work done in the free software community. And we’re beginning to believe cloud is looking the same.”

    As Polvi proceeded to hang Amazon Web Services as his pinata, he acknowledged the awkwardness of his line of argument because Amazon, Google, IBM, and Microsoft were among the cloud service providers paying for pastries and the like at the event.

    While the compute component of cloud services has become relatively commoditized, the higher-order services available on cloud platforms, like databases, can lock customers in, Polvi insisted.

  2. Tomi Engdahl says:

    The Linux cloud swap that spells trouble for Microsoft and VMware
    Containers just wanna be hypervisors
    https://www.theregister.co.uk/2017/06/01/linux_open_source_container_threat_to_vmware_microsoft/

    Just occasionally, you get it right. Six years ago, I called containers “every sysadmin’s dream,” and look at them now. Even the Linux Foundation’s annual bash has been renamed from “LinuxCon + CloudOpen + Embedded Linux Conference” to “LinuxCon + ContainerCon”.

    Why? Because virtualization has been enterprise IT’s favourite toy for more than a decade, and the rise of “cloud computing” has promoted it even more. When something gets that big, everyone jumps on board and starts looking for an edge – and containers are much more efficient than whole-system virtualization, so there are savings to be made and performance gains to win. The price is that admins have to learn new security and management skills and tools.

    But an important recent trend is one I didn’t expect: these two very different technologies beginning to merge.

    Traditional virtualization is a special kind of emulation: you emulate a system on itself. Mainframes have had it for about 40 years, but everyone thought it was impossible on x86. All the “type 1” and “type 2 hypervisor” stuff is marketing guff – VMware came up with a near-native-speed PC emulator for the PC. It’s how everything from KVM to Hyper-V works. Software emulates a whole PC, from the BIOS to the disks and NICs, so you can run one OS under another.

    It’s conceptually simple. The hard part was making it fast. VMware’s big innovation was running most of the guest’s code natively, and finding a way to trap just the “ring 0” kernel-mode code and run only that through its software x86 CPU emulation. Later, others worked out how and did the same, then Intel and AMD extended their chips to hardware-accelerate running ring-0 code under another OS – by inserting a “ring -1” underneath.
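    The trap-and-emulate split described above can be sketched with a toy model: ordinary “instructions” pass straight through, while privileged ones are intercepted and handled in software. This is purely a conceptual illustration, not how any real VMM is coded; the instruction names are stand-ins.

```python
# Toy model of trap-and-emulate: most guest "instructions" run
# natively; privileged (ring-0) ones trap to the hypervisor, which
# emulates their effect in software. Purely illustrative.

PRIVILEGED = {"HLT", "OUT", "WRMSR"}  # stand-ins for ring-0 instructions

def run_guest(instructions):
    trace = []
    for insn in instructions:
        if insn in PRIVILEGED:
            # Slow path: trap, then emulate the instruction's effect.
            trace.append(f"trapped+emulated {insn}")
        else:
            # Fast path: runs directly on the CPU at native speed.
            trace.append(f"native {insn}")
    return trace

trace = run_guest(["ADD", "MOV", "OUT", "ADD"])
```

    The performance story falls out of the model: the more time a guest spends on the fast path, the closer it gets to native speed, which is why hardware assistance for the trap path mattered so much.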

    But it’s still very inefficient.

    Yes, it’s improved, there are good management tools and so on, but all PC OSes were designed around the assumption that they run on their own dedicated hardware. Virtualization is still a kludge – but just one so very handy that everyone uses it.

    That’s why containers are much more efficient: they provide isolation without emulation. Normal PC OSes are divided into two parts: the kernel and drivers in ring 0, and all the ordinary unprivileged code – the “GNU” part of GNU/Linux – and your apps, in ring 3.

    With containers, a single kernel runs multiple separate, walled-off userlands (the ring 3 stuff). Each thinks it’s the only thing on the machine. But the kernel keeps total control of all the processes in all the containers.

    There’s no emulation, no separate memory spaces or virtual disks. A single kernel juggles multiple processes in one memory space, as it was designed to do. It doesn’t matter if a container holds one process or a thousand. To the kernel, they’re just ordinary programs – they load and can be paused, duplicated, killed or restarted in milliseconds.
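    A toy model makes the point concrete: to the kernel, containerized processes are just ordinary entries in one process table, tagged with a container id. This is an illustration of the idea only; real containers are built from kernel namespaces and cgroups, not a class like this.

```python
# Toy model: one kernel, one flat process table. A "container" is
# just a tag on ordinary processes, so pausing or killing a container
# is pausing or killing its member processes. Illustrative only;
# real containers use namespaces and cgroups.

class Kernel:
    def __init__(self):
        self.processes = {}   # pid -> (container, state)
        self.next_pid = 1

    def spawn(self, container, n=1):
        for _ in range(n):
            self.processes[self.next_pid] = (container, "running")
            self.next_pid += 1

    def pause_container(self, container):
        # Pausing a container is just pausing its member processes.
        for pid, (c, _state) in self.processes.items():
            if c == container:
                self.processes[pid] = (c, "paused")

    def ps(self):
        # One flat process table: the kernel keeps total control.
        return sorted(self.processes.items())

k = Kernel()
k.spawn("web", n=3)
k.spawn("db", n=1)
k.pause_container("web")
```

    Nothing here is emulated and there is no second memory space – which is exactly why container operations take milliseconds rather than the seconds a VM boot needs.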

    The hypervisor that isn’t a hypervisor

    Canonical has come up with something like a combination – although it admittedly has limitations. Its LXD “containervisor” runs system containers – ones holding a complete Linux distro from the init system upwards. The “container machines” share nothing but the kernel, so they can contain different versions of Ubuntu to the host – or even completely different distros.

    LXD uses btrfs or zfs to provide snapshotting and copy-on-write, permitting rapid live-migration between hosts. Block devices on the host – disk drives, network connections, almost anything – can be dedicated to particular containers, and limits set, and dynamically changed, on RAM, disk, processor and IO usage. You can change how many CPU cores a container has on the fly, or pin containers to particular cores.
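    The resource controls described above map onto LXD’s `limits.*` configuration keys, set through the `lxc config set` CLI. A sketch of building those invocations is below; the container name `c1` and the values are placeholders, and the commands follow LXD’s documented CLI of the period:

```python
# Sketch: building the LXD CLI invocations for the resource limits
# described above. Container name "c1" and the values are placeholders.
# "lxc config set <name> limits.cpu / limits.memory" are documented
# LXD commands; limits.cpu accepts either a core count or a pin set.

def lxd_limit_cmds(name, cpus=None, memory=None, pinned_cores=None):
    cmds = []
    if cpus is not None:
        cmds.append(["lxc", "config", "set", name, "limits.cpu", str(cpus)])
    if memory is not None:
        cmds.append(["lxc", "config", "set", name, "limits.memory", memory])
    if pinned_cores is not None:
        # Pin to specific cores, e.g. "0,1" or a range like "0-3".
        cmds.append(["lxc", "config", "set", name, "limits.cpu", pinned_cores])
    return cmds

cmds = lxd_limit_cmds("c1", cpus=2, memory="512MB")
# Each entry could be executed with subprocess.run(cmd, check=True).
```

    Because these are live config keys rather than boot-time settings, they can be changed while the container runs – the on-the-fly CPU changes mentioned above.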

    … and containers that aren’t really containers

    What’s the flipside of trying to make containers look like VMs? A hypervisor trying very hard to make VMs look like containers, complete with endorsement from an unexpected source.

    When IBM invented hypervisors back in the 1960s, it created two different flavours of mainframe OS – ones designed to host others in VMs, and other radically different ones designed solely to run inside VMs.

    Some time ago, Intel modified Linux into something akin to a mainframe-style system: a dedicated guest OS, plus a special hypervisor designed to run only that OS. The pairing of a hypervisor that will only run one specific Linux kernel, plus a kernel that can only run under that hypervisor, allowed Intel to dispense with a lot of baggage on both sides.

    The result is a tiny, simple hypervisor and tiny VMs, which start in a fraction of a second and require a fraction of the storage of conventional ones, with almost no emulation involved. In other words, much like containers.

    Intel announced this under the slightly misleading banner of “Clear Containers” some years ago. It didn’t take the world by storm, but slowly, support is growing. First, CoreOS added support for Clear Containers into container-based OSes. Later, Microsoft added it to Azure. Now, though, Docker supports it, which might speed adoption.

    Summary? Now both Docker and CoreOS rkt containers can be started in actual VMs, for additional isolation and security – whereas a Linux distro vendor is offering a container system that aims to look and work like a hypervisor. These are strange times.

  3. Tomi Engdahl says:

    The Ridiculous Bandwidth Costs of Amazon, Google and Microsoft Cloud Computing
    https://www.arador.com/ridiculous-bandwidth-costs-amazon-google-microsoft/

    In this article I compare the costs of network bandwidth transferred out of Amazon EC2, Google Cloud Platform, Microsoft Azure and Amazon Lightsail.

    Bandwidth costs are one of the most ridiculously expensive components of cloud computing, and there are some serious inconsistencies in the industry

    Conclusion

    Amazon EC2, Microsoft Azure and Google Cloud Platform are all seriously screwing their customers over when it comes to bandwidth charges.

    Every one of the big three has massive buying power yet between them their average bandwidth price is 3.4x higher than colocation facilities.

    If you move a significant amount of data you should think twice before moving to the cloud

    Want to disrupt the cloud computing industry? Give bandwidth away at cost.

    For the record “I LOVE AMAZON AWS!” It’s super flexible and awesome – I just don’t like their bandwidth pricing
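    The scale of the gap is easy to sketch in a few lines. The per-GB prices below are illustrative ballpark figures for first-tier egress around the time of the article, not current quotes, and the colocation figure is an assumed committed-rate equivalent:

```python
# Rough egress-cost comparison. Prices are illustrative ballpark
# figures (USD per GB, first pricing tier, circa 2017), NOT quotes;
# the colocation rate is an assumed committed-bandwidth equivalent.
EGRESS_PRICE_PER_GB = {
    "Amazon EC2": 0.09,
    "Google Cloud Platform": 0.12,
    "Microsoft Azure": 0.087,
    "Typical colocation": 0.03,
}

def monthly_egress_cost(provider, gigabytes):
    return EGRESS_PRICE_PER_GB[provider] * gigabytes

# Moving 10 TB (10,240 GB) out per month:
costs = {p: round(monthly_egress_cost(p, 10240), 2)
         for p in EGRESS_PRICE_PER_GB}
```

    Even with these rough numbers, the cloud providers come out several times more expensive than the colocation baseline at 10 TB/month – which is the article’s “think twice before moving” point in miniature.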

  4. Tomi Engdahl says:

    Data Center Incident Reporting Network announced
    https://thestack.com/data-centre/2017/06/06/data-center-incident-reporting-network-announced/

    The UK Data Center Interest Group, a not-for-profit organization focused on data center technologies, best practices and policy, has announced the formation of the Data Center Incident Reporting Network (DCIRN).

    The incident reporting network will be a resource for operators to share information about data center failures confidentially so that the industry as a whole can learn from the failures that have occurred. The goal of the DCIRN is to improve the reliability of data centers worldwide by collecting and analyzing information related to failures.

  5. Tomi Engdahl says:

    HPE pushes for greater hybrid IT innovation
    https://thestack.com/cloud/2017/06/06/hpe-pushes-for-greater-hybrid-it-innovation/

    As HPE Discover 2017 opens in Las Vegas, the tech giant is looking to win over customers to its Gen10 suite of hybrid cloud solutions.

    The servers and storage arm of the now-split HP is standing by the argument that the majority of businesses are approaching their hybrid cloud infrastructure strategies in the wrong way.

    According to a report from The Register, Ric Lewis, SVP and general manager of HPE’s software-defined and cloud unit, claims that most private clouds are simply virtual machine (VM) farms: ‘It is not really a private cloud that seems like the public cloud where you have available services and you are maximizing on that.’

    HPE: You’re rubbish at hybrid cloud – so we’ll cook a NüStack to fix it
    Spinning up VMs is so 2010. What you need now are services
    https://www.theregister.co.uk/2017/06/06/hpe_youre_all_terrible_at_hybrid_it_and_were_the_only_ones_that_can_help/

    HPE Discover 2017 HPE is looking to win customers for its Gen10 suite of hybrid cloud enterprise IT platform by first offering them some tough love.

    The servers and storage half of the broken-up HP says most enterprises simply aren’t doing hybrid cloud infrastructure right.

    “If you look at the state of most private clouds, they are just VM farms,” said Ric Lewis, SVP and general manager of HPE’s software defined and cloud group.

    “It is not really a private cloud that seems like the public cloud where you have available services and you are maximizing on that.”

  6. Tomi Engdahl says:

    Tom Krazit / GeekWire:
    Google announces release of Spinnaker 1.0, an open-source multi-cloud continuous delivery platform

    Spinnaker, an open-source project for continuous delivery, hits the 1.0 milestone
    https://www.geekwire.com/2017/spinnaker-open-source-project-continuous-delivery-hits-1-0-milestone/

    Spinnaker, an open-source project that lets companies improve the speed and stability of their application deployment processes, reached the 1.0 release milestone Tuesday.

    Google announced the 1.0 release of Spinnaker, which was originally developed inside Netflix and enhanced by Google and a few other companies. The software is used by companies like Target and Cloudera to enable continuous delivery, a modern software development concept which holds that application updates should be delivered when they are ready, instead of on a fixed schedule.

    Spinnaker is just another one of the open-source projects that are at the heart of modern cloud computing

    Spinnaker is probably still best for early adopters, but continuous delivery in general is one of the many advances in software development enabled by cloud computing that will likely be an industry best practice in a few years.

    In an interesting move, Google took pains to highlight the cross-platform nature of Spinnaker, noting that it will run across several different cloud providers and application development environments. Google is chasing cloud workloads that tend to go to Amazon Web Services or Microsoft Azure, and noted “whether you’re releasing to multiple clouds or preventing vendor lock-in, Spinnaker helps you deploy your application based on what’s best for your business.”

  7. Tomi Engdahl says:

    Specsavers embraces Azure and AWS, recoils at Oracle’s ‘wow’ factor
    Warms IBM Watson for patient data probe
    https://www.theregister.co.uk/2017/06/13/specsavers_says_no_to_oracle_cloud/

    Oracle’s cloud has been judged too risky, too expensive and not up to scratch by Specsavers, which is aiming to complete an AWS and Azure combo next year.

    And, in another plus for Microsoft, Specsavers (the British optical retail chain) is adopting Office 365 over Google Docs, saying Microsoft is cheaper.

    The move comes as the 33-year-old retailer has cut its IT infrastructure and network of 200 legacy suppliers to just 25 in a modernisation project.

    Specsavers ditched a tapestry of accounts payable systems in favour of Oracle ERP – but Oracle failed to make the grade on cloud, meaning it will float on the enemy: AWS.

  8. Tomi Engdahl says:

    Christine Hall / Data Center Knowledge:
    Microsoft joins open source PaaS project Cloud Foundry Foundation as a gold member

    Microsoft Joins Hot Open Source PaaS Project Cloud Foundry
    http://www.datacenterknowledge.com/archives/2017/06/13/microsoft-joins-hot-open-source-paas-project-cloud-foundry/

    The Cloud Foundry Summit Silicon Valley opened in Santa Clara, California, today with announcements by the Cloud Foundry Foundation of a new certification for developers as well as a new member, Microsoft.

    Several years ago, news of Redmond shelling out bucks to become a card carrying member of an open source project would’ve been heresy. After all, this is a company that spent the better part of two decades doing its best to wipe open source — along with its flagship operating system Linux — off the face of the earth. Times have changed. Now the company openly professes its “love” for all things open. So much so that last year it pledged $500 million yearly to become a top tier Platinum member of the Linux Foundation.

    The change of heart was caused by the advent of the cloud, which moved “free” Linux and open source from being a major competitor to becoming products contributing greatly to the company’s bottom line. With Azure, the company sells — or more precisely rents — Linux as a service (which I’ll refrain from calling “LaaS”), along with OpenStack, Hadoop, Docker, Kubernetes, MongoDB and all of the other open source applications that are essential to enterprise IT.

    That list would include Cloud Foundry, a PaaS platform used in both private data centers and in the public cloud to quickly deploy network apps or services. The Cloud Foundry Foundation, which oversees the application’s development, is an independent not-for-profit Linux Foundation Collaborative Project.

    Microsoft joins the organization as a second-tier gold member, which will cost it $100,000 a year

    Abby Kearns, the Cloud Foundry’s executive director, indicated she thinks Redmond will be a good fit. “Microsoft is widely recognized as one of the most important enterprise technology and cloud providers in the world,” she said. “Cloud Foundry is the most widely deployed cloud application platform in the enterprise, and is used by most Fortune 500 organizations. We share both a tremendous number of users and a common approach to the enterprise cloud.”

    According to foundation figures, there are currently 250,000 job openings for software developers in the U.S., 500,000 unfilled positions requiring tech skills, and expected growth of more than one million in the next decade.

  9. Tomi Engdahl says:

    Jacob Kastrenakes / The Verge:
    Google says it plans to launch its full desktop backup tool, called Backup and Sync, for Google Drive on June 28, available as an app — Google is turning Drive into a much more robust backup tool. Soon, instead of files having to live inside of the Drive folder, Google will be able …

    Google Drive will soon back up your entire computer
    https://www.theverge.com/2017/6/14/15802200/google-backup-and-sync-app-announced-drive-feature

    Google is turning Drive into a much more robust backup tool. Soon, instead of files having to live inside of the Drive folder, Google will be able to monitor and backup files inside of any folder you point it to. That can include your desktop, your entire documents folder, or other more specific locations.
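    The “point it at any folder” model boils down to watching arbitrary directory trees for files that have changed since the last sync. A minimal sketch of that scan is below; nothing here is Google’s actual client code, just the general technique of a modification-time sweep:

```python
# Minimal sketch of "back up any folder you point it to": walk a
# directory tree and report files modified since the last sync.
# Conceptual illustration only, not Google's client code (real
# clients typically use filesystem change notifications instead
# of rescanning).
import os

def changed_since(root, last_sync_time):
    """Return paths under root modified after last_sync_time (epoch seconds)."""
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_sync_time:
                changed.append(path)
    return sorted(changed)
```

    Each sync cycle would upload whatever this scan returns and then record the new sync timestamp.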

    The backup feature will come out later this month, on June 28th, in the form of a new app called Backup and Sync. It sounds like the Backup and Sync app will replace both the standard Google Drive app and the Google Photos Backup app, at least in some cases. Google is recommending that regular consumers download the new app once it’s out, but it says that business users should stick with the existing Drive app for now.

    Backup and Sync from Google available soon
    https://gsuiteupdates.googleblog.com/2017/06/backup-and-sync-from-google-available.html

    On June 28th, 2017, we will launch Backup and Sync from Google, a tool intended to help everyday users back up files and photos from their computers, so they’re safe and accessible from anywhere. Backup and Sync is the latest version of Google Drive for Mac/PC, which is now integrated with the Google Photos desktop uploader. As such, it will respect any current Drive for Mac/PC settings in the Admin console.

  10. Tomi Engdahl says:

    TechCrunch:
    Microsoft buys Tel Aviv-based Cloudyn to incorporate the startup’s cloud management products into its portfolio; sources say the price was $50M-$70M — Back in April, we began hearing that Microsoft was in the process of buying Israeli cloud startup Cloudyn, a company that helps customers manage …

    Microsoft confirms Cloudyn acquisition, sources say price is between $50M and $70M
    https://techcrunch.com/2017/06/29/microsoft-finally-pulls-trigger-on-cloudyn-deal/

    Back in April, we began hearing that Microsoft was in the process of buying Israeli cloud startup Cloudyn, a company that helps customers manage their cloud billing across multiple clouds. It’s taken a while to work through the terms, but today Microsoft finally made it official.

    Sources tell TechCrunch the price was between $50 million and $70 million.

    In a company blog post today, Microsoft’s Jeremy Winter wrote, “I am pleased to announce that Microsoft has signed a definitive agreement to acquire Cloudyn, an innovative company that helps enterprises and managed service providers optimize their investments in cloud services.”

    As companies continue to pursue a multi-cloud strategy, this gives Microsoft a cloud billing and management solution that provides it with an advantage over competitors, particularly AWS and Google Cloud Platform.

  11. Tomi Engdahl says:

    Larry Dignan / ZDNet:
    Nutanix announces strategic partnership with Google Cloud and unveils new tools for hybrid cloud management

    Google Cloud Platform, Nutanix forge hybrid cloud strategic pact
    http://www.zdnet.com/article/google-cloud-platform-nutanix-forge-hybrid-cloud-strategic-pact/

    Google Cloud and ​Nutanix joint customers will be able to manage on-premises and public cloud infrastructure as one unified service.

  12. Tomi Engdahl says:

    Mary Jo Foley / ZDNet:
    Microsoft Azure Stack hardware, which allows customers to run private instances of Azure in their datacenters, is ready to order from Dell EMC, HPE and Lenovo — Microsoft’s first three server partners are starting to take orders for Microsoft’s hybrid-computing Azure Stack appliances.

    Microsoft Azure Stack is ready to order from Dell EMC, HPE, and Lenovo
    http://www.zdnet.com/article/microsoft-azure-stack-is-ready-to-order-from-dell-emc-hpe-and-lenovo/

    Microsoft’s first three server partners are starting to take orders for Microsoft’s hybrid-computing Azure Stack appliances.

    Microsoft’s original three server partners for its Azure Stack hybrid computing appliance are officially taking orders as of today, July 10.

    Microsoft hasn’t yet made the final Azure Stack code available to Dell EMC, HPE, and Lenovo, but the three are expecting to start shipping their first Azure Stack servers to customers within the next couple of months.

    Update: A Microsoft spokesperson said today, July 10, that its partners are now “in validation mode based on the code shipped in the Azure Stack Development Kits (ASDK).” The Azure Stack servers should begin shipping to customers starting in September 2017.

    Microsoft officials recently made public a downloadable Azure Stack pricing and licensing datasheet, signaling the product’s imminent availability.

    Microsoft is describing Azure Stack as “an extension of Azure.” After the initial purchase of Azure Stack, customers will only pay for Azure services that they use from general availability, forward (“pay-as-you-use” pricing). The current one-node offering meant for dev/test will continue to be free after general availability.

    Microsoft is touting Azure Stack as a truly consistent hybrid-cloud platform. It will allow users to use Azure public cloud services against data stored in Azure Stack on premises, and deploy the same Azure-services-based applications on both the public Azure cloud and Azure Stack.

    https://t.co/5oUnU5hC3m

