Who's who of the cloud market

Seemingly every tech vendor has a cloud strategy, with new products and services dubbed “cloud” coming out every week. But who are the real market leaders in this business? Research firm Gartner’s answer lies in its Magic Quadrant report for the infrastructure as a service (IaaS) market, presented in the Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article.

It is interesting that missing from this quadrant figure are big-name companies that have invested a lot in the cloud, including Microsoft, HP, IBM and Google. The reason is that the report only includes providers whose IaaS clouds were in general availability as of June 2012 (Microsoft, HP and Google had clouds in beta at the time).

Gartner reinforces what many in the cloud industry believe: Amazon Web Services is the 800-pound gorilla. Gartner also found one big minus for Amazon Web Services: AWS has a “weak, narrowly defined” service-level agreement (SLA), which requires customers to spread workloads across multiple availability zones. AWS was not the only provider about which Gartner had something negative to say regarding service-level agreement details.

Read the whole Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article to see Gartner’s view on the cloud market today.

1,065 Comments

  1. Tomi Engdahl says:

    Amazon AWS S3 outage is breaking things for a lot of websites and apps
    https://techcrunch.com/2017/02/28/amazon-aws-s3-outage-is-breaking-things-for-a-lot-of-websites-and-apps/

    Amazon’s S3 web-based storage service is experiencing widespread issues, leading to service that’s either partially or fully broken on websites, apps and devices that rely upon it. The AWS offering provides image hosting for a lot of sites, and also hosts entire websites and app backends, including Nest.

    The S3 outage is due to “high error rates with S3 in US-EAST-1,” according to Amazon’s AWS service health dashboard, which is where the company also says it’s working on “remediating the issue,” without initially revealing any further details.

    Affected websites and services include Quora, newsletter provider Sailthru, Business Insider, Giphy, image hosting at a number of publisher websites, filesharing in Slack, and many more. Connected lightbulbs, thermostats and other IoT hardware is also being impacted, with many unable to control these devices as a result of the outage.

    Amazon S3 is used by around 148,213 websites, and 121,761 unique domains, according to data tracked by SimilarTech, and its popularity as a content host is concentrated in the U.S. It’s used by 0.8 percent of the top 1 million websites, which is actually quite a bit smaller than CloudFlare, which is used by 6.2 percent of the top 1 million websites globally – and yet it’s still having this much of an effect.

  2. Tomi Engdahl says:

    [RESOLVED] Increased Error Rates for Amazon S3

    Update at 2:08 PM PST: As of 1:49 PM PST, we are fully recovered for operations for adding new objects in S3, which was our last operation showing a high error rate. The Amazon S3 service is operating normally.

    Source: https://status.aws.amazon.com/

  3. Tomi Engdahl says:

    Cloud to account for 92% of data center traffic by 2020: Cisco
    http://www.cablinginstall.com/articles/pt/2016/11/cloud-to-account-for-92-of-data-center-traffic-by-2020-cisco.html

    In the latest edition of its “Global Cloud Index” report, Cisco collected data from organizations like Gartner, IDC, Juniper Research, Ovum, Synergy, ITU, and the United Nations, and combined it with its own networking metrics. According to the results, an estimated 68 percent of cloud workloads will be deployed in public cloud data centers by 2020, up from 49 percent in 2015.

  4. Tomi Engdahl says:

    Survey finds 25% of healthcare organizations put patient data at risk in the public cloud
    http://www.cablinginstall.com/articles/2017/02/hytrust-healthcare-survey.html?cmpid=enl_cim_cimdatacenternewsletter_2017-02-28

    HyTrust Inc., a provider of technology that automates security controls for software-defined computing, networking and storage workloads, has announced its latest Cloud Survey report, analyzing healthcare organizations’ use of the public cloud, the utilization of public cloud implementations, and how data is protected in these cloud environments. The survey of 51 healthcare and biotech organizations found that 25 percent of healthcare organizations using the public cloud do not encrypt their data.

    HyTrust — whose stated mission is “to make private, public and hybrid cloud infrastructure more trustworthy for enterprises, service providers and government agencies” — says the survey also found that 63 percent of healthcare organizations say they intend to use multiple cloud vendors. “Multi-cloud adoption continues to gain momentum among leading healthcare organizations,”

    “choosing a flexible cloud security solution that is effective across multiple cloud environments is not only critical to securing patient data, but to remaining HIPAA-compliant. What is troubling is that 38 percent of organizations that have data deployed in a multi-cloud environment that included Amazon Web Services (AWS) and [Microsoft] Azure are not using any form of encryption. This vulnerability comes as 82 percent of healthcare organizations believe security is their top concern, followed by cost.”

    Key findings of the survey also included the following bullet points: 63 percent of healthcare organizations are currently using the public cloud; 25 percent of healthcare organizations using the public cloud are not encrypting their data; 63 percent of healthcare IT decision makers intend to use multiple cloud vendors.
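
    The encryption gap the survey describes is notable because encrypting cloud objects at rest is often a one-parameter change in the SDK. A minimal sketch, assuming boto3 with configured AWS credentials (the bucket and key names are hypothetical, and HIPAA compliance of course involves far more than this single setting):

    ```python
    # Minimal sketch: ask S3 to encrypt the object at rest (AES-256) at
    # upload time, so the stored copy is not plaintext even when no
    # bucket-wide default encryption has been configured.
    import boto3

    s3 = boto3.client("s3")

    def upload_encrypted(bucket: str, key: str, payload: bytes) -> None:
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=payload,
            ServerSideEncryption="AES256",  # server-side encryption at rest
        )

    # Hypothetical bucket/key for illustration.
    upload_encrypted("example-phi-bucket", "records/patient-123.json", b"{}")
    ```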

  5. Tomi Engdahl says:

    The deep blue, I mean, the deep Azure sky before me
    https://www.mentor.com/embedded-software/blog/post/the-deep-blue-i-mean-the-deep-azure-sky-before-me-6add59a5-e276-4408-8a4b-30b67cde748e?contactid=1&PC=L&c=2017_02_28_esd_newsletter_update_v2

    Businesses are indeed implementing various IoT systems and collecting data from the devices in those systems. Of course, they’ve been doing this for some time now, but today’s technology enablement and business pressures are pushing them to collect more data, and to use that data in advanced analytics for functions like predictive or prescriptive maintenance – and eventually for machine learning. At the basis of these systems are smart devices. One keynote presenter at ARC made a specific point that the intelligent factories of the future would not be possible without these smart devices, which provide the data and information that enable the advanced analytics. So, it all starts with smart devices.

    Regarding the commercial cloud, it’s very clear that Microsoft Azure is the choice for both plant operators and the manufacturers of the equipment. Azure was mentioned many times in keynotes and sessions. In talking to one veteran editor about our Azure strategy, he simply said “everybody here is using Azure.” While talking about Azure to one of Mentor’s current customers, he told me that “Azure is the only cloud for industrial businesses.”

    One element of our investment is to integrate the Microsoft Azure software development kits (SDKs) with our Mentor Embedded Linux and Nucleus real-time operating system (RTOS) platforms. This integration provides device manufacturers and their downstream customers with integrated and intrinsic connectivity to the Azure cloud.

    Once connectivity is established, data can be pushed seamlessly from the smart edge device to the Azure IoT hub. This connectivity makes the data available to the massive breadth of Microsoft’s cloud services, which can then be leveraged by customer-specific advanced analytics and cloud applications. Why do device manufacturers care? Well, it gets back to the competitive situation: they need to focus their scarce resources on differentiated functionality, reduced risk, and how to get to market quickly.
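
    To make that device-to-cloud flow concrete: Mentor's integration is with the Azure IoT SDKs on Mentor Embedded Linux and Nucleus, but the same connect-and-send pattern can be sketched with Microsoft's azure-iot-device Python package. The connection string and telemetry fields below are hypothetical placeholders.

    ```python
    # Illustrative sketch only (not Mentor's embedded integration): push one
    # telemetry sample from an edge device to an Azure IoT hub.
    import json
    from azure.iot.device import IoTHubDeviceClient, Message

    # Placeholder connection string; a real device gets its own credentials.
    CONN_STR = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

    client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
    client.connect()

    # One sample of the kind of machine data used for predictive maintenance.
    sample = {"machine_id": "press-07", "vibration_mm_s": 4.2, "temp_c": 61.5}
    client.send_message(Message(json.dumps(sample)))  # lands in the IoT hub

    client.disconnect()
    ```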

    In summary, Mentor’s platforms integrated with Microsoft Azure SDKs combined with our ability to provide deeply embedded device information to customers makes the smart device even smarter. Because Microsoft Azure is the cloud of choice for the industrial automation market, we are strategically aligned to help our customers and their downstream customers be more successful in the realization phase of their digitalized systems and business models.

  6. Tomi Engdahl says:

    The Amazon S3 Outage Is What Happens When One Site Hosts Too Much of the Internet
    https://www.wired.com/2017/02/happens-one-site-hosts-entire-internet/

    If you’ve been having trouble using some of your favorite apps today, you’re not alone. Users have reported trouble with sites and apps like Medium, Slack, and Trello.

    The problems seem to stem from trouble with Amazon’s cloud storage service S3, which Amazon confirmed is experiencing “high error rates,” particularly on the East Coast. Several other Amazon services appear to be having problems as well, but countless sites rely on S3 to host images and other files. Even Amazon’s site itself relies on S3, leading to some baffling updates from the company.

    The outages bring to mind the attack on an internet company called Dyn last October that brought much of the web to its knees. Technologically, the S3 outage doesn’t bear much resemblance to the Dyn incident, but the effect is similar: So many sites and apps are down that it feels almost like the internet itself is malfunctioning. That flies right in the face of the promise of the internet.

    As the Amazon outage and the attack on Dyn prove, the internet is actually pretty brittle.

    The “winner takes all” dynamic of the tech industry concentrates more and more power into fewer and fewer companies. That consolidation has implications for competition but also affects the resilience of the internet itself. So many people rely on Gmail that when the service goes down, it’s as if email itself has gone offline, even though countless other email providers exist. Facebook is practically synonymous with the internet for many people all over the world.

    Amazon plays its own outsized role. Amazon won’t say exactly how big its cloud is, but in 2012 one analyst estimated that Amazon hosted around 1 percent of the entire web. It has only grown since then.

    The S3 storage service alone hosts about 1.6 times more data than its major competitors combined, according to the analyst firm Gartner.

    Even many sites not fully hosted by Amazon take advantage of its CloudFront service.

    According to the firm Datanyze, CloudFront is by far the most widely used service of its kind. Meanwhile, Google and Microsoft—two other giants—have emerged as Amazon’s major cloud competitors.

    Amazon’s cloud itself relies on the decentralization of the internet. It has servers all over the world, though customers generally pick which regions to host their data. Even within a region, Amazon has multiple data centers in case one goes offline. But Amazon occasionally runs into problems that knock out services for an entire region.

  7. Tomi Engdahl says:

    Massive Amazon cloud service outage disrupts sites
    http://www.usatoday.com/story/tech/news/2017/02/28/amazons-cloud-service-goes-down-sites-scramble/98530914/

    A number of websites became unavailable Tuesday after Amazon’s website hosting service went down unexpectedly. Though the majority of sites affected have since gone back online, some appear to still be facing issues. AP

  8. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    GitHub debuts $21/mo. Business offering that brings SAML single sign-on, user provisioning, and more to GitHub.com rather than being hosted on corporate servers — GitHub is expanding its offering for large companies today. The service, which allows developers to more effectively collaborate …

    GitHub brings its enterprise service to the cloud
    https://techcrunch.com/2017/03/01/github-brings-its-enterprise-service-to-the-cloud/

    GitHub is expanding its offering for large companies today. The service, which allows developers to more effectively collaborate and share their source code, already offered an enterprise version of its tools that large companies could host in their own data centers, AWS or Azure. Now, it is launching a new hosted version of GitHub that, just like the enterprise version, will cost $21 per user per month.

  9. Tomi Engdahl says:

    $310m AWS S3-izure: Why everyone put their eggs in one region
    Lessons learned from Tuesday’s cloud, er, fog storage mega-failure
    https://www.theregister.co.uk/2017/03/02/aws_s3_meltdown/

    The system breakdown – or as AWS put it, “increased error rates” – knocked out a single region of the AWS S3 storage service on Tuesday. That in turn brought down AWS’s hosted services in the region, preventing EC2 instances from launching, Elastic Beanstalk from working, and so on. In the process, organizations from Docker and Slack to Nest, Adobe and Salesforce.com had some or all of their services knocked offline for the duration.

    According to analytics firm Cyence, S&P 500 companies alone lost about $150m (£122m) from the downtime, while financial services companies in the US dropped an estimated $160m (£130m).

    The epicenter of the outage was one region on the east coast of America: the US-East-1 facility in Virginia. Due to its lower cost and familiarity to application programmers, that one location is an immensely popular destination for companies that use AWS for their cloud storage and virtual machine instances.

    Coders are, ideally, supposed to spread their software over multiple regions so any failures can be absorbed and recovered from. This is, to be blunt, too difficult to implement for some developers; it introduces extra complexity which means extra bugs, which makes engineers wary; and it pushes up costs.

    For instance, for the first 50TB, S3 storage in US-East-1 costs $0.023 per GB per month compared to $0.026 for US-West-1 in California. Transferring information between apps distributed across multiple data centers also costs money:

    Then there are latency issues, too

    “Being the oldest region, and the only public region in the US East coast until 2016, it hosts a number of their earliest and largest customers,”

    After US-East-1’s cloud buckets froze and services vanished, some developers discovered their code running in other regions was unable to pick up the slack for various reasons.

    “It is hard to say exactly what happened, but I would speculate that whatever occurred created enough of an issue that multiple sites attempted to fail over to other zones or regions simultaneously,” Charles King, principal analyst with Pund-IT, told El Reg.

    “It’s like trying to pour one hundred gallons of water through a one gallon hose, and you end up with what looks like a massive breakdown.”

    The takeaway, say the industry analysts, is that companies should consider building redundancy into their cloud instances just as they would for on-premises systems. This could come in the form of setting up virtual machines in multiple regions or sticking with the hybrid approach of keeping both cloud and on-premises systems. And, just like testing backups, companies should test that failovers actually work.

    While the outage will probably do little to slow the move of companies into cloud services, it could give some a reason to pause, and that might not be a bad thing.

    “The biggest takeaway here is the need for a sound disaster recovery architecture and a plan that meets the needs and constraints of the application. This may be through usage of multiple regions, multiple clouds, or other fallback configurations.”
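
    A minimal sketch of the multi-region fallback the analysts describe, assuming boto3 and two buckets already replicated across regions (the bucket names are hypothetical); as the article stresses, a path like this only earns its keep if the failover is actually exercised:

    ```python
    # Hypothetical illustration: read an object from the primary region,
    # falling back to a replica bucket in another region when a call fails.
    import boto3
    from botocore.exceptions import BotoCoreError, ClientError

    # Primary first, then the cross-region replica.
    REPLICAS = [("us-east-1", "assets-use1"), ("us-west-1", "assets-usw1")]

    def fetch_with_failover(key: str) -> bytes:
        last_err = None
        for region, bucket in REPLICAS:
            try:
                s3 = boto3.client("s3", region_name=region)
                return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            except (BotoCoreError, ClientError) as err:
                last_err = err  # region unhealthy; try the next replica
        raise RuntimeError("all configured regions failed") from last_err
    ```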

  10. Tomi Engdahl says:

    Nat Levy / GeekWire:
    Amazon says AWS outage was caused by human error during routine server maintenance, will make changes to prevent future problems

    Amazon explains big AWS outage, says employee error took servers offline, promises changes
    http://www.geekwire.com/2017/amazon-explains-massive-aws-outage-says-employee-error-took-servers-offline-promises-changes/

    Amazon has released an explanation of the events that caused the big outage of its Simple Storage Service Tuesday, also known as S3, crippling significant portions of the web for several hours.

    Amazon said the S3 team was working on an issue that was slowing down its billing system. Here’s what happened, according to Amazon, at 9:37 a.m. Pacific, starting the outage: “an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.”

    Those servers affected other S3 “subsystems,” one of which was responsible for all metadata and location information in the Northern Virginia data centers. Amazon had to restart these systems and complete safety checks, a process that took several hours. In the interim, it became impossible to complete network requests with these servers. Other AWS services that relied on S3 for storage were also affected.

    About three hours after the issues began, parts of S3 started to function again. By about 1:50 p.m. Pacific, all S3 systems were back to normal. Amazon said it has not had to fully reboot these S3 systems for several years, and the program has grown extensively since then, causing the restart to take longer than expected.

    Amazon said it is making changes as a result of this event, promising to speed up recovery time of S3 systems.
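
    Amazon describes those safeguards only at a high level. Purely as an illustration of the idea (this is not Amazon's tooling, and every name below is invented), a capacity-removal playbook can refuse any single input that would take out more than a small fraction of a fleet:

    ```python
    # Hypothetical input guard for a capacity-removal command: cap how much
    # of the fleet one invocation may remove, so a mistyped input cannot
    # take an entire subsystem below its minimum capacity.
    MAX_REMOVAL_FRACTION = 0.05  # assumed per-invocation safety limit

    def validate_removal(fleet_size: int, requested: int) -> None:
        limit = max(1, int(fleet_size * MAX_REMOVAL_FRACTION))
        if requested > limit:
            raise ValueError(
                f"refusing to remove {requested} of {fleet_size} servers; "
                f"per-invocation limit is {limit}"
            )

    validate_removal(fleet_size=1000, requested=30)    # passes
    # validate_removal(fleet_size=1000, requested=300) # would raise
    ```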

  11. Tomi Engdahl says:

    Matt Weinberger / Business Insider:
    Amazon Games VP Mike Frazzini on how Twitch helps game developers sell more, integration with AWS, AWS’ Lumberyard platform, and why Amazon built its own game

    Amazon’s video game boss just explained where its $970 million Twitch purchase fits into its most profitable business
    http://www.businessinsider.com/amazon-vp-mike-frazzini-explains-twitch-and-amazon-web-services-2017-3?op=1&r=US&IR=T&IR=T

    Why it would spend so much cash for Twitch was a real head-scratcher — live game broadcasts on the internet aren’t exactly what you would call core to Amazon’s retail business.

    Things got murkier in late 2016 when Amazon announced that it was getting into the game business directly with three new PC titles, starting with multiplayer brawler “Breakaway.”

    The big question at Amazon, he says, is “Who are our customers and how do we help them?” In the case of game developers, “they want to spend as much time as possible on creative and as little as possible on everything else.”

    To that end, Amazon is exploring what Frazzini calls “two fascinating frontiers” with regards to games: “Crowd” and “cloud.” Which is where Twitch and the $12 billion Amazon Web Services cloud computing behemoth come in, and where they play so well together.

    “Games have always been about communities,” says Frazzini.

    That means opportunity for Amazon, which has been giving game developers new ways to hook Twitch features into their games.

    Those features help developers turn their games into durable businesses. A vibrant community keeps a game alive, driving sales of the core experience and any other premium content that comes later. And those communities are increasingly born on Twitch.

    Those Twitch integrations are also a big part of the “cloud” piece of the puzzle. Amazon Web Services is already Amazon’s most profitable unit, offering access to fundamentally unlimited supercomputing power on a pay-as-you-go basis.

    Game studios, Fortune 500 companies, and pretty much every other type of software-related business are at least looking at cloud services from AWS, or its rivals, Microsoft Azure and Google Cloud.

    Amazon’s forthcoming Breakaway is built on AWS Lumberyard with all kinds of Twitch integrations to make it more appealing to streamers.

  12. Tomi Engdahl says:

    Google opens cloudy cannery to let you cram code into containers
    ‘Cloud Container Builder’ offers 120 minutes of container creation, for any platform
    https://www.theregister.co.uk/2017/03/07/google_cloud_container_builder/

    Google’s found another way to wrap developers more closely into its warm embrace: a cloudy software build environment it reckons should be free for most users.

    The new “Cloud Container Builder” has reached general availability status after a year running Google App Engine’s gcloud app deploy operation.

    Described as a “stand-alone tool for building container images regardless of deployment environment”, Cloud Container Builder’s sweetener is 120 minutes a day of free build time. If you need more, it’ll set you back US$0.0034 per minute.

    The Chocolate Factory reckons this means “most users” can move builds to the cloud free, and get rid of the overhead of managing their own build servers.

    Specs of Cloud Container Builder include:

    A REST API and a gcloud command line interface;
    Two additions to the Google Cloud console, so users can track their build history, and create build triggers.

    “Build triggers lets you set up automated CI/CD workflows that start new builds on source code changes. Triggers work with Cloud Source Repository, Github, and Bitbucket on pushes to your repository based on branch or tag”, its blog note says.

    Not everybody wants Docker, so Mountain View also supports open source builders for “languages and tasks like npm, git, go and the gcloud command line interface”, and DockerHub images like “Maven, Gradle and Bazel work out of the box.”

    Google Cloud Container Builder: a fast and flexible way to package your software
    https://cloudplatform.googleblog.com/2017/03/Google-Cloud-Container-Builder-a-fast-and-flexible-way-to-package-your-software.html?m=1
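
    From the figures quoted above (120 free build-minutes per day, then US$0.0034 per minute), the overage cost is easy to estimate; a quick sketch:

    ```python
    # Back-of-the-envelope Cloud Container Builder cost from the quoted
    # rates: 120 free build-minutes per day, $0.0034/minute beyond that.
    FREE_MINUTES_PER_DAY = 120
    RATE_USD_PER_MINUTE = 0.0034

    def monthly_build_cost(minutes_per_day: float, days: int = 30) -> float:
        overage = max(0.0, minutes_per_day - FREE_MINUTES_PER_DAY)
        return overage * RATE_USD_PER_MINUTE * days

    print(monthly_build_cost(300))  # 180 overage min/day -> about $18.36/month
    ```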

  13. Tomi Engdahl says:

    Todd Bishop / GeekWire:
    Media rendering firm Thinkbox announces that it has been acquired by Amazon, to join AWS — Amazon Web Services has acquired Thinkbox Software, which makes technology used by media and entertainment architects and engineers to manage render farms — large systems for processing computer-generated graphics and video.

    Amazon Web Services makes another deal, acquiring Thinkbox media rendering tech company
    http://www.geekwire.com/2017/amazon-web-services-acquires-thinkbox-software-media-rendering-technology-company/

    Amazon Web Services has acquired Thinkbox Software, which makes technology used by media and entertainment architects and engineers to manage render farms — large systems for processing computer-generated graphics and video.

    “We’ll be joining the Amazon Web Services family, and we’re looking forward to working together to deliver exciting customer offerings,” wrote Thinkbox in an announcement on its site today. “At this point, it’s still business as usual for us. We’ll continue to provide you, our customers, with remarkable support whether you work on-prem, in the cloud or both.”

  14. Tomi Engdahl says:

    Google Cloud touts major enterprise customers as it runs to catch up to AWS
    http://www.zdnet.com/article/google-cloud-touts-major-enterprise-customers-as-it-runs-to-catch-up-to-aws/

    At Google Next, Google shows off several major enterprise customers, illustrating that it’s a viable player in a multicloud world.

    It was only six months ago that Google rebranded its enterprise business as Google Cloud. On Wednesday at the Google Next conference in San Francisco, Google Cloud SVP Diane Greene touted Google Cloud’s enterprise clout, trotting out major customers signing onto the Google Cloud Platform (GCP), like Colgate-Palmolive, Home Depot and Disney.

    “Does anybody not agree it’s the biggest thing going on in IT right now?” Greene said of the cloud. In addition to seeing “unbelievable acceleration” in Google’s cloud business, “the quality of our customer conversations are really changing,” she said.

  15. Tomi Engdahl says:

    Encoding.com: Cloud In, Flash Out
    http://www.btreport.net/articles/2017/03/encoding-com-cloud-in-flash-out.html?cmpid=enl_btr_weekly_2017-03-09

    According to Encoding.com, cloud-based media processing is on the rise, and the Flash video format is in decline. The company’s third annual report analyzes trends in video formats for OTT, MVPD, web and mobile distribution. Among the findings:

    Cloud-based media processing has grown significantly due to faster transit speeds and other factors.
    Flash continued to decline in usage with projections to disappear within 12 months.
    VP9 made a strong debut in 2016 as an alternative to H.264, while HEVC usage decreased by 50% this year.
    Despite the excitement around 4k screen resolutions, 1080p continued to prevail.
    HLS continued to be the dominant standard for adaptive bitrate (ABR) streaming, making up 71% of total ABR processing volume.
    Amazon S3 led in cloud storage market share with Akamai following behind.

  16. Tomi Engdahl says:

    Google’s Compute Engine Now Offers Machines With Up To 64 CPU Cores, 416GB of RAM
    https://developers.slashdot.org/story/17/03/09/2114256/googles-compute-engine-now-offers-machines-with-up-to-64-cpu-cores-416gb-of-ram?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29


    Google’s Compute Engine now offers machines with up to 64 CPU cores, 416GB of RAM
    https://techcrunch.com/2017/03/09/googles-compute-engine-now-offers-machines-with-up-to-64-cpu-cores-416gb-of-ram/

    Google is doubling the maximum number of CPU cores developers can use with a single virtual machine on its Compute Engine service from 32 to 64. These high-power machines are now available in beta across all of Google’s standard configurations and as custom machine types, which allow you to select exactly how many cores and memory you want.

    If you opt to use 64 cores in Google’s range of high-memory machine types, you’ll also get access to 416GB of RAM. That’s also twice as much memory as Compute Engine previously offered for a single machine and enough for running most memory-intensive applications, including high-end in-memory databases.

    Running your apps on this high-memory machine will set you back $3.7888 per hour (though you do get all of Google’s usual sustained-use discounts if you run it for longer, too).
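
    At that quoted on-demand rate, a full month on the 64-core, 416GB machine is straightforward to estimate (before the sustained-use discounts mentioned above):

    ```python
    # Rough monthly cost of the 64-core / 416GB high-memory machine at the
    # quoted $3.7888/hour, ignoring sustained-use discounts.
    RATE_USD_PER_HOUR = 3.7888

    hours_per_month = 24 * 30
    print(f"{RATE_USD_PER_HOUR * hours_per_month:,.2f} USD")  # about 2,727.94
    ```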

  17. Tomi Engdahl says:

    Jordan Novet / VentureBeat:
    Google’s Firebase adds support for Google Cloud Functions and gets closer integration with Google Cloud Storage

    Google enhances Firebase with Cloud Functions support, streamlined terms of service
    http://venturebeat.com/2017/03/09/google-enhances-firebase-with-cloud-functions-support-streamlined-terms-of-service/

    At its Google Cloud Next conference in San Francisco today, Google is announcing updates to its Firebase portfolio of mobile-oriented cloud services.

    First, Firebase is getting an integration with the Google Cloud Functions serverless event-driven computing service. Google unveiled Cloud Functions last year, following public cloud market leader Amazon Web Services’ (AWS) launch of Lambda, but shortly before Microsoft unveiled Azure Functions.

  18. Tomi Engdahl says:

    In 2012 China vowed ‘OpenStack will smash the monopoly of western cloud providers!’
    And in 2017 Huawei replaced HPE as a Platinum Member of OpenStack Foundation
    https://www.theregister.co.uk/2017/03/14/huawei_become_platinum_openstack_member/

    “There’s an obvious trend to use OpenStack in China, especially in telcos,” Goode told The Register. He also sees upside in Huawei becoming a Platinum member because the company is the first Asian entity to achieve the status and “seems serious about growing the thing.”

    It’s not hard to see why Huawei sought and won Platinum status: its FusionSphere and FusionSphere Cloud rest on OpenStack. The company has also decided its enterprise business will boom, an ambition China’s government will do its best to realise with friendly purchasing policies.

    And HPE? We know it’s killed its Helion OpenStack cloud and pledged its public cloud future to Microsoft, as an Azure-first cloud partner and day one supplier of Azure Stack boxen.

    We also know that Cisco has killed off its “Intercloud” OpenStack public cloud effort, and that Azure, AWS, Google and Bluemix all have data centres in China. So perhaps OpenStack hasn’t quite managed to “smash the monopoly of the western cloud providers” just yet.

  19. Tomi Engdahl says:

    Google steps over AWS, Microsoft Azure cloud data centers as first with Intel’s Xeon Skylake chips
    http://www.cablinginstall.com/articles/pt/2017/02/google-steps-over-aws-microsoft-azure-cloud-data-centers-as-first-with-intel-s-xeon-skylake-chips.html?cmpid=enl_cim_cimdatacenternewsletter_2017-03-14

    Google Inc. has stolen a march on public cloud rivals Amazon Web Services and Microsoft Azure by becoming the first provider to bring Intel Corp’s next-generation Xeon Skylake chips to its data centers. The move comes following Google’s announcement last November that it was planning to incorporate Intel’s next line of server chips into its public cloud infrastructure.

    Google Cloud Platform is the first cloud provider to offer Intel Skylake
    Friday, February 24, 2017
    https://cloudplatform.googleblog.com/2017/02/Google-Cloud-Platform-is-the-first-cloud-provider-to-offer-Intel-Skylake.html

    I’m excited to announce that Google Cloud Platform (GCP) is the first cloud provider to offer the next generation Intel Xeon processor, codenamed Skylake.

    Customers across a range of industries, including healthcare, media and entertainment and financial services ask for the best performance and efficiency for their high-performance compute workloads. With Skylake processors, GCP customers are the first to benefit from the next level of performance.

    Skylake includes Intel Advanced Vector Extensions (AVX-512), which make it ideal for scientific modeling, genomic research, 3D rendering, data analytics and engineering simulations. When compared to previous generations, Skylake’s AVX-512 doubles the floating-point performance for the heaviest calculations.

  20. Tomi Engdahl says:

    Microsoft fires up storage-optimised Azure instances
    32 cores on a Xeon E5 V3 with up to 5.6TB of SSD, counted in gigabytes or gibibytes
    https://www.theregister.co.uk/2017/03/16/microsoft_fires_up_storageoptimised_azure_instances/

    Microsoft’s decided Azure needs virtual machines optimised for storage, so has given us all the new L-series to play with.

    They’re all running Xeon E5 V3 CPUs and go from four cores, 32 GiB of RAM and 678 GB of local SSD up to 32 cores, 356 GiB RAM and “over 5.6 TB of local SSD.”

    Microsoft says the new instance type is for “workloads that require low latency, such as NoSQL databases (e.g. Cassandra, MongoDB, Cloudera and Redis).” But The Register can’t help but think they’d also make for pretty decent virtual arrays, not least because Microsoft recently told us it has ambitions to bring more virtual arrays to Azure, perhaps to make it a more attractive destination for hybrid cloud storage.

  21. Tomi Engdahl says:

    Larry Dignan / ZDNet:
    Adobe launches Experience Cloud for enterprises, and unveils Advertising Cloud to help companies manage ads across search, social, mobile, and TV

    Adobe launches Experience Cloud, aims to bridge from marketing to more parts of the enterprise
    http://www.zdnet.com/article/adobe-launches-experience-cloud-aims-to-bridge-from-marketing-to-more-parts-of-the-enterprise/

    Adobe is betting its marketing and analytics knowhow will apply to a broader part of the enterprise. Adobe will now compete more directly with Salesforce, Oracle, IBM and others.

    Adobe is launching its Experience Cloud, which combines parts of its marketing, analytics and content tools, with the aim of broadening its footprint for more roles and functions of an enterprise.

    And with the launch of Experience Cloud, Adobe is going to compete more directly with the likes of Oracle and Salesforce, two marketing and analytics players focused on customer experiences.

    The analytics and data driven approaches used by marketers are touching more parts of the business. “As we dig into this further, we can extend personalization into other parts of the business,” said Lindsay.

    This turf is well trodden. Oracle, Salesforce, Microsoft and IBM all have suites focused on customer experience. The catch is that each of these players comes at it from its own approach. Adobe’s Lindsay hopes the secret sauce for Adobe is its approach to data, content and analytics.

    Advertising Cloud will synchronize cookies across its cloud to better track inventory and brand safety to make sure ads run in the right places.

  22. Tomi Engdahl says:

    Microsoft cloud TITSUP: Skype, Outlook, Xbox, OneDrive, Hotmail down
    Total Inability To Skype Ur Parents
    https://www.theregister.co.uk/2017/03/21/microsoft_skype_outlook_onedrive_xbox_outage/

    Microsoft cloud services have dived offline, taking down Outlook, Hotmail, OneDrive, Skype, and Xbox Live.

    The problems appear to have started on Tuesday morning Pacific Time, although systems could have started to wobble earlier: basically, people were and still are unable to log into their Microsoft-hosted services.

    “Users may be intermittently unable to sign in to the service,” the Outlook.com status page admitted.

    At 1600 PT (2300 UTC), Skype, Outlook and Hotmail, and Xbox Live were back up and running, it seems. OneDrive is still knackered. “We’ve determined that the previously resolved issue had some residual impact to the service configuration for OneDrive.”

  23. Tomi Engdahl says:

    IBM wipes away tiers to join cloud storage price wars
    Also ties up with NetApp, Veritas, Red Hat, gets hot about hybrid everything
    https://www.theregister.co.uk/2017/03/22/ibm_wipes_away_tiers_to_join_cloud_storage_price_wars/

    IBM’s decided to play the “our cloud storage is even cheaper than your cloud storage” game, but by a different set of rules.

    Big Blue’s decided that tiered cloud storage assumes users make decisions based on unusually-potent-and-accurate insights into their long-term data use patterns, and therefore commit themselves to a tier of storage in the hope it’s the cheapest place to store that data. But IBM reckons modern analytics and cognitive tools mean you’ll probably start to consider archival or near-archival data more often, which it thinks will make you look pretty foolish if you have to pay to haul a heap of it out of AWS Glacier or Google Nearline to do some work.

    IBM’s therefore cooked up what it calls “Flex” storage. As the draft pricing table below shows, the product offers a flat price for cloud storage regardless of how much you access it. Big Blue’s betting that over time you’ll end up ahead, even if the per-gigabyte-per-month price is higher than that offered by rivals. The company also thinks you’ll appreciate just tossing data into the cloud, rather than having to ponder just what tier and just what region to choose.

    Big Blue’s been busy on the storage front with two other deals. One will see NetApp’s AltaVault product gain the ability to send backups to IBM Cloud Object Storage. That data could lodge in the new Cold Vault IBM’s added to its cloudy Object Storage service. Cold Vault is akin to AWS Glacier.

  24. Tomi Engdahl says:

    Google exec warns optical networking ceiling could stall cloud growth
    http://www.cablinginstall.com/articles/pt/2017/03/google-exec-warns-optical-networking-ceiling-could-stall-cloud-growth.html?cmpid=enl_cim_cimdatacenternewsletter_2017-03-27

    At the core of all networks today are optical networks which serve as the backbone of modern connectivity. At this week’s OFC 2017 in Los Angeles, a prominent Google executive presented a talk detailing how the hyperscale network works today — and why current optical technologies need to improve, dramatically.

    Hölzle explained that Google’s cloud network started with a basic co-location approach. It then evolved to the current stage which he referred to as Cloud 2.0, which is all about virtual machines and the services that run on top of them. What’s just starting to evolve now is Cloud 3.0, which is another layer of abstraction and the beginning of serverless services.

    “You don’t even see the server or the network anymore,” Hölzle said. As the serverless network scales globally, Hölzle said that Google is now looking for step functions and 10x improvements in network capacity. He noted that if demand doubles every year, then a 10x improvement only provides a little more than three years of room for growth (since 2^3.3 ≈ 10).

    The current model for Google’s networking is to use pluggable optics, which today are commercially available in 100 Gigabit Ethernet speeds. “It’s working but it (100 GbE) is also really bottlenecking what we do,” Hölzle said. “Both the power and the cost of the solution is on the edge of what is possible.”

    Google Warns Optical Networking Limitations Could Hinder Cloud Growth
    http://www.enterprisenetworkingplanet.com/netsp/google-warns-optical-networking-limitations-could-hinder-cloud-growth.html

    “It’s working but it (100 GbE) is also really bottlenecking what we do,” Hölzle said. “Both the power and the cost of the solution is on the edge of what is possible.”

    “The density that you can get is already too low,” he added.

    Hölzle said that what Google wants is to move to a new form of optical networking module that is more compact, cheaper, and industrially manufactured.

    “The optics industry today is still a bit of an artisan craft so to speak,” Hölzle said. “If you really want to get 10x performance and get the cost to work, you have to automate the process.”

    “So if we could buy it, we’d rather have 30 cables across the Pacific and not three,” Hölzle continued. “It would be a better solution for us.”

    Hölzle also wants to see continued programmability and flattening across both the IP and optical layer of the network to enable improved manageability and agility for application deployment. Overall Hölzle said that cloud architecture increases bandwidth demands, which is why more innovation is needed at the optical layer to dramatically help hyperscale networks to grow.

  25. Tomi Engdahl says:

    AWS emits EnginFrame 2017 for cloudy HPC
    Simpler cluster config
    https://www.theregister.co.uk/2017/03/28/aws_enginframe_for_hpc/

    Amazon Web Services’ 2016 acquisition of NICE is bearing fruit, with AWS lifting the lid on the next iteration of a high performance computing service called EnginFrame.

    EnginFrame 2017 is designed to run on top of the AWS cloud, and make it simpler to deploy a Linux-based HPC cluster “in less than an hour”, AWS says.

    AWS chief evangelist Jeff Barr writes that the basis of an EnginFrame 2017 deployment is a CloudFormation template, which gives the user a consistent interface to launch new clusters.

    Available now, EnginFrame 2017 is charged according to the AWS resources consumed – EC2 instances, EFS storage and the like

  26. Tomi Engdahl says:

    Is Intel’s data center dominance really coming to an end?
    http://www.cablinginstall.com/articles/pt/2017/03/is-intel-s-data-center-dominance-really-coming-to-an-end.html?cmpid=enl_cim_cimdatacenternewsletter_2017-03-28

    Intel (NASDAQ: INTC) has enjoyed a near-monopoly in the server chip market in recent years, with a market share of roughly 99%. Its x86 chips are the standard, and without any real competition from Advanced Micro Devices (NASDAQ: AMD) , the only other x86 chip maker, Intel has been free to enjoy its dominance. AMD will make its return to the server chip market later this year when it launches Naples, server chips built on its Zen architecture.

    Operating margin in Intel’s data center segment has routinely topped 50%, and the growth of cloud computing has driven both revenue and profits higher. During 2016, the data center segment generated $17.2 billion of revenue and a whopping $7.5 billion of operating income.

    The company is now warning that data center growth will slow and margins will contract, reflecting a return of competition to the server chip market and its plan to bring server chips to new process nodes before PC chips. Intel expects sales of its server CPUs to grow by just 6% annually through 2021, and for its data center operating margin to drop to the low- to mid- 40% range.

    These lower estimates may not be pessimistic enough.

    Intel’s Data Center Monopoly Is Coming to an End
    http://www.nasdaq.com/article/intels-data-center-monopoly-is-coming-to-an-end-cm764734


    The return of AMD

    AMD’s server chip business has been insignificant for quite some time. The company was once a major force, with a roughly 26% unit share of the x86 server chip market in 2006. But it has been outclassed by Intel ever since, driving its market share down near zero.

    AMD will make its return to the server chip market later this year when it launches Naples, server chips built on its Zen architecture. AMD has already launched Ryzen, the PC version of Zen, and while reviews have been mixed, AMD has significantly closed its performance gap with Intel. AMD still can’t compete when it comes to single-threaded performance, but Ryzen’s copious cores and competitive pricing make the chips a clear winner for certain workloads.

    Naples will come with up to 32 cores, and AMD is touting memory bandwidth and input/output capacity as the key selling points. Compared to a comparable Intel Xeon chip, AMD claims that Naples will have 45% more cores, 60% more I/O capacity, and 122% more memory bandwidth. If Ryzen is any indication, Naples will certainly be superior for some workloads, but it’s unlikely to best Intel in general.

    Still, Naples should make AMD a player in the server chip market once again, forcing Intel to compete for the first time in years. Exactly how much market share AMD will be able to win is anyone’s guess, but it’s all downside for Intel.

    The push toward an architecture-agnostic cloud

    Beyond AMD and x86 chips, there are two other threats to Intel’s server chip monopoly. ARM chips are finally making their way to the data center, with Microsoft recently announcing that its Project Olympus server design now supports both x86 and ARM chips. The company is already testing ARM chips from Qualcomm and Cavium for search, storage, and machine learning, and it has created a version of Windows Server for ARM processors.

    Microsoft isn’t the only cloud computing company looking to lower its dependence on Intel. Alphabet’s Google announced last year that it was developing an open server architecture that supports the upcoming POWER9 processor from IBM.

    The push from the major cloud infrastructure providers to support different architectures isn’t surprising. Shifting from a world where Intel is the only option to a world where there are a multitude of options will surely bring server chip prices down, allowing cloud computing to get cheaper and further eat into the traditional server market.

  27. Tomi Engdahl says:

    Report: Google will launch 3 more cloud data center regions before 2019
    http://www.cablinginstall.com/articles/pt/2017/03/report-google-will-launch-3-more-cloud-data-center-regions-before-2019.html?cmpid=enl_cim_cimdatacenternewsletter_2017-03-28

    At the Google Cloud Next conference (March 8-10) in San Francisco, Google reportedly announced that it will open three more regions of data centers around the world by the end of 2018. The new facilities will be coming to the Netherlands, Canada, and California, said Urs Holzle, Google’s senior vice president of technical infrastructure.

  28. Tomi Engdahl says:

    Hybrid Cloud Storage Delivers Performance and Value
    http://www.linuxjournal.com/content/hybrid-cloud-storage-delivers-performance-and-value?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    But if your enterprise could blend the two (on-site and cloud storage) in a way that helped manage costs down while providing for your enterprise availability, scalability, security and compliance needs, it would be a viable solution. Hybrid Cloud Storage (HCS) creates a perfect method for your enterprise to place data exactly where it makes sense, depending on its class, and it helps manage costs effectively.

    To derive the most benefit from HCS, look for a partner that provides deployment options based on your data workloads and that uses the same technology for on-site and cloud storage. This will make management of data and balancing much easier than trying to blend different technologies for cloud and on-site storage.

    Are you making the most of your hybrid cloud?
    http://www-03.ibm.com/systems/uk/storage/hybrid-cloud-storage/complete/?cm_mmc=Earned-_-Systems_Systems+-+Hybrid+Cloud+Storage-_-GB_GB-_-HCS-inside-article-LinuxJournal-Post8&cm_mmca1=000016CV&cm_mmca2=10003252

    Extend your infrastructure with new hybrid cloud capability for all of your data and storage.

    See how IBM’s strategic approach handles more data, cloud vendors and storage systems than any other solution.

  29. Tomi Engdahl says:

    AWS plants fresh roots in green-leaning Sweden
    Fourth EU region planned with three availability zones
    https://www.theregister.co.uk/2017/04/04/aws_sweden_region/

    Amazon will expand the data centre footprint of AWS with the opening of a region in Sweden next year.

    The giant announced on Tuesday plans for an AWS EU Stockholm region comprised of three availability zones.

    The Stockholm region will be AWS’s fourth EU region, following London, which opened at the end of 2016, Frankfurt and Ireland.

    The London region is understood to have been thrown up on hardware that AWS has leased from local data centre partners.

  30. Tomi Engdahl says:

    Jay Greene / Wall Street Journal:
    Data center arms race: top cloud computing firms Amazon, Alphabet, and Microsoft spent a combined $31.5B in 2016 on capital expenses and leases, up 22% from ’15

    Tech’s High-Stakes Arms Race: Costly Data Centers
    Top three cloud-computing firms have spent $31.5 billion in 2016 on capital expenses and leases
    https://www.wsj.com/articles/techs-high-stakes-arms-race-costly-data-centers-1491557408?mod=e2fbd

    Just as oil and gas companies plow billions of dollars into searching for new energy reserves, big technology companies are spending lavishly on a global footprint of sophisticated computers.

  31. Tomi Engdahl says:

    Nick Pappageorge / CB Insights:
    Amazon building new business pillars in AI, next-gen logistics, and enterprise cloud, a deep dive into the firm’s M&A, investment, jobs, patent, and other data shows — Seattle-based Amazon is doubling down on AWS and its AI assistant, Alexa. It’s seeking to become the central provider for AI-as-a-service.

    Amazon Strategy Teardown: Building New Business Pillars In AI, Next-Gen Logistics, And Enterprise Cloud Apps
    https://www.cbinsights.com/blog/amazon-strategy-teardown/

    Seattle-based Amazon is doubling down on AWS and its AI assistant, Alexa. It’s seeking to become the central provider for AI-as-a-service.

    Amazon is the exception to nearly every rule in business. Rising from humble beginnings as a Seattle-based internet bookstore, Amazon has grown into a propulsive force in at least five different giant industries: retail, logistics, consumer technology, cloud computing, and most recently, media and entertainment. The company has had its share of missteps — the expensive Fire phone flop comes to mind — but is also rightly known for strokes of strategic genius that have put it ahead of competitors in promising new industries.

    This was the case with the launch of cloud business AWS in the mid-2000s, and more recently the surprising consumer hit in the Echo device and its Alexa AI assistant. Today’s Amazon is far more than just an “everything store”; it’s a leader in consumer-facing AI and enterprise cloud services. And its insatiable appetite for new markets means competitors must always be on guard against its next moves.

    As the biggest online retailer in America, the company accounts for 5% of all retail spending in the country, and it has been publicly traded for two decades.

  32. Tomi Engdahl says:

    The 10 best cloud and data center conferences to attend in 2017
    http://www.cablinginstall.com/articles/pt/2017/04/the-10-best-cloud-and-data-center-conferences-to-attend-in-2017.html?cmpid=enl_cim_cimdatacenternewsletter_2017-04-25

    Data centers are quickly changing the way information and business are being shared, collected and stored in the world today. To help better understand how this new data-rich environment works, it’s important to attend some of the largest data center conferences & events for 2017.

  33. Tomi Engdahl says:

    Tom Krazit / GeekWire:
    AWS reports 42% YoY jump in revenue to $3.66B, up from $2.57B a year ago, as growth slows slightly — The crown jewel of Amazon’s business, Amazon Web Services, posted a 42 percent jump in revenue during the first fiscal quarter of 2017, as it continues to set the standard for cloud computing.

    AWS revenue up 42 percent to $3.66 billion in Q1 2017, operating income reaches $890 million
    http://www.geekwire.com/2017/aws-revenue-42-percent-3-66-billion-q1-2017-operating-income-reaches-890-million/

    Amazon.com:
    Amazon posts Q1 revenue of $35.71B, up 23% YoY, as operating income declines 6% YoY to $1B — Amazon.com, Inc. (NASDAQ: AMZN) today announced financial results for its first quarter ended March 31, 2017. — Operating cas

    Amazon.com Announces First Quarter Sales up 23% to $35.7 Billion
    http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=2266665

  34. Tomi Engdahl says:

    Larry Dignan / ZDNet:
    Microsoft productivity business unit had $8B in revenues, up 22% YoY; LinkedIn added $975M to unit’s sales in a first full quarter since acquisition — But Azure revenue remains a mystery. As usual Office and enterprise tools carry the quarter. — Microsoft said its commercial cloud revenue …

    Microsoft’s Q3 strong as commercial cloud revenue hits $15.2 billion run rate
    But Azure revenue remains a mystery. As usual, Office and enterprise tools carry the quarter.
    http://www.zdnet.com/article/microsofts-q3-strong-as-commercial-cloud-revenue-hits-15-2-billion-run-rate/

  35. Tomi Engdahl says:

    Microsoft:
    Microsoft reports Q3 2017 revenue of $22.1B as its Intelligent Cloud business hits $6.8B, driven by 93% YoY growth of Azure; Surface revenue down 26% YoY

    Microsoft Cloud strength highlights third quarter results
    Read more at https://news.microsoft.com/2017/04/27/microsoft-cloud-strength-highlights-third-quarter-results-2/#BPZIszGhIxWyBFWh.99

  36. Tomi Engdahl says:

    Reuters:
    Google’s non-ad business categorized as “Other Revenue”, including cloud, Play store, Pixel, posted a 49.4% YoY jump to $3.1B, representing 13% of total revenue — Alphabet Inc’s non-advertising business, which houses its cloud unit, Pixel smartphones and the Play store …

    Google’s search for non-ad revenue puts spotlight on cloud, Pixel
    http://www.reuters.com/article/us-alphabet-results-cloud-idUSKBN17U1N7

    Alphabet Inc’s non-advertising business, which houses its cloud unit, Pixel smartphones and the Play store

    categorized as “Other Revenue” in its earnings report, posted a 49.4 percent jump in revenue to $3.10 billion

    The business now represents about 13 percent of Alphabet’s total revenue

    To be sure, Google’s cloud venture is still much smaller than market leader Amazon.com Inc’s Amazon Web Services and Microsoft Corp’s Azure.

    But Google is investing heavily.

    Amazon’s cloud business grew 43 percent to $3.66 billion in the first quarter. Microsoft’s cloud unit grew 93 percent.

    “We believe Google will continue to gain traction in the cloud market

  37. Tomi Engdahl says:

    Salesforce signs multi-year deal to use Dell infrastructure in its datacenters
    Dell also announced that Salesforce plans to equip its 25,000 employees with Dell Latitude laptops.
    http://www.zdnet.com/article/salesforce-signs-multi-year-deal-to-use-dell-infrastructure-in-its-datacenters/

    Dell Technologies announced Thursday that Salesforce is signing a multi-year commitment to use Dell EMC infrastructure in its global data center footprint.

    According to Dell, the commitment is an expansion of the current relationship between the two companies, and will have Salesforce utilizing Dell’s Isilon scale-out NAS storage arrays, data protection products and PowerEdge servers. Dell also announced that Salesforce plans to equip its 25,000 employees with Dell Latitude laptops.

    As for the infrastructure plans, it’s unclear how the datacenter commitment relates to Salesforce’s use of Amazon Web Services and cloud-based infrastructure.

  38. Tomi Engdahl says:

    Hybrid cloud: The ‘new’ but not-new IT service platform
    The combination effect
    https://www.theregister.co.uk/2017/05/04/is_hybrid_cloud_the_new_it_service_platform/

    Today, the term hybrid IT is typically used when talking about bridging IT on multiple premises. But this is an oversimplification. Buried deep within any hybrid IT discussion will be a need to talk about standards, compliance and some difficult decisions about how we even conceptualize our approach to IT.

    As a marketing term “hybrid-anything” means the integration of two things that were previously separate. A hybrid storage array contains both flash and magnetic media. Hybrid WAN networking is a network topology containing more than one connection type, for example MPLS and an internet-based tunnel.

    In 2017, however, when we talk about “hybrid IT” we’re talking about a combination of on-premises IT and public cloud IT. Sometimes we might even throw in service-provider hosted IT as well. But we should always be clear what sort of hybridization we are talking about.

    IT is already hybrid

    Throwing the word “hybrid” at it implies that we should think of multi-premises IT as a novelty bringing dramatic changes, especially in ease of use. Just as smooshing together a PDA, an MP3 player and a mobile phone changed the world with the smartphone, hybrid IT will change the face of IT!

    However, there is nothing special, novel or unique about hybrid IT. It isn’t something that you should consider doing. It isn’t something you need to draft long term plans for. Hybrid IT is, except in exceptionally niche cases, something you are already doing.

    IT services can be broken into three broad categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).

    IaaS has no special sauce. Push button, receive operating system. It doesn’t matter whether you’re on premises, in AWS or a service provider cloud, it’s just a VM with an OS in it. PaaS requires a little bit more attention

    IaaS and PaaS are easy to do as a multi-provider affair.
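
    That push-button quality shows in how little code it takes to get a VM from an IaaS provider. A minimal boto3 sketch against AWS EC2, for instance (the AMI ID is a placeholder and credentials are assumed to be configured); equivalents exist in every major cloud’s SDK:

        # “Push button, receive operating system”: launch a VM on an IaaS cloud.
        # Assumes configured AWS credentials; the AMI ID is a placeholder.
        import boto3

        ec2 = boto3.client("ec2")
        resp = ec2.run_instances(
            ImageId="ami-12345678",   # placeholder image: the OS you asked for
            InstanceType="t2.micro",
            MinCount=1,
            MaxCount=1,
        )
        print(resp["Instances"][0]["InstanceId"])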

    SaaS is different. SaaS has traditionally been something that is consumed as a contract with the SaaS developer directly, where the platform it runs on is not discussed. One would get a Dropbox subscription, but wouldn’t specify “Dropbox on Google Cloud”.

    This is starting to change. Smaller vendors are taking advantage of the marketplaces offered by public, private and service provider clouds. Salesforce, for example, is large enough to bully customers into ignoring which cloud provider they use, but provides services on all the major public clouds.

    Cloud brokerage

    Consuming IaaS and PaaS from more than one provider is not inherently a bad idea, but someone – or something – has to pick the best home for each workload.

    This has created a market for cloud brokers – third parties that will find you the best place to run your workloads based on criteria you select. These criteria can be price, latency, data sovereignty, data locality, regulatory certification and so forth.

    As provider prices and capabilities change, the cloud broker will advise clients to move workloads. Depending on how deeply integrated the cloud broker’s software is with the customer’s infrastructure, they may even be able to trigger workload migrations for the customer.

    It is, of course, possible to build a virtual cloud broker as a software widget.
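
    As a rough illustration of what the core of such a widget might look like, here is a minimal Python sketch that ranks providers by weighted criteria, with data sovereignty treated as a hard constraint. All provider names, prices and weights below are invented for illustration; a real broker would pull live pricing and compliance data from each provider’s APIs.

        # Toy cloud-broker logic: rank providers for a workload by weighted criteria.
        # All provider data below is invented for illustration.
        PROVIDERS = [
            {"name": "provider-a", "price_per_hour": 0.10, "latency_ms": 40, "eu_region": True},
            {"name": "provider-b", "price_per_hour": 0.08, "latency_ms": 90, "eu_region": False},
            {"name": "provider-c", "price_per_hour": 0.12, "latency_ms": 25, "eu_region": True},
        ]

        def score(provider, weights, require_eu=False):
            """Lower is better; None disqualifies the provider outright."""
            if require_eu and not provider["eu_region"]:
                return None  # data sovereignty is a hard constraint, not a trade-off
            return (weights["price"] * provider["price_per_hour"]
                    + weights["latency"] * provider["latency_ms"])

        def best_placement(providers, weights, require_eu=False):
            ranked = [(score(p, weights, require_eu), p["name"]) for p in providers]
            ranked = [r for r in ranked if r[0] is not None]
            return min(ranked)[1] if ranked else None

        # A latency-sensitive workload that must stay in the EU picks provider-c:
        print(best_placement(PROVIDERS, {"price": 1.0, "latency": 0.01}, require_eu=True))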

    The ability to easily move workloads from A to B is a prerequisite to play the multi-premises IT game properly. As a bare minimum there needs to be a way to get data and configurations from one place to another.

    This begins a discussion about standards.
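
    For object data, at least, a de facto standard already exists: the S3 API, which many public clouds and on-premises stores can speak. A hedged sketch of moving a bucket’s contents between two S3-compatible endpoints with boto3; the endpoint URL, credentials and bucket names are placeholders:

        # Copy objects between two S3-compatible providers via the de facto S3 API.
        # Endpoint, credentials and bucket names are placeholders.
        import boto3

        source = boto3.client("s3")  # e.g. AWS itself, using default credentials
        dest = boto3.client(
            "s3",
            endpoint_url="https://s3.example-provider.com",  # any S3-compatible store
            aws_access_key_id="DEST_KEY",
            aws_secret_access_key="DEST_SECRET",
        )

        def migrate_bucket(src_bucket, dst_bucket):
            """Stream every object from one provider's bucket to another's."""
            paginator = source.get_paginator("list_objects_v2")
            for page in paginator.paginate(Bucket=src_bucket):
                for obj in page.get("Contents", []):
                    body = source.get_object(Bucket=src_bucket, Key=obj["Key"])["Body"]
                    dest.upload_fileobj(body, dst_bucket, obj["Key"])

        migrate_bucket("my-source-bucket", "my-destination-bucket")

    Configurations are the harder half of the problem; there is no comparable lingua franca for VM images or PaaS service definitions, which is exactly where the standards discussion gets difficult.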

    SaaS should “just work” between different providers, but often doesn’t.

    Hybrid IT is what we’re doing right now, today. Collectively, we will only increase the diversification of workload placement with time.

    Reply
  39. Tomi Engdahl says:

    RedLock Emerges from Stealth With Cloud Security Platform
    http://www.securityweek.com/redlock-emerges-stealth-cloud-security-platform

    Cloud security startup RedLock emerged from stealth mode on Tuesday with a cloud infrastructure security offering and $12 million in funding from several high profile investors.

    According to the company, its RedLock Cloud 360 platform is designed to help organizations manage security and compliance risks in their public cloud infrastructure without having a negative impact on DevOps.

    The company says its product can help security teams identify risks in their cloud infrastructure by providing comprehensive visibility into workloads and the connections between user activity, network traffic, configurations, and threat intelligence data. The solution works across multiple public cloud services, such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform.

    http://redlock.io/

    Reply
  40. Tomi Engdahl says:

    The evolution of data center infrastructure in North America
    http://www.controleng.com/single-article/the-evolution-of-data-center-infrastructure-in-north-america/9c061a12c0e8b05dee9dd7280032c95c.html

    Data centers have become increasingly important under the Industrial Internet of Things revolution. Physical and cybersecurity have to be assessed and continuously improved. What are the most crucial considerations for the IT infrastructure of a data center?

    Is the data center secure enough?

    With rising cyber security concerns, protecting servers and information assets in data centers is critical. Security—both physical and cyber—has to be assessed and continuously improved, and new systems may need to be put in place to strengthen the security posture in this sector. IT operations are a crucial aspect of most organizational operations around the world.

    How to cool the data centers down?

    A number of data center hosts are selecting geographic areas that take advantage of the cold climate to mitigate the extensive costs of cooling their server infrastructure. As data centers pack more computing power, managing the significant heat that the semiconductors generate eats up more and more of a data center’s operating costs; data centers already account for approximately 2% of total U.S. power consumption.

    Public, private or hybrid: What’s best for your data?

    For companies that continue to own and operate their own data center, their servers are used for running the Internet and intranet services needed by internal users within the organization, e.g., e-mail servers, proxy servers, and domain name system (DNS) servers. Network security elements should be deployed: firewalls, virtual private network (VPN) gateways, situational awareness platforms, intrusion detection systems, etc. An on-site monitoring system for the network and applications also should be deployed to provide insight into hardware health, multi-vendor device support, automated network device discovery and quick deployment. In addition, off-site monitoring systems can be implemented to provide a holistic view of LAN and WAN performance.
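
    As a flavor of what the simplest on-site monitoring layer does, here is a minimal Python sketch that probes the TCP ports of a few internal services and flags anything unreachable. Hostnames and ports are placeholders, and a production deployment would use a proper monitoring stack (SNMP polling, Nagios/Zabbix-style agents) rather than bare socket checks.

        # Minimal reachability probe for internal services; hostnames are placeholders.
        import socket

        SERVICES = {
            "mail server": ("mail.internal.example", 25),
            "proxy server": ("proxy.internal.example", 3128),
            "DNS server": ("dns.internal.example", 53),   # DNS also answers on TCP 53
        }

        def is_reachable(host, port, timeout=2.0):
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        for name, (host, port) in SERVICES.items():
            status = "OK" if is_reachable(host, port) else "UNREACHABLE"
            print(f"{name:>12} {host}:{port} {status}")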

    Data center infrastructure management

    Data center infrastructure management (DCIM) is the integration of information technology (IT) and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center’s critical systems.
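
    One concrete number a DCIM system tracks for capacity planning is power usage effectiveness (PUE): total facility power divided by the power delivered to the IT equipment, with 1.0 as the theoretical ideal. A toy calculation with invented meter readings:

        # Power usage effectiveness (PUE) from facility meter readings (values invented).
        total_facility_kw = 1500.0   # everything: IT load, cooling, lighting, losses
        it_equipment_kw = 1000.0     # servers, storage and network gear only

        pue = total_facility_kw / it_equipment_kw
        print(f"PUE = {pue:.2f}")    # 1.50: half a watt of overhead per watt of IT load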

    Data center boom in North America

    The combination of cheap power and cold weather puts Canada and upper regions of the United States in a similar class with Sweden and Finland, which host huge data centers for Facebook and Google.

    Reply
  41. Tomi Engdahl says:

    Amazon AWS holds onto lead in cloud infrastructure
    http://www.cloudpro.co.uk/leadership/cloud-essentials/6805/amazon-aws-holds-onto-lead-in-cloud-infrastructure

    Canalys figures show solid growth in cloud infrastructure

    Amazon’s AWS holds the lead on cloud infrastructure, as the market as a whole grew by 42%.

    That’s according to analyst firm Canalys, which revealed the worldwide cloud infrastructure services market grew by 42% year-on-year in the first quarter of 2017, to a total size of $11 billion.

    While Amazon held onto the lead with 31% of the market by value, its growth of 43% was slower than that of Microsoft and Google, which were up 93% and 74% respectively.

    “Competition for enterprise customers is intensifying among leading cloud service providers, which are investing heavily to secure key national and global accounts,” said Canalys analyst Daniel Liu. “Timing is crucial, as many large accounts are assessing, formulating and executing strategies to move existing workloads and infrastructure to the cloud, and develop new types of workloads as part of digital transformation initiatives.”
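
    Those figures are easy to cross-check with back-of-the-envelope arithmetic (the inputs are rounded, so the result is approximate):

        # Back-of-the-envelope check of the Canalys Q1 2017 figures (rounded inputs).
        market_total_bn = 11.0   # worldwide cloud infrastructure services, Q1 2017
        aws_share = 0.31         # AWS's share of the market by value

        print(f"AWS revenue ~ ${market_total_bn * aws_share:.1f}bn")
        # ~ $3.4bn, consistent with the ~$3.5bn AWS figure Canalys reports in the next story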

    Reply
  42. Tomi Engdahl says:

    Microsoft Azure almost doubles infrastructure cloud market presence
    Familiarity – and dual-cloud strategies – breeds growth
    https://www.theregister.co.uk/2017/05/16/iaas_competition_intensifies/

    Competition for enterprise IT spend is intensifying with Microsoft and Google applying pressure to AWS.

    Microsoft’s share of the cloud infrastructure market nearly doubled in the first three months of this year, according to analysts Canalys.

    Microsoft managed IaaS market growth of 93 per cent to just under $1.5bn compared to the same period a year ago.

    Also expanding quickly was Google, which – invigorated under the leadership of ex-VMware chief Diane Greene – grew 74 per cent to over $500m.

    Both come from much lower starting points, with AWS remaining the dominant provider – and that far larger base is why its growth looks relatively slow.

    Amazon’s IaaS grew 43 per cent to $3.5bn, Canalys found.

    Reply
  43. Tomi Engdahl says:

    Azure becomes double DaaS-aster zone as VMware loads up
    Microsoft’s weird DaaS licensing melts away when it has a sniff of Azure usage
    https://www.theregister.co.uk/2017/05/17/vmware_horizon_daas_in_azure/

    VMware’s got the green light to deliver virtual Windows desktops and packaged apps from Microsoft’s Azure cloud.

    Citrix is already there, having revealed its efforts back in March 2017. But VMware thinks its Horizon software can do a little better by offering a single control plane capable of managing virtual desktops across multiple platforms, be they on-premises or in different clouds. VMware already has a DaaS arrangement with Bluemix, so is offering the prospect of one console in which to keep a steady hand on lots of virtual desktops even if they’re scattered around the globe.

    Virtzilla also reckons it has licensing nailed, so you’ll be able to run virtual desktops in different bit barns without falling foul of Microsoft’s Licensing Corps.

    Reply
  44. Tomi Engdahl says:

    Zack Kanter / TechCrunch:
    How Amazon is eliminating internal inefficiencies and avoiding technological stagnation by exposing its internal operations to external competition — I co-founded a software startup in December. Each month, I send out an update to our investors to keep them updated on our progress.

    Why Amazon is eating the world
    https://techcrunch.com/2017/05/14/why-amazon-is-eating-the-world/

    Reply
  46. Tomi Engdahl says:

    Microsoft opens Azure India to the world, not just Indian users
    Those of you targeting Indian users can now do so with lower latency and local data storage
    https://www.theregister.co.uk/2017/04/12/azure_india_open_to_global_users/

    Microsoft’s opened its three Indian Azure data centres to the world.

    Azure India kicked off in September 2015 but at the time Microsoft noted that “The India regions are currently available to volume licensing customers and partners with a local enrollment in India”, adding that “The India regions will open to direct online Azure subscriptions in 2016.”

    It looks like Redmond missed that deadline by a few months, because on April 11th the company announced that “global companies can now benefit from access to the three Azure regions in India.”

    Reply
  47. Tomi Engdahl says:

    Cloud giants ‘ran out’ of fast GPUs for AI boffins
    Capacity droughts hit just before conference paper deadlines, say researchers
    https://www.theregister.co.uk/2017/05/22/cloud_providers_ai_researchers/

    Top cloud providers struggled to provide enough GPUs on-demand last week, AI experts complained to The Register.

    As a deadline for research papers loomed for a major conference in the machine-learning world, teams around the globe scrambled to rent cloud-hosted accelerators to run tests and complete their work in time to submit their studies to be included in the event.

    That, we’re told, sparked a temporary shortage in available GPU capacity.

    Graphics processors are suited to machine learning, as they can perform the vector calculations that neural networks need extremely fast and in parallel, compared with general-purpose CPUs.
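
    When on-demand capacity tightens like this, spot pricing is one place the squeeze becomes visible. A small boto3 sketch for pulling recent spot prices of a GPU instance type (p2.xlarge was AWS’s mainstream GPU instance at the time); it assumes AWS credentials and a default region are already configured:

        # Recent spot prices for a GPU instance type; price spikes hint at scarce capacity.
        # Assumes AWS credentials and a default region are configured for boto3.
        from datetime import datetime, timedelta
        import boto3

        ec2 = boto3.client("ec2")
        history = ec2.describe_spot_price_history(
            InstanceTypes=["p2.xlarge"],
            ProductDescriptions=["Linux/UNIX"],
            StartTime=datetime.utcnow() - timedelta(days=1),
            MaxResults=20,
        )
        for entry in history["SpotPriceHistory"]:
            print(entry["AvailabilityZone"], entry["Timestamp"], entry["SpotPrice"])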

    Reply
  48. Tomi Engdahl says:

    Timothy B. Lee / Vox:
    Google, Microsoft, Amazon, and others bolster cutting-edge AI tools for third-party developers, setting up the next tech platform war

    Artificial intelligence is getting more powerful, and it’s about to be everywhere
    https://www.vox.com/new-money/2017/5/18/15655274/google-io-ai-everywhere

    There wasn’t any one big product announcement at the Google I/O keynote on Wednesday, the annual event at which thousands of programmers meet to learn about Google’s software platforms. Instead, it was a steady trickle of incremental improvements across Google’s product portfolio. And almost all of the improvements were driven by breakthroughs in artificial intelligence — the software’s growing ability to understand complex nuances of the world around it.

    Companies have been hyping artificial intelligence for so long — and often delivering such mediocre results — that it’s easy to tune it out. AI is also easy to underestimate because it’s often used to add value to existing products rather than creating new ones.

    But even if you’ve dismissed AI technology in the past, there are two big reasons to start taking it seriously. First, the software really is getting better at a remarkable pace. Problems that artificial intelligence researchers struggled with for decades are suddenly getting solved

    “Our software is going to get superpowers” thanks to AI, says Frank Chen, a partner at the venture capital firm Andreessen Horowitz. Computer programs will be able to do things that “we thought were human-only activities: recognizing what’s in a picture, telling when someone’s going to get mad, summarizing documents.”

    Reply
  49. Tomi Engdahl says:

    AWS signs Java ‘father’ James Gosling
    https://venturebeat.com/2017/05/22/aws-signs-java-father-james-gosling/

    Amazon Web Services has added another computer science heavyweight to its employee roster. James Gosling, often referred to as the “Father of Java,” announced on Facebook Monday that he would be joining the cloud provider as a distinguished engineer.

    Gosling came up with the original design of Java and implemented its first compiler and virtual machine as part of his work at Sun Microsystems. He left Sun in 2010 after the company was acquired by Oracle, spent a short time at Google, and most recently worked at Liquid Robotics designing software for an underwater robot.

    Reply
  50. Tomi Engdahl says:

    IBM gathering cloud capacity via new data centers
    http://www.cablinginstall.com/articles/pt/2017/05/ibm-gathering-cloud-capacity-via-new-data-centers.html?cmpid=enl_cim_cimdatacenternewsletter_2017-05-23

    IBM is beefing up its cloud capacity with four new data centers in the United States to meet growing demand and accommodate internet of things, blockchain and quantum computing applications. Two of the new facilities will be in Dallas and two will be in the Washington, D.C., region.

    IBM powers up new data centers
    https://gcn.com/articles/2017/05/05/ibm-data-centers.aspx

    These two new Washington, D.C. facilities give IBM a total of five data centers in the capital region, including one dedicated entirely to government business.

    Each of the four new facilities offers a full range of cloud infrastructure services, including bare metal servers, virtual servers, storage, security services and networking. With services deployed on demand and full remote access and control, customers can tailor public, private or hybrid cloud environments to suit their needs.

    The new WDC07 location houses 10,000 servers, and WDC06 has capacity for 6,000. The bandwidth in the new centers will be about five times what is seen in traditional switching, Romero said, and the servers use both NVIDIA GPUs and IBM’s latest generation of Power9 CPUs.

    These new FedRAMP-compliant data centers will help IBM meet government demand for cloud services, which is increasing at about the same rate as IBM’s overall growth rate, Romero said.

    Statista numbers, reported by Forbes, show that overall spending on cloud infrastructure could grow from $38 billion in 2016 to $176 billion in 2026.

    Reply
