Who's who of the cloud market

Seemingly every tech vendor has a cloud strategy, with new products and services dubbed “cloud” coming out every week. But who are the real market leaders in this business? Research firm Gartner’s answer lies in its Magic Quadrant report for the infrastructure as a service (IaaS) market, shown in the Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article.

Interestingly, missing from this quadrant figure are big-name companies that have invested heavily in the cloud, including Microsoft, HP, IBM and Google. The reason is that the report only includes providers whose IaaS clouds were in general availability as of June 2012 (Microsoft, HP and Google had clouds in beta at the time).

Gartner reinforces what many in the cloud industry believe: Amazon Web Services is the 800-pound gorilla. Gartner also found one big minus with AWS: a “weak, narrowly defined” service-level agreement (SLA) that requires customers to spread workloads across multiple availability zones. AWS was not the only provider whose SLA details drew criticism.

Read the whole Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article to see Gartner’s view of the cloud market today.

1,065 Comments

  1. Tomi Engdahl says:

    How low oil prices are depressing Houston’s data center market
    http://www.cablinginstall.com/articles/pt/2016/08/how-low-oil-prices-are-depressing-houston-s-data-center-market.html?cmpid=Enl_CIM_DataCenters_August92016&eid=289644432&bid=1491747

    The Houston Business Journal’s Joe Martin reports that “the Houston data center market has tempered off in the first half of 2016 as low oil prices continue to hamper sectors driven by the oil and gas industry.”

    According to JLL’s North American data center report, Houston has seen a slowdown in supply absorption for data centers. “We don’t have new companies, and we don’t have expansion of older entrenched companies (in the data center market),” said Bo Bond, central region lead for JLL’s data center solutions group.

    MIT: Scalable cloud infrastructure set to transform data center
    http://www.cablinginstall.com/articles/pt/2016/07/mit-scalable-cloud-infrastructure-set-to-transform-data-center.html?cmpid=Enl_CIM_DataCenters_August92016&eid=289644432&bid=1491747

    “Software-defined convergence is analogous to the smartphone,” observes Don Frame, Data Center Group brand director, Lenovo North America. “Back in the day, we used to wear utility belts at work with a phone, PDA, pager, and calculator. Now all those devices are software defined and reside on one device, much like software-defined convergence has transformed the data center.”

    – “A scalable cloud infrastructure changes the way IT services are delivered to end users. The cloud deployment model provides an infrastructure that grows or recedes based on demand, opening the door for data center automation to automatically shift workloads from overburdened clusters to underutilized assets.”

    – “Today, IT teams need to break down the old ideologies of hardware and seize the opportunities offered by cloud deployment.”

    Reply
  2. Tomi Engdahl says:

    World hyperscale data center market: Opportunities and forecasts to 2022
    http://www.cablinginstall.com/articles/pt/2016/08/world-hyperscale-data-center-market-opportunities-and-forecasts-to-2022.html?cmpid=Enl_CIM_DataCenters_August92016&eid=289644432&bid=1491747

    Research and Markets has announced the addition of the “World Hyperscale Data Center Market – Opportunities and Forecasts, 2014 – 2022” report to their offering.

    The main idea behind developing hyperscale architecture is to start with a small infrastructure to keep the initial investment minimal. With increasing demand, new nodes can be added to the cluster to expand the initial infrastructure. Hyperscale data centers are largely adopted by key companies, such as Amazon and Google, thereby emerging as one of the fastest growing technologies in the IT infrastructure world.

    Efficiency is one of the key factors to be considered in a hyperscale data center, in addition to the design and layout of the facility. Designing is a very important factor as it helps to minimize the inefficiency at the rack, node, and facility level.

    Hyperscale data centers usually have compute nodes ranging from thousands to tens of thousands.

    Data center architecture lessons per Isaac Newton
    http://www.cablinginstall.com/articles/pt/2016/08/data-center-architecture-lessons-from-isaac-newton.html?cmpid=Enl_CIM_DataCenters_August92016&eid=289644432&bid=1491747

    Sir Isaac Newton remains our favorite source for axiomatic laws of physics, despite giving us the language of calculus. Particularly relevant for today’s discussion is Newton’s third law as formally stated: “For every action, there is an equal and opposite reaction.”

    In the cosmology of the data center, this existentially proves itself in the network whenever there are significant changes in application infrastructure and architectures. As evidence, consider the reaction to first, virtualization, and now, containerization, APIs, and microservice architectures.

    These changes, while improving speed and agility of application development and delivery, have created greater mass in the data center, essentially changing the center of gravity and pulling many network services toward it.

    The result is a growing application network that is separate from “the network” in which developers and operations (DevOps) tends to provision, manage, and deploy not just the services and apps, but the network services necessary to support them.

    But the core network, and the need for it, has not diminished. Indeed, it has grown to epic importance to the business, as the core network becomes the primary lifeline through which all data flows, both inbound and out. Should that lifeline falter, or slow, business will be impaired. Productivity will plummet, and profit will plunge. Brand reputation will suffer

    Reply
  3. Tomi Engdahl says:

    In 2014 Malcolm Turnbull said ‘Nobody likes outages’ in the cloud
    http://www.theregister.co.uk/2016/08/09/censusfail/

    Redundancy is important, said Australian PM. So what does he think of Australia’s cloud-hosted census failing?

    Turnbull launched DiData’s cloud just four months after the company’s cloud experienced a two-day outage. So at the launch of the government cloud, your correspondent asked Turnbull what he thought of that event.

    My notes from the day record Turnbull saying “Nobody likes systems failing or any kind of outage,” before adding that the “important thing is to build in appropriate levels of redundancy and to learn from the incident to ensure that it does not happen again, or the likelihood is greatly reduced.”

    I mention that launch and Turnbull’s words in light of the failure of Australia’s census, which was hosted in the cloud.

    Reply
  4. Tomi Engdahl says:

    Dan Richman / GeekWire:
    Google Cloud Platform releases three enterprise cloud database services from beta: Cloud SQL, Cloud Bigtable, and Cloud Datastore — Google Cloud Platform today released three database services from beta, signaling that they’re ready for production use. Cloud SQL, Cloud Bigtable …

    Google Cloud Platform releases new database services, fighting AWS and Azure for corporate customers
    http://www.geekwire.com/2016/google-cloud-platform-releases-new-database-services-fighting-aws-azure-corporate-customers/

    Google Cloud Platform today released three database services from beta, signaling that they’re ready for production use. Cloud SQL, Cloud Bigtable and Cloud Datastore are all now in general release, Google said in a blog post.

    The move brings Google up to where Amazon Web Services and Microsoft Azure have been for some time: positioned to handle the routine but vital database needs of corporate clients. Though Google Cloud Platform excels at AI and machine learning, it has been slow to expand into this essential area.

    “Today marks a major milestone in our tremendous momentum and commitment to making Google Cloud Platform the best public cloud for your enterprise database workloads,” Dominic Preuss, lead product manager for storage and databases, wrote in the post.

    Reply
  5. Tomi Engdahl says:

    Google adds SQL Server to its cloudy database collection
    Nearline storage mystery deepens as Cloud SQL, Datastore and Bigtable go live
    http://www.theregister.co.uk/2016/08/19/google_adds_sql_server_to_its_cloudy_database_collection/

    Google’s cloud has grown more database options.

    The big move for those in the former camp is the addition of Microsoft’s SQL Server to Google Compute Engine, a decision the company says was made because “Our top enterprise customers emphasize the importance of continuity for their mission-critical applications.”

    It’s not hard to unpack that sentence as Google having been told it has a fine cloud, but not so fine that users would re-tool applications to get into it. Adding SQL Server brings Google to parity with its main rivals, not a bad thing.

    The company is also trying to leapfrog those rivals with its own databases. It’s now got three and they’re all generally available.

    Cloud SQL is a cloudified cut of MySQL 5.7. Google reckons it’s seriously fast and has slim latency. But then they would say that, wouldn’t they?

    Cloud Datastore is a NoSQL document database and Cloud Bigtable is an Apache HBase client-compatible NoSQL wide-column database service.
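    Because Cloud SQL is exposed as a standard MySQL 5.7 endpoint, any ordinary MySQL client should be able to talk to it. A minimal sketch, assuming a hypothetical instance address, account and database (using the pymysql driver as just one possible client):

    import pymysql

    # Hypothetical Cloud SQL instance details; a real instance would have its
    # own IP/hostname and accounts configured in the Cloud SQL console.
    conn = pymysql.connect(
        host="203.0.113.10",
        user="app_user",
        password="example-pass",
        db="inventory",
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT VERSION()")  # should report a 5.7.x server
            print(cur.fetchone())
    finally:
        conn.close()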

    Reply
  6. Tomi Engdahl says:

    Go Boldly to the Cloud: Embracing the Security Benefits of the Cloud Infrastructure
    http://www.securityweek.com/go-boldly-cloud-embracing-security-benefits-cloud-infrastructure

    Less than ten minutes driving west from my home, you encounter a vast expanse of large, windowless buildings. Situated near them are impressive physical plants dedicated to cooling these buildings and providing back-up power in the case of a power failure. Whenever I drive past these complexes I always point them out to my passengers and say: “You have heard about the cloud – well, there it is.”

    Businesses are moving mission-critical applications to the cloud at a rapid pace. The cost savings and other benefits simply are too persuasive not to move to the cloud. So why do organizations hesitate? Analyst studies cite security concerns as the number one inhibitor of moving sensitive applications to the cloud.

    Let me examine these concerns by breaking down the conversation into two pieces: the cloud infrastructure and the applications running in the cloud.

    I was once concerned that moving to the cloud was fraught with unknown perils. Then I walked into a cloud security panel of really smart, progressive security types at the RSA Conference in 2014 called “Is the Cloud Really More Secure Than On-Premise?” No less a luminary than Bruce Schneier told the audience to essentially wise up and realize that established cloud providers had more security resources and expertise than any enterprise, and that they provide security that is comparable to or exceeds that of any enterprise.

    Reply
  7. Tomi Engdahl says:

    Cloudflare Faces Lawsuit For Assisting Pirate Sites
    https://yro.slashdot.org/story/16/08/24/1639200/cloudflare-faces-lawsuit-for-assisting-pirate-sites

    In recent months CloudFlare has been called out repeatedly for offering its services to known pirate sites, including The Pirate Bay. These allegations have now resulted in the first lawsuit after adult entertainment publisher ALS Scan filed a complaint against CloudFlare at a California federal court. [...] Copyright holders are not happy with CloudFlare’s actions. Just recently, the Hollywood-affiliated group Digital Citizens Alliance called the company out for helping pirate sites to stay online. Adult entertainment outfit ALS Scan agrees and has now become the first dissenter to take CloudFlare to court. In a complaint filed at a California federal court, ALS describes piracy as the greatest threat to its business.

    Cloudflare Faces Lawsuit For Assisting Pirate Sites
    By Ernesto on August 23, 2016
    https://torrentfreak.com/cloudflare-faces-lawsuit-for-assisting-pirate-sites-160823/

    In recent months CloudFlare has been called out repeatedly for offering its services to known pirate sites, including The Pirate Bay. These allegations have now resulted in the first lawsuit after adult entertainment publisher ALS Scan filed a complaint against CloudFlare at a California federal court.

    As one of the leading providers of DDoS protection and an easy to use CDN service, Cloudflare is used by millions of sites across the globe.

    This includes many “pirate” sites who rely on the U.S. based company to keep server loads down.

    The Pirate Bay is one of the best-known customers, but there are literally thousands of other ‘pirate’ sites that use services from the San Francisco company.

    As a result, copyright holders are not happy with CloudFlare’s actions. Just recently, the Hollywood-affiliated group Digital Citizens Alliance called the company out for helping pirate sites to stay online.

    “The problems faced by ALS are not limited to the growing presence of sites featuring infringing content, or ‘pirate’ sites. A growing number of service providers are helping pirate sites thrive by supporting and engaging in commerce with these sites,” ALS writes

    These service providers include hosting companies, CDN providers, but also advertising brokers. The lawsuit at hand zooms in on two of them, CloudFlare and the advertising provider Juicy Ads.

    CloudFlare and Juicy Ads’ terms state that they terminate accounts of repeat infringers. However, according to ALS both prefer to keep these sites on as customers, so they can continue to profit from them.

    Reply
  8. Tomi Engdahl says:

    VMware goes back to its future with multi-cloud abstractions
    Virtzilla’s going to bet you’ve got server sprawl all over again, this time in the cloud
    http://www.theregister.co.uk/2016/08/29/vmware_cloud_foundation/

    VMware will apply its core skill – taming ill-defined pools of computing resource – to multiple clouds, in a new effort called Cross-Cloud Architecture.

    CEO Pat Gelsinger will shortly take to the stage at VMworld 2016 and explain that clouds have re-created the problem that server virtualisation solved so effectively in the mid-to-late 2000s. VMware feels that cloud users often take the second-cheapest option by paying up-front at a flat monthly rate, but then don’t use all the resources they’ve paid for. Different clouds also become silos between which it is hard to move data or applications.

    VMware fixed very similar problems in its early days. Cross-Cloud will attempt to do so again, across public clouds. It’s a technology preview for now, but the intention is to make Cross-Cloud Software-as-a-Service and to touch as many clouds as possible.

    Users will be able to move workloads and data among clouds, with NSX making sure networks come along for the ride. Or feel like they’ve come along for the ride, anyway.

    VMware’s been evolving this stuff for a few years now, with vSphere becoming progressively less arcane. But clouds – and VMware rivals like Nutanix, Simplivity and Cisco – stopped being arcane years ago.

    Reply
  9. Tomi Engdahl says:

    Sweden’s Greta wants to disrupt the multi-billion dollar CDN market
    https://techcrunch.com/2016/08/30/greta/?ncid=rss&cps=gravity_1462_6298872280350688511

    Swedish startup Greta is on a somewhat quiet mission to disrupt the multi-billion dollar Content Delivery Network (CDN) market. The young company already boasts an impressive list of angel investors — including Jan Erik Solem (founder Polar Rose and Mapillary), Hampus Jakobsson (founder TAT and Brisk), and Jeremy Yap (recently awarded best angel investor at The Europas) — and now new VC BlueYard Capital has also become a backer.

    Launched late last year, Greta has developed tech that is able to calculate the most efficient route for site content, such as images and video, and deliver it via traditional server and CDN providers or Greta’s own peer-to-peer solution, based on whichever of the two will provide the best experience for end users.

    “The problem we’re solving is that it’s difficult for companies to provide their end users with sufficient site performance, meaning that companies are losing out on potential revenue as well as consumers having to suffer through buffering videos and wasting their time waiting for slow sites to load,” says Ottosson.

    “When Greta’s script is added to a site, the site’s traffic will be analyzed in real time, and within a few hours Greta will start suggesting site specific actions to improve your site performance, such as switching CDN in a specific region, or turning on Greta’s own peer-to-peer solution,” explains Ottosson.

    “Greta’s peer-to-peer solution is based on webRTC and enables peer-to-peer content delivery directly in the browser, meaning that performance issues such as video buffering and slow or crashing sites can be avoided, especially during heavy traffic. Greta will always optimize for providing the end users with the best user experience possible”.

    Meanwhile, the fact that Greta is able to switch to browser-based P2P content delivery, without requiring the end user to explicitly download and install any extra software (presuming their browser supports webRTC), means that site and media streaming improvement can happen in regions where there might not be close proximity to existing CDN networks, such as in Africa or the Middle East.
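    The article does not show Greta’s code, but the decision it describes (serve an asset over a traditional CDN or over the WebRTC peer-to-peer path, whichever currently gives end users the better experience) can be illustrated with a small, purely hypothetical sketch:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PathStats:
        name: str                # e.g. "cdn-eu", "cdn-us", "p2p"
        median_latency_ms: float
        error_rate: float        # fraction of failed fetches

    def choose_delivery_path(candidates: List[PathStats], max_error_rate: float = 0.02) -> str:
        """Pick the healthiest, lowest-latency delivery path for upcoming requests."""
        usable = [p for p in candidates if p.error_rate <= max_error_rate]
        if not usable:           # everything looks unhealthy: stay on the default CDN
            return "cdn-default"
        return min(usable, key=lambda p: p.median_latency_ms).name

    # Example: peer-to-peer wins in a region where browsers peer well and the CDN is far away.
    stats = [PathStats("cdn-eu", 180.0, 0.01), PathStats("p2p", 95.0, 0.005)]
    print(choose_delivery_path(stats))   # -> "p2p"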

    Reply
  10. Tomi Engdahl says:

    Dina Bass / Bloomberg:
    Box and IBM partner to create Relay, an enterprise workflow management tool designed to automate complex tasks; a test version is coming in Q4

    Box Banks on New Product Built With IBM to Reach More Business Customers
    Software provides simpler, one-stop tool for collaborative projects.
    http://www.bloomberg.com/news/articles/2016-09-06/box-banks-on-new-product-built-with-ibm-to-reach-more-business-customers

    Box Inc., trying to expand revenue amid slower billings growth, will unveil new software developed with IBM to help companies set up and manage document-heavy workflows like recruiting, budgeting, sales and customer management.

    Called Box Relay, the product lets customers build processes and invite workers and outside partners to participate. They can review, edit, upload and reject needed documents and receive alerts to keep up to date on progress. It relies on workflow software from Box partner International Business Machines Corp.

    Reply
  11. Tomi Engdahl says:

    Betting Big
    http://smart-industry.net/amazon-aws-iot/

    Amazon, already a heavyweight in cloud services, is joining the IoT game with the Amazon Web Services IoT platform (AWS IoT). What is their strategy and how will they stack up against major competitors?

    The ‘Bezos Napkin Diagram’, as it’s also known, summarizes how Amazon conceives of its business and the role that new developments, like the IoT, play. It started with a foundational question: what do our customers really want? They felt that the answer was ‘choice’ and ‘selection’ from one source. That drives customer experience, which drives usage and traffic. Then, back in 2000, they paused…

    Those who recognize the value of being ecosystem enablers will be the winners

    By sharing their “platform” with others, Amazon could also reduce their total cost structure over time, which could enable them to reduce their prices, which would drive customer experience, leading to more traffic, more merchants, more selection and so on. Werner Vogels told the analysts that since Amazon implemented this plan, it had become two businesses: an online retailer and a platform business, of which the retail part of the business was a customer in the same way as the 3rd party merchants.
    Taking the concept further, Amazon realized that it could white-label its platform to other retailers and, more recently, make its advanced cloud platform (now called AWS) available to any enterprise.

    Amazon is not alone in deploying this model. Alibaba, Apple, Google, Microsoft and 170 other powerful “digital native” organizations operate under the same business model and collectively are now worth over $4 trillion.

    Companies in all sectors are looking to add digital services to their portfolios. In parallel, governments are looking to tackle the issues of urbanization, climate change, and lower productivity caused by aging populations. They need solutions to dramatically reduce costs and create more positive outcomes for their citizens in areas such as healthcare, traffic management, energy consumption, food security – to create “smart cities” which leverage data and the IoT.

    So this is the market that Amazon is preparing for. Amazon Dash, Dash Buttons and the always-listening Echo device, are examples of experiments that it is undertaking to understand how special-purpose IoT devices can support not only their retail business but also their platform business.

    AWS and its IoT elements provide the processing power to make their platform solutions and tools for enterprises more powerful: to enable lots of companies and product developers to design, build and operate IoT-enabled services. The IoT device is the “tip of the iceberg” in creating an end-to-end solution. The IoT value chain also covers connectivity, big data, algorithms, and business processes. As more and more IoT devices get introduced, more data is generated. These devices and services can take advantage of AWS’s infrastructure.

    Back to the flywheel: the more demand for its infrastructure, the lower Amazon’s costs which, in turn, makes it more attractive to companies. AWS’s IoT-enabling products include AWS Redshift, AWS Kinesis, AWS Machine Learning and, last year they acquired 2lemetry, a cloud-based application-enabler platform in order to provide M2M capability. These products support the growing number of companies and developers looking to build IoT-based services. They support an AWS-IoT flywheel, which is the real motivator for Amazon.

    Reply
  12. Tomi Engdahl says:

    Darrell Etherington / TechCrunch:
    Box teams up with Google to become a 3rd-party storage option for Google Docs, Sheets, and Slides; Box-stored content will be searchable via Google Springboard

    Box teams up with Google for Docs and Springboard integration
    https://techcrunch.com/2016/09/07/box-teams-up-with-google-for-docs-and-springboard-integration/

    Sure there’s some kind of fruit-related event going on right now, but this week is also BoxWorks, the annual conference for the enterprise content cloud platform provider. At that event, Box CEO Aaron Levie and Google’s SVP of Google’s cloud offerings Diane Greene are announcing a partnership that turns Box into a third-party storage option for Google Docs, Sheets and Slides, and that makes Box-stored content searchable via Google Springboard.

    The tie-up makes Box a storage option for housing Google documents, spreadsheets and slide presentations, letting users of Google’s cloud-based productivity suite work directly from their existing Box-based storage repositories. That’s a useful addition for sure, because while a number of enterprise organizations use Google’s offerings for collaboration and creation, a decent number of those actually use Box for cloud-based storage since it’s the solution that was actually designed specifically for business use.

    It may seem a little odd for Google to be collaborating with Box on cloud storage when Google has its own offering there

    Reply
  13. Tomi Engdahl says:

    Forrester Report: Consider Bare-Metal As A Viable Cloud Option
    IBM
    http://www.techrepublic.com/resource-library/whitepapers/forrester-report-consider-bare-metal-as-a-viable-cloud-option/?promo=2150&ftag=LGN22ef1e6&cval=right-rail

    A bare-metal cloud offering allows you to flexibly provision dedicated physical servers with cloud dynamics, without the performance overhead of virtualization software. This report discusses bare-metal clouds, how they differ from conventional IaaS offerings, and how Infrastructure and Operations professionals can benefit from them.

    Reply
  14. Tomi Engdahl says:

    What is bare-metal cloud?
    http://www.computerweekly.com/blog/CW-Developer-Network/What-is-bare-metal-cloud

    But what is bare metal cloud? Who uses it? What does it do? — and how should developers code for this environment?

    Organisations’ collective demands for flexibility, scalability and efficiency have driven them flocking to public cloud infrastructure services, representing (as they do) an opportunity for cutting IT costs while capitalising on technology innovations.

    But, just a few short years into the cloud revolution, new options have appeared

    Degradation situations

    Performance degradation can often occur, stemming from the introduction of a hypervisor layer. While the hypervisor enables the visibility, flexibility and management capabilities required to run multiple virtual machines on a single box, it also creates additional processing overhead.

    For application architectures that demand high levels of data throughput, the ‘noisy neighbour’ side-effect of the multi-tenant design of virtualised cloud environments can be constraining.

    Multi-tenant virtualised public cloud platforms result in virtual machines competing for and restricting I/O for data-intensive workloads, leading to inefficient and inconsistent performance.

    How is bare-metal cloud different?

    The bare-metal cloud provides a way to complement or substitute virtualised cloud services with a dedicated server environment that eliminates the overhead of virtualisation without sacrificing flexibility, scalability and efficiency.

    Bare-metal cloud servers do not run a hypervisor, are not virtualised — but can still be delivered via a cloud-like service model.

    This balances the scalability and automation of the virtualised cloud with the performance and speed of a dedicated server. The hardware is fully dedicated, including any additional storage. Bare-metal cloud instances can be provisioned and decommissioned via a web-based portal or API, providing access to high-performance dedicated servers on demand.
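    As a rough illustration of that last point, provisioning and decommissioning through an API typically looks like any other REST call; the endpoint, token and plan names below are entirely hypothetical and do not describe any particular provider:

    import requests

    API = "https://api.example-baremetal.com/v1"         # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer EXAMPLE_TOKEN"}   # hypothetical API token

    # Provision a dedicated physical server on demand.
    resp = requests.post(
        f"{API}/servers",
        headers=HEADERS,
        json={"plan": "32-core-256gb", "region": "eu-west", "os": "ubuntu-16.04"},
        timeout=30,
    )
    resp.raise_for_status()
    server_id = resp.json()["id"]
    print("provisioned bare-metal server", server_id)

    # Decommission it once the data-intensive job has finished.
    requests.delete(f"{API}/servers/{server_id}", headers=HEADERS, timeout=30).raise_for_status()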

    Also, depending on the application and use case, a single bare-metal cloud server can often support larger workloads than multiple, similarly sized VMs.

    Which workloads see the most benefits?

    High-performance, bare-metal cloud functionality is ideal for instances where there is a need to perform short-term, data-intensive functions without any kind of latency or overhead delays, such as big data applications, media encoding or render farms.

    In the past, organisations couldn’t put these workloads into the cloud without accepting lower performance levels. Organisations having to adhere to rigorous compliance guidelines are also good candidates for bare-metal cloud.

    An intrinsic benefit of the bare-metal cloud environment for developers is that no special considerations need to be made when coding for these servers

    Who are the leading bare metal cloud providers?
    https://www.quora.com/Who-are-the-leading-bare-metal-cloud-providers

    Reply
  15. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Microsoft’s Azure Service Fabric for running and managing microservices is coming to Linux
    https://techcrunch.com/2016/09/13/microsofts-azure-service-fabric-for-running-and-managing-microservices-is-coming-to-linux/

    Microsoft’s CTO for Azure (and occasional novelist) Mark Russinovich is extremely bullish about microservices. In his view, the vast majority of apps — including enterprise apps — will soon be built using microservices. Microsoft, with its variety of cloud services and developer tools, obviously wants a piece of that market. With Service Fabric, the company offers a service and tooling that makes it easier to run microservices-based applications. Until now, Service Fabric only supported Windows-based machines, but starting September 26, Microsoft will launch a Linux installer for Service Fabric as a public beta, too.

    As Russinovich explained to me, Microsoft itself has been using the microservices approach internally for seven years. It wasn’t until the cloud went mainstream, though, that this approach became something smaller companies could use, too. “The way we see it is that microservices and the cloud were really meant for each other,” he said. The cloud allows you to spin up machines instantaneously — and once you layer the idea of microservices on top, you provide a far greater degree of agility to developers than was possible before (using microservices, after all, you can update and work on separate components of an application without having to worry about the other parts of the apps).

    Reply
  16. Tomi Engdahl says:

    Jordan Novet / VentureBeat:
    Oracle launches new cloud computing and storage offerings to compete with AWS, as Larry Ellison says “Amazon’s lead is over” at OpenWorld conference

    Larry Ellison says ‘Amazon’s lead is over’ as Oracle unveils new cloud infrastructure
    http://venturebeat.com/2016/09/18/larry-ellison-says-amazons-lead-is-over-as-oracle-unveils-new-cloud-infrastructure/

    Just like Larry Ellison said Oracle would, today at Oracle’s OpenWorld conference in San Francisco, the company unveiled its second generation of cloud infrastructure for third-party developers to run their applications in Oracle data centers.

    One particular instance, or virtual-machine (VM) type, that Oracle is making available in this second-generation offering — the Dense IO Shape — offers 28.8TB of storage, 512GB of memory, and 36 cores, at a price of $5.40 per hour. That product offers more than 10 times the input-output capacity of Amazon Web Services (AWS), specifically the i2.8xlarge instance, said Ellison, Oracle’s former chief executive and current executive chairman and chief technology officer.

    “Amazon’s lead is over. Amazon’s going to have serious competition going forward,” Ellison said. The company will be promoting its refreshed cloud infrastructure through the rest of its current fiscal year, which ends in May 2017, and during the next one, Ellison said.

    Ingrid Lunden / TechCrunch:
    Oracle buys Palerra, a cloud security startup co-founded by Oracle alums Rohit Gupta and Ganesh Kirti, which had raised $25M

    Oracle buys Palerra to boost its security stack
    https://techcrunch.com/2016/09/18/oracle-buys-palerra-to-boost-its-security-stack/

    Oracle is kicking off a big customer confab in San Francisco this week, and to mark the event, it’s announced an acquisition. Oracle is buying Palerra, a cloud security startup co-founded by Oracle alums Rohit Gupta (its CEO) and Ganesh Kirti (CTO).

    Terms of the deal were not disclosed but we will try to find out. Palerra was founded in 2013 (originally called Apprity) and raised $25 million with investors including Norwest Venture Partners and August Capital.

    Reply
  17. Tomi Engdahl says:

    Salesforce forms research group, launches Einstein A.I. platform that works with Sales Cloud, Marketing Cloud
    http://venturebeat.com/2016/09/18/salesforce-forms-research-group-launches-einstein-a-i-platform-that-works-with-sales-cloud-marketing-cloud/

    Salesforce is announcing today the launch of its Einstein artificial intelligence (A.I.) platform that’s implemented into several of the company’s existing cloud services: Sales Cloud, Service Cloud, Marketing Cloud, Analytics Cloud, App Cloud, Commerce Cloud, Community Cloud, and IoT Cloud.

    The company is also announcing the formation of Salesforce Research, a unit that will do research in deep learning, natural language processing, and computer vision that can be used to improve Salesforce products. The unit is led by Salesforce chief scientist Richard Socher, formerly cofounder and chief executive of A.I. startup MetaMind, which Salesforce acquired earlier this year. In a press briefing in San Francisco this week, Socher declined to say how many people were part of the team, although he did say that some of Salesforce’s 175 data scientists have joined the newly organized division.

    Reply
  18. Tomi Engdahl says:

    Teradici’s releases desktop-as-a-service-ware, as used by AWS and VMware
    Your own DaaS-aster zone is now remotely possible
    http://www.theregister.co.uk/2016/09/20/teradicis_releases_desktopasaserviceware_as_used_by_aws_and_vmware/

    Teradici has taken the code powering desktop-as-a-service (DaaS) offerings from VMware and Amazon Web Services and turned it into products you, yes you, can run.

    The “Cloud Access Software” and “Cloud Access Platform” let users run up DaaS rigs. The first product migrates apps to the cloud. The second tool delivers them from the cloud to the device of your choice.

    Users need not provide a complete desktop experience as the tools make it possible to wrap a custom GUI around an environment so that users can only see the apps managers deem fit for cloudy consumption.

    Teradici CEO Dan Cordingley told The Register he hopes that users will see the products as a new way to offer remote access to graphics-rich applications. As is often the case in the DaaS world, graphics-rich applications are the target for two reasons. Firstly, workstations are expensive to acquire, so using a shared and/or virtualised GPU can save some money. Secondly, lots of graphics-heavy application users work in nasty places – mines, oil rigs, hipster architecture offices where everyone streams Spotify – where bandwidth is at a premium. Because the two products use Teradici’s PCOIP protocol, which takes a desktop and turns it into a stream of encrypted pixels, Cordingley says DaaS can be more comfortably consumed than other means of remote access.

    Reply
  19. Tomi Engdahl says:

    Two weeks ago, the world’s largest privately-managed technology company was formed when EMC merged with Dell.

    At roughly $63 billion, the deal is one of the world’s largest ever and by far the largest acquisition by a private company. The new Dell Technologies has net sales of about $74 billion and 140,000 employees.

    “It may be that many people’s image of Dell is distorted. Perhaps Dell is still seen as a PC manufacturer, even though we have long been a big player in cloud computing and a major data center equipment supplier. In this sense, the change is not that dramatic,” says Dell’s Mika Frankenberg.

    The biggest change on the IT side is probably that few people buy plain hardware anymore. Everything is automated and purchased as a service. Finland is one of Europe’s leading markets in this respect, Enberg says.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=5083:uusi-dell-on-valmis-datavyoryyn&catid=13&Itemid=101

    Reply
  20. Tomi Engdahl says:

    TechCrunch:
    Google Apps for Work rebranded as “G Suite”, to get several machine learning-powered features; “Team Drive” brings team-level content management to Google Drive — Google announced today that its now ten-year old service Google Apps for Work (formerly Google Apps for Your Domain) …

    Google rebrands its business apps as G Suite, upgrades apps & announces Team Drive
    https://techcrunch.com/2016/09/29/google-rebrands-its-business-apps-as-g-suite-launches-team-drive-upgrades-apps/

    Reply
  21. Tomi Engdahl says:

    Brian Stevens / Google Cloud Platform Blog:
    Google Cloud Platform says eight new Cloud Regions launching in 2017: Mumbai, Singapore, Sydney, Northern Virginia, São Paulo, London, Finland, and Frankfurt — As we officially move into the Google Cloud era, Google Cloud Platform (GCP) continues to bring new capabilities to more regions …

    Google Cloud Platform sets a course for new horizon
    http://cloudplatform.googleblog.com/2016/09/Google-Cloud-Platform-sets-a-course-for-new-horizons.html

    Not only do applications running on GCP benefit from state-of-the-art infrastructure, but they also run on the latest and greatest compute platforms. Kubernetes, the open source container management system that we developed and open-sourced, reached version 1.4 earlier this week, and we’re actively updating Google Container Engine (GKE) to this new version. GKE customers will be the first to benefit from the latest Kubernetes features, including the ability to monitor cluster add-ons, one-click cluster spin-up, improved security, integration with Cluster Federation and support for the new Google Container-VM image (GCI).

    Reply
  22. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Google combines all of its cloud services under the ‘Google Cloud’ brand — Google for Work, Google’s Cloud Platform and the rest of the company’s cloud-based services are getting a new name. They have now been combined under the “Google Cloud” moniker. Google’s Diane Greene made the announcement …

    Google combines all of its cloud services under the ‘Google Cloud’ brand
    https://techcrunch.com/2016/09/29/google-combines-all-of-its-cloud-services-under-the-google-cloud-brand/

    Google for Work, Google’s Cloud Platform and the rest of the company’s cloud-based services are getting a new name. They have now been combined under the “Google Cloud” moniker. Google’s Diane Greene made the announcement at a small invite-only event in San Francisco.

    And just to confuse us, Google is also rebranding Google Apps for Work, which also falls under the Google Cloud umbrella. Google Apps for Work is now G Suite.

    The Google for Work/Google Cloud moniker encompassed a broad range of products, including core productivity apps like Gmail, Google Docs, Sheets and Slides, as well as niche offerings like Google Maps for Work and the Google Search for Work appliance. But it also includes the Google Cloud Platform cloud computing platform, Chromebooks, and Google’s enterprise mobility services.

    Reply
  23. Tomi Engdahl says:

    Ingrid Lunden / TechCrunch:
    Microsoft to build its first Azure data center in France this year, as part of a $3B investment to build its cloud services in EU

    Microsoft expands Azure data centers to France, launches trust offensive vs AWS, Google
    https://techcrunch.com/2016/10/03/microsoft-expands-azure-datacenters-to-france-looks-to-beat-aws-on-image-of-trust/

    Companies like Microsoft, Amazon and Google continue to compete fiercely in the area of cloud services for consumers, developers and enterprises, and today Microsoft made its latest moves to lay out its bid to lead the race, while also launching a new mission to position itself as the cloud provider that you can trust.

    Microsoft announced it would build its first Azure data center in France this year, as part of a $3 billion investment that it has made to build its cloud services in Europe. At the same time, the company also launched a new publication, Cloud for Global Good, with no fewer than 78 public policy recommendations in 15 categories like data protection and accessibility issues.

    The new expansion, investment and “trust” initiative were revealed by Microsoft CEO Satya Nadella, who was speaking at an event in Dublin, Ireland. He said that the expansion would mean that Microsoft covers “more regions than any other cloud provider… In the last year the capacity has more than doubled.”

    Reply
  24. Tomi Engdahl says:

    Burger barn put cloud on IT menu, burned out its developers
    Move to the cloud and you may need ‘vendor managers’ and more governance
    http://www.theregister.co.uk/2016/10/12/burger_barn_put_cloud_on_it_menu_burned_out_its_developers/

    tale of replacing bespoke business applications with bits of Oracle’s cloud.* Doing so has made for interesting news on his sub-20 IT team, especially for the team of five developers. They’re on the way out because with the old code gone, the old coders can go too.

    In their place, Nolte said he’ll hire “vendor managers,” people skilled in maintaining relationships with vendors, keeping contracts humming along nicely and negotiating for the new stuff that Hungry Jack’s needs. Nolte thinks some of his developers have the brains to make the jump to this new role, but not the proclivity. He characterised his developers as “fiddlers and tweakers” who are unlikely to abandon their coding careers.

    Another quick lesson in Australian institutions: the nation’s dominant auto club is the National Roads and Motorists Association (NRMA)
    Kotatko has just signed up for a marketing cloud and said one of the problems it has created is it’s too easy to run campaigns, because she and her team now have lots of data at their fingertips. She’s therefore been surprised at the amount of governance she has to do, lest marketers go wild with campaigns that target people from the wrong lists, breaching policy or good taste along the way.

    No, software-as-a-service won’t automatically simplify operations and cut costs
    Doing SaaS right needs at least half-a-dozen add-ons
    http://www.theregister.co.uk/2016/10/11/no_softwareasaservice_wont_automatically_simplify_operations_and_cut_costs/

    The Register has been asking around about what it takes to do SaaS right and has come to believe that among the tools you’ll probably need are:

    Backup, which may seem an odd item on a SaaS shopping list given vendors’ promises of super-redundant data centres that never go down.

    Data Loss Protection (DLP) Whether your data is on-premises or in a SaaS application, you need to make sure it can’t fall into the wrong hands. Most SaaS apps don’t have native DLP, the technology that monitors data to ensure sensitive material isn’t being e-mailed to unknown parties, saved onto removable storage media or otherwise exfiltrated. DLP’s become a standard issue on-premises security technology. It’s a no-brainer for SaaS users

    Context-aware security Imagine you work in London and that one afternoon, a few hours after you last logged in on a known good IP address, someone logs into your SaaS account from Eastern Europe with an unrecognised IP address (a minimal sketch of this kind of check follows this list).

    Cloud Access Service Brokers (CASBs) Now imagine you use multiple SaaS applications and that the context-sensitive logon and DLP policy described above needs to be implemented in all of them.

    Interconnect services Users hate even short delays when using software and that doesn’t change with SaaS. On your own networks, you can control the user experience. But SaaS nearly always has to traverse a big slab of the public internet … unless you pay for interconnect services that the likes of Equinix and Digital Realty offer to pave a fast lane between you and your preferred SaaS applications

    Mobile device management A very good reason to adopt SaaS is that most applications are ready to roll on mobile devices from day one.
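    For the context-aware security item above, here is a minimal sketch of the kind of check described: flag a sign-in whose implied travel speed from the previous sign-in is implausible. The coordinates and threshold are hypothetical placeholders; a real system would feed in IP geolocation data.

    from datetime import datetime
    from math import radians, sin, cos, asin, sqrt

    def distance_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points (haversine formula)."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 6371 * 2 * asin(sqrt(a))

    def is_suspicious(prev_login, new_login, max_kmh=900):
        """prev_login / new_login are (datetime, lat, lon); 900 km/h is roughly airliner speed."""
        (t1, lat1, lon1), (t2, lat2, lon2) = prev_login, new_login
        hours = max((t2 - t1).total_seconds() / 3600, 1e-6)
        return distance_km(lat1, lon1, lat2, lon2) / hours > max_kmh

    # London at 14:00, then an Eastern European address two hours later: flagged.
    prev = (datetime(2016, 10, 11, 14, 0), 51.5, -0.13)
    new = (datetime(2016, 10, 11, 16, 0), 44.4, 26.1)
    print(is_suspicious(prev, new))   # -> True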

    Will SaaS vendors explain this stuff?

    Reply
  25. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    VMware and Amazon partner to bring VMware’s infrastructure software to AWS starting in 2017 — It’s been an open secret that Amazon’s AWS division and VMware were going to announce a partnership at a press conference in San Francisco later today. Thanks to VMware mistakenly posting its announcement early …

    VMware’s new cloud service will run on AWS
    https://techcrunch.com/2016/10/13/vmware-cloud-on-aws/

    It’s been an open secret that Amazon’s AWS division and VMware were going to announce a partnership at a press conference in San Francisco later today.

    In what is surely a play to get more enterprises to move to AWS over its competitors — and to protect VMware’s leadership around virtual machines — VMware and AWS are bringing VMware’s software-defined data center software to AWS under the ‘VMware Cloud on AWS’ moniker.

    This means that all of VMware’s infrastructure software like vSphere, VSAN and NSX will soon run on AWS. The service is currently in its technology preview phase; an invite-only beta will start in early 2017, and the service will likely come out of beta in mid-2017.

    The service will be operated, sold and supported by VMware (not AWS) but integrate with the rest of AWS’ cloud portfolio (think storage, database, analytics and more).

    “Our customers continue to ask us to make it easier for them to run their existing data center investments alongside AWS,” wrote Andy Jassy, CEO, AWS, in today’s announcement. “Most enterprises are already virtualized using VMware, and now with VMware Cloud on AWS, for the first time, it will be easy for customers to operate a consistent and seamless hybrid IT environment using their existing VMware tools on AWS, and without having to purchase custom hardware, rewrite their applications, or modify their operating model.”

    Reply
  26. Tomi Engdahl says:

    AWS, VMWare announce strategic partnership, new hybrid cloud service
    http://www.zdnet.com/article/aws-vmware-go-from-rivals-to-partners/

    Once rivals, Amazon Web Services and VMWare announced a new service running VMware’s software-defined data center on the AWS cloud.

    Once on a clear collision course with one another, Amazon Web Services and VMWare on Thursday announced they’re forming a strategic partnership, starting with a new hybrid cloud service.

    Called VMWare Cloud on AWS, the service runs VMware’s enterprise class software-defined data center (SDDC) on the AWS cloud, allowing customers to run any application across public, private or hybrid cloud environments. VMWare’s vSphere, VSAN and NSX will all run on the AWS cloud, and the service will be optimized to run on dedicated, bare metal AWS infrastructure built specifically for the service.

    The new service offers “the best of both worlds, bringing together that dynamic flexibility combined with enterprise SDDC in a single solution,” VMWare CEO Pat Gelsinger said at a San Francisco event, alongside AWS CEO Andy Jassy.

    Before this partnership, Jassy said, many customers were left with a “binary decision” between using the VMWare software and infrastructure they already rely on or moving to AWS.

    Reply
  27. Tomi Engdahl says:

    Cisco president: One ‘hiccup’ and ‘boom’ – AWS is ‘gone’
    Interest rate rise going to kill public cloud kingpin? Sounds like wishful thinking
    http://www.theregister.co.uk/2016/10/14/aws_cisco_canalys/

    Cisco is the latest member of the technology old guard to take a pop at Amazon Web Services, claiming that the public cloud giant’s financials mean “one hiccup” and it could go bust.

    AWS-bashing is almost becoming an annual feature at Canalys Channels Forum; in 2014 the analyst estimated AWS was losing billions of dollars; and a year later it was all about the economics of the public cloud vendors mirroring a pyramid scheme.

    For the 2016 event in Barcelona, Edwin Paalvast, president of Cisco’s business in Europe, Middle East, Africa and Russia, was asked what he thought about customers relying on AWS.

    “AWS is a gamble,” he told an audience of resellers, distributors and press, “If you really look at their financials they leased everything out, if they keep growing like they are today they’ll win,” he said.

    Conversely, “if they have a hiccup they will be bankrupt”, the exec added.

    According to the most recent financials, AWS reported sales of $2.88bn for calendar Q2 ended 30 June, up 58 per cent year-on-year, and operating income of $718m, up from $305m. It does not reveal net profits or losses.

    The business has benefited from historically low interest rates that helped as it continued to build out data centre infrastructure. Canalys analysts previously claimed that should interest rates rise, AWS would find repaying those debts much harder.

    Paalvast said the “new world” of fast growing businesses like AWS was treated differently by Wall Street moneymen.

    “It is a very different world than we work in because we actually have to show money, and most of the people in the room actually need to turn a profit, and that is not what Amazon’s business is built on.”

    Reply
  28. Tomi Engdahl says:

    Amazon AWS: ‘Hi there!’ VMware: ‘We submit. Please, save us’
    vSphere to be rented out on Jeff Bezos’ cloud
    http://www.theregister.co.uk/2016/10/13/amazon_vmware_unite/

    Amazon Web Services and VMware have agreed to work together to make VMware’s vSphere server virtualization software available on AWS infrastructure.

    At a media event held at the Ritz Carlton in San Francisco on Thursday, Andy Jassy, CEO of AWS, said that in recent years, enterprise customers have been confused about the nature of the hybrid cloud. They wondered whether they had to choose between running applications in their own data centers and running applications in the AWS cloud.

    Pat Gelsinger, CEO of VMware, described the partnership as “the best of both worlds.” He said, “This is the result of our customers telling us what they needed.”

    Reply
  29. Tomi Engdahl says:

    Mary Jo Foley / ZDNet:
    Microsoft to add single sign-on via Skype option for various Microsoft services, including Office, OneDrive, Xbox Live, and Outlook.com, starting this week

    Microsoft to add single sign-on via Skype option for various Microsoft services
    http://www.zdnet.com/article/microsoft-to-add-single-sign-on-via-skype-option-for-various-microsoft-services/

    Microsoft will allow people to use their Skype names to sign into Office, OneDrive, Outlook.com and other Microsoft services starting next week.

    Microsoft is adding an option that will allow people to use their Skype name to sign into other Microsoft services like Office, OneDrive, Outlook.com, and Xbox Live.

    Starting next week, users will have an option to use their Skype name as a single sign-in — in some cases along with an email address — to access these services, officials said on Oct. 18.

    I asked Microsoft what the coming changes mean to those who already have Microsoft Accounts. A spokesperson sent me the following:

    “Starting next week when this capability goes live, if you already have a Microsoft account, we recommend updating it with your Skype account. This lets you access Skype, Office, Xbox and other Microsoft services with a single account. After you update your Skype account to a Microsoft account, you can continue using your Skype Name, with your Microsoft account password to sign in, even for Skype. Please note that you can only update your Skype account to a Microsoft account once. “

    Reply
  30. Tomi Engdahl says:

    Make Any PC A Thousand Dollar Gaming Rig With Cloud Gaming
    http://hackaday.com/2016/10/19/make-any-pc-a-thousand-dollar-gaming-rig-with-cloud-gaming/

    The best gaming platform is a cloud server with a $4,000 graphics card you can rent when you need it.

    [Larry] has done this sort of thing before with Amazon’s EC2, but recently Microsoft has been offering a beta access to some of NVIDIA’s Tesla M60 graphics cards. As long as you have a fairly beefy connection that can support 30 Mbps of streaming data, you can play just about any imaginable game at 60fps on the ultimate settings.

    Cloudy Gamer: Playing Overwatch on Azure’s new monster GPU instances
    http://lg.io/2016/10/12/cloudy-gamer-playing-overwatch-on-azures-new-monster-gpu-instances.html

    It’s no secret that I love the concept of not just streaming AAA game titles from the cloud, but playing them live from any computer – especially on the underpowered laptops I usually use for work. I’ve done it before using Amazon’s EC2 (and written a full article for how to do it), but this time, things are a little different. Microsoft’s Azure is first to give access to NVIDIA’s new M60 GPUs, completely new beasts that really set a whole new bar for framerate and image quality. They’re based on the newer Maxwell architecture, versus the Kepler cards we’ve used in the past. Hopefully one day we’ll get the fancy new Pascal cards :)

    Basically it’ll come down to this: we’re going to launch an Azure GPU instance, configure it for ultra-low latency streaming, and actually properly play Overwatch, a first-person shooter, from a server over a thousand miles away!

    And yes, it seems I always need to repeat myself when writing these articles: the latency is just fine, the resolution is amazing, it can be very cost-effective (as long as you don’t forget to shut the machine down), and all very practical for those of you obsessed about minimalism (like me).

    Costs

    Note that this is NV6 beta pricing – it may change when it becomes generally available. I’ll try to update the article then. Either way, remember, there’s $0 upfront cost here. This contrasts dramatically to the thousands of dollars you’d end up paying for a similarly spec-ed gaming rig.

    NV6 Server: $0.73/hr
    Bandwidth at 10MBit/s: $0.41/hr
    HD storage: $0.003/hr

    Total at 10MBit/s: $1.14/hr
    Total at 30Mbit/s: $1.96/hr (recommended tho)
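    A quick sanity check of those totals, plus what a session costs (NV6 beta pricing as quoted above; assumes the bandwidth charge scales linearly with bitrate):

    nv6_server = 0.73                        # $/hr
    bandwidth_10mbit = 0.41                  # $/hr at 10 Mbit/s
    bandwidth_30mbit = 3 * bandwidth_10mbit  # linear-scaling assumption
    storage = 0.003                          # $/hr

    total_10 = nv6_server + bandwidth_10mbit + storage
    total_30 = nv6_server + bandwidth_30mbit + storage
    print(f"10 Mbit/s: ${total_10:.2f}/hr, 30 Mbit/s: ${total_30:.2f}/hr")   # $1.14 / $1.96
    print(f"Three-hour session at 30 Mbit/s: ${3 * total_30:.2f}")           # about $5.89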

    Azure GPU machines are still in Preview

    Reply
  31. Tomi Engdahl says:

    IBM kills off SoftLayer brand, puts it in the Bluemix
    ♬ ‘Cause tonight is the night when two become one ♬
    http://www.theregister.co.uk/2016/10/25/ibm_kills_off_softlayer_brand_puts_it_in_the_bluemix/

    IBM has started sublimating the SoftLayer brand and will henceforth put its own Bluemix brand front and centre.

    Or as Big Blue puts it “the Bluemix moniker now encompasses Bluemix services and SoftLayer offerings like bare metal servers.”

    If you’re a SoftLayer customer, nothing changes.

    IBM’s also added the ability to sign on to a Bluemix account with an IBM ID.

    Over time all things cloudy and IBM will appear at ibm.com/bluemix and the SoftLayer brand will go away.

    IBM acquired SoftLayer for around US$2bn in 2013. It’s since made the “Watson” analytics service the main feature of an as-a-service push that also sees it offer Cloud Foundry and numerous application templates.

    That’s a powerful proposition that positions IBM handily against AWS, Google and Azure.

    Reply
  32. Tomi Engdahl says:

    The cloud is not new. What we are doing with it is
    Envisioning provisioning
    http://www.theregister.co.uk/2016/10/25/the_cloud_is_not_new_what_we_are_doing_with_it_is/

    Sysadmin blog In the 10 years since the modern form of public cloud computing went mainstream, it has changed the entire industry’s approach to IT. In response, IT’s top vendors have had to change as well. Like any technology, however, the public cloud has adapted, evolved, and become something much different than was ever originally envisioned.

    The public cloud is now a part of the fabric of the IT universe; inseparable from reality. This doesn’t prevent people from denying reality – humanity seems pretty good at that, all things considered – but the combination of near-instant provisioning, self service, scriptability and ease of use is the new normal for IT.

    What’s worth noting is that public cloud adoption wasn’t driven by technology. In many ways public cloud offerings are shockingly inferior to those that can be delivered by on-premises teams.

    Public cloud adoption was driven by human factors. In turn, customer demands transformed the public cloud from what was planned into what it actually became.

    Reply
  33. Tomi Engdahl says:

    Akamai rides on the botnet’s back to US$584 million quarter
    Security biz up, content distribution down
    http://www.theregister.co.uk/2016/10/26/akamai_q3_2016_results/

    Cloud computing security has driven a 6 per cent year-on-year revenue growth for Akamai, up from $US551 million last year to $584 million for Q3 2016.

    The company’s third quarter financial report shows its performance and security business unit turned in $345 million in revenue, 19 per cent higher than for the same quarter in 2015.

    Its cloud security unit shot up 46 per cent year-on-year, from $65 million in Q3 2015 to $95 million.

    Not everybody hates botnets, it seems: CEO Dr Tom Leighton said fighting off DDoS attacks like those that hammered Dyn is “an area where Akamai’s unique architecture and ongoing investments in global scale and security innovation continue to make a critical difference”.

    Reply
  34. Tomi Engdahl says:

    Jordan Novet / VentureBeat:
    Amazon Web Services revenue increases 55% YoY to $3.23B as operating profit reaches $861M — Ecommerce company Amazon.com today said that its Amazon Web Services (AWS) public cloud computing infrastructure division generated $3.23 billion in revenue in the third quarter of the year.

    AWS reports $3.2 billion in revenue in Q3 2016, up 55% over last year
    http://venturebeat.com/2016/10/27/aws-reports-3-2-billion-in-revenue-in-q3-2016-up-55-over-last-year/

    Ecommerce company Amazon.com today said that its Amazon Web Services (AWS) public cloud computing infrastructure division generated $3.23 billion in revenue in the third quarter of the year. That means revenue was up 54.9 percent year over year.

    AWS produced $861 million in operating income for the quarter, according to today’s earnings statement. The business unit had $2.21 billion in operating expenses.

    Reply
  35. Tomi Engdahl says:

    Steve Lohr / New York Times:
    IBM execs say years of huge investments in Watson, which employs 10K people, are yielding profitable opportunities in markets like healthcare and manufacturing

    IBM Is Counting on Its Bet on Watson, and Paying Big Money for It
    http://www.nytimes.com/2016/10/17/technology/ibm-is-counting-on-its-bet-on-watson-and-paying-big-money-for-it.html?_r=0

    Reply
  37. Tomi Engdahl says:

    Red Hat CEO: Linux Is Now The ‘Default Choice’ For The Cloud
    https://linux.slashdot.org/story/16/10/30/0046248/red-hat-ceo-linux-is-now-the-default-choice-for-the-cloud

    Speaking at the “All Things Open” conference, Red Hat CEO Jim Whitehurst remembered when Linux “was just a ‘bunch of geeks’ getting together figuring it all out on an 8286 chip” 25 years ago.

    “It went from being kind of a hacker movement to truly what I’ll say [is] a viable alternative to traditional software,” Whitehurst says, adding that Red Hat was a part of that push. Over the years, it came out from under the radar, being what Whitehurst calls “the default choice for a next-generation of infrastructure”.

    Red Hat CEO: ‘All things open’ isn’t just about technology
    http://www.bizjournals.com/triangle/news/2016/10/26/red-hat-ceo-all-things-open-isnt-just-about.html

    Jim Whitehurst, CEO of Raleigh-based open-source technology firm Red Hat (NYSE: RHT), says the world – not just technology companies – is shifting toward “open.”

    “It went from being kind of a hacker movement to truly what I’ll say [is] a viable alternative to traditional software,” Whitehurst says, adding that Red Hat was a part of that push. Over the years, it came out from under the radar, being what Whitehurst calls “the default choice for a next-generation of infrastructure,” particularly when it comes to cloud architectures. “Companies are competing around communities.”

    He points to Google, Microsoft and Facebook, all having open sourced their machine learning systems.

    “They recognize the company that builds the community around that piece of technology, that technology is going to win,” Whitehurst says. “I think it shows that there’s a growing recognition that the best way to innovate in these very fundamental areas is to do it in the open.”

    Whitehurst says “open” isn’t just about software.

    “We know that bureaucracies, hierarchies, are really good at driving efficiency,” he says. “They’re not good at innovating.”

    Reply
  38. Tomi Engdahl says:

    Dan Richman / GeekWire:
    Report: AWS has 45% share of worldwide public Infrastructure as a Service market, more than Microsoft, Google, and IBM combined — Amazon Web Services holds a 45 percent share of the worldwide public market for Infrastructure as a service (IaaS) — greater than Microsoft …

    Study: AWS has 45% share of public cloud infrastructure market — more than Microsoft, Google, IBM combined
    http://www.geekwire.com/2016/study-aws-45-share-public-cloud-infrastructure-market-microsoft-google-ibm-combined/

    Amazon Web Services holds a 45 percent share of the worldwide public market for Infrastructure as a service (IaaS) — greater than Microsoft, Google and IBM’s shares combined, according to a quarterly analysis by Synergy Research Group.

    AWS also leads in the platform-as-a-service (PaaS) market, though by a narrower margin, Synergy found. Only in the much smaller realm of the managed private cloud does AWS yield to market-leader IBM.

    Microsoft and Google are each growing their cloud revenue at over 100 percent per year. Still, AWS remains twice the size of those companies, plus IBM, when it comes to IaaS, Synergy estimated.

    AWS posted $3.2 billion in revenue last quarter

    Reply
  39. Tomi Engdahl says:

    Microsoft to open two Azure data centers solely for US DoD
    http://www.cablinginstall.com/articles/pt/2016/10/microsoft-to-open-two-azure-data-centers-solely-for-us-dod.html?cmpid=enl_CIM_CablingInstallationMaintenanceDataCenterNewsletter_2016-11-01&eid=289644432&bid=1573930

    Whether it’s certifications that guarantee patient privacy for healthcare providers or security certifications for law enforcement agencies, Microsoft’s Azure and Azure Government are getting more and more government contracts.

    Reply
  40. Tomi Engdahl says:

    Cisco: This $200k UCS S-Series is cheaper than AWS S3 after 13 months
    Allegedly
    http://www.theregister.co.uk/2016/11/02/cisco_builds_storage_server_cheaper_to_own_than_amazon_storage/

    Cisco has designed a storage server that it claims is 56 per cent cheaper over three years than paying out for Amazon’s S3 service. The networking giant also reckons it’s the first fully modular server architecture in the industry.

    The S-Series is designed for data intensive workloads such as big data, streaming media and collaboration applications, and for deploying software-defined storage, object storage, and data protection solutions. Cisco says the boxes will try to access and analyze data quickly to generate results in real time, with unstructured data coming from sources such as the Internet of Things, video, mobility, and collaboration.

    Applications processing the data could be recommendation engines, video analytics, diagnostic imaging, streaming analytics, and machine learning. The concept is to analyze the data as close to its arrival as possible, and before it gets punted off to back-end storage.
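
    Cisco’s claim is essentially a break-even calculation: a large up-front capital cost plus running costs against a recurring per-GB rental. The sketch below is not Cisco’s TCO model; the capacity, monthly running cost and S3 price are illustrative assumptions chosen only to show the shape of the comparison:

        # Hypothetical break-even of a ~$200k storage server vs renting object storage.
        # All inputs are illustrative assumptions, not Cisco's or Amazon's actual figures.
        CAPEX = 200_000.0        # up-front hardware cost, $
        MONTHLY_OPEX = 2_500.0   # assumed power/space/admin per month, $
        USABLE_TB = 600.0        # assumed usable capacity, TB
        S3_PER_GB_MONTH = 0.03   # assumed blended object-storage price, $/GB-month

        s3_monthly = USABLE_TB * 1000 * S3_PER_GB_MONTH  # same capacity rented per month

        month, onprem_total, s3_total = 0, CAPEX, 0.0
        while onprem_total > s3_total:
            month += 1
            onprem_total += MONTHLY_OPEX
            s3_total += s3_monthly
        print(f"Break-even after ~{month} months "
              f"(on-prem ${onprem_total:,.0f} vs rented ${s3_total:,.0f})")

    With these made-up inputs the crossover lands at around 13 months, which is the kind of arithmetic behind the headline claim; the real answer depends heavily on how full the box actually is, on staffing, and on data-egress patterns.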

    Reply
  41. Tomi Engdahl says:

    Windows Server-as-a-service: Microsoft lays out Server 2016’s future
    Want to use Nano Server? Software Assurance is non-optional
    http://www.theregister.co.uk/2016/07/12/microsofts_windows_server_2016_release_plans/

    Microsoft has released details of how Windows Server 2016 will be released and maintained, and as with Windows 10 it includes a “Windows as a service” model of frequent operating system updates.

    Windows Server 2016 will be launched at the company’s Ignite conference, which runs from September 26 to 30 in Atlanta, Georgia. As with the current release, there will be three editions: Datacenter, Standard and Essentials.

    Several things are new, though. One is that Windows Server 2016 will be priced and licensed per core, rather than per physical processor.

    Another is that the Datacenter and Standard editions have a new installation option called Nano Server, which is a stripped-down version designed for lightweight virtual machines, or a low-overhead host for virtual machines. Nano Server has no GUI and can only be managed remotely.

    There are also changes to the way Windows Server is serviced. Datacenter and Standard can be installed either as Long-Term Servicing Branch (LTSB) – with five years of mainstream support and five years of extended support – or as Current Branch for Business (CBB), in which case you can expect feature updates two or three times a year. These terms are familiar from Microsoft’s Windows 10 release, which follows a similar pattern.
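
    To see what per-core licensing means in practice, here is a small sketch. The two-core packs and the commonly reported minimums (8 core licences per processor, 16 per server) reflect Microsoft’s published licensing model as widely reported; the pack price below is a placeholder, not a real list price:

        # Rough Windows Server 2016 per-core licence count. Widely reported rules:
        # licences sold in 2-core packs, minimum 8 cores/processor and 16 cores/server.
        # The pack price is a placeholder for illustration only.
        PACK_PRICE = 100.0  # hypothetical $ per 2-core pack

        def licensed_cores(sockets, cores_per_socket):
            per_cpu = max(cores_per_socket, 8)   # at least 8 core licences per CPU
            return max(sockets * per_cpu, 16)    # and at least 16 per server

        for sockets, cores in [(1, 8), (2, 10), (2, 22)]:
            need = licensed_cores(sockets, cores)
            packs = need // 2
            print(f"{sockets} x {cores}-core CPUs -> {need} core licences, "
                  f"{packs} packs, ${packs * PACK_PRICE:,.0f}")

    The practical upshot, as widely reported at the time, is that a typical two-socket, 16-core box costs about the same as under the old per-processor model, while denser many-core servers pay more.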

    Reply
  42. Tomi Engdahl says:

    IBM Bluemix to offer Intel 3D XPoint-powered cloud in late 2017
    First comes a cloud testbed so we can figure out what non-volatile memory is good for
    http://www.theregister.co.uk/2016/11/04/ibm_bluemix_to_offer_3d_xpointpowered_cloud_in_late_2017/

    IBM has quietly revealed that “in the second half of 2017” its Bluemix cloud will offer “a broad services suite fuelled by Intel Optane”.

    “Optane” is Intel’s official name for 3D XPoint, its non-volatile memory that’s faster than NAND Flash, persistent and therefore a very interesting alternative to both random access memory and mass storage media. Intel has said it will derive revenue from Optane sales this year, but shipments aren’t expected to flow until early 2017.

    Reply
  43. Tomi Engdahl says:

    AWS: We’re gonna make mobile apps great again with Lambda functions
    The best apps will come from America, using Amazon, which will be paying all that tax
    http://www.theregister.co.uk/2016/11/10/aws_mobile_apps_serverless/

    Amazon Web Services is rolling out new Mobile Hub features aimed at simplifying the development of secure mobile apps.

    The cloud giant says that its Cloud Logic feature will now let developers create Lambda functions specifically for mobile apps and integrate them with AWS’s API Gateways. This, Amazon claims, should allow for serverless mobile apps to be easily created and tested. Obviously, there will be a server or two involved in the backend but mobile app devs don’t have to worry about setting one up and running it – their code just talks to the API gateway to perform cloud-based processing.

    “With Mobile Hub, you don’t have to be an AWS expert to begin using its powerful backend features in your app,” blogged Amazon’s Vyom Nagrani.

    “Mobile Hub then provisions and configures the necessary AWS services on your behalf and creates a working quickstart app for you.”

    This speeds up the process of developing mobile apps that make use of both serverless functions and APIs, in theory.

    Additionally, AWS says it is adding support for an email-and-password login system in mobile apps via the Cognito account management tool, as well as integration with SAML-based identity providers.

    When used together, Amazon believes that the Cloud Logic, email and password login, and SAML support will allow developers to add support for secure mobile logins to their cloud apps – either those hosted in public cloud or a virtual private cloud – with the ability to choose what sign-in method (such as Google or Facebook login) will be offered to end users.
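
    The “serverless mobile backend” pattern described here boils down to a Lambda function sitting behind an API Gateway endpoint that the app calls over HTTPS. Below is a minimal Python sketch of such a handler; the payload fields and business logic are invented for illustration, but the handler signature and the statusCode/body response shape are what API Gateway’s Lambda proxy integration expects:

        import json

        def lambda_handler(event, context):
            """Hypothetical Cloud Logic backend for a mobile app, invoked via API Gateway.

            With the proxy integration the request body arrives as a JSON string in
            event["body"], and the return value must carry a statusCode and a string body.
            """
            body = json.loads(event.get("body") or "{}")
            user = body.get("user", "anonymous")

            # ... the actual cloud-side processing (database lookups, business rules) ...
            result = {"message": f"hello, {user}", "items": []}

            return {
                "statusCode": 200,
                "headers": {"Content-Type": "application/json"},
                "body": json.dumps(result),
            }

    The point of Mobile Hub is that the API Gateway route, the permissions and the Cognito-backed sign-in around a function like this are provisioned for you rather than wired up by hand.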

    Reply
  44. Tomi Engdahl says:

    Google BigQuery TITSUP caused by failure to scale-yer workloads
    Engineers went head down, bum up but zipped lips gave users the … heebie jeebies
    http://www.theregister.co.uk/2016/11/14/google_bigquery_outage/

    A four-hour outage of Google’s BigQuery enterprise data warehouse has taught the cloud aspirant two harsh lessons: its cloud doesn’t always scale as well as it would like, and; it needs to explain itself better during outages.

    The Alphabet subsidiary’s trouble started last Tuesday when a surge in demand for the BigQuery authorization service “caused a surge in requests … exceeding their current capacity.” Or as we like to say here at El Reg, it experienced a Total Inability To Support Usual Performance and went TITSUP.

    As Google explains, “The BigQuery streaming service requires authorization checks to verify that it is streaming data from an authorized entity to a table that entity has permissions to access.” There’s a cache between the authorization service and its backend, but “because BigQuery does not cache failed authorization attempts, this overload meant that new streaming requests would require re-authorization, thereby further increasing load on the authorization backend.”

    As authorization requests piled up, the strain on the already-stressed authorization backend meant “continued and sustained authorization failures which propagated into streaming request and query failures.”

    Google’s now figured out that its cache wasn’t big enough and that the authorization backend lacked capacity.
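
    The failure mode Google describes, where failed authorization checks were never cached and so every retry hit the already overloaded backend again, is the textbook case for negative caching. Here is a small, generic sketch of the idea; it is not Google’s implementation, and the TTLs and the backend call are placeholders:

        import time

        class AuthCache:
            """Cache successful AND failed auth results, with a short TTL on failures,
            so a struggling backend is not re-asked the same doomed question on every retry."""

            def __init__(self, check_backend, ok_ttl=300.0, fail_ttl=10.0):
                self._check = check_backend   # callable(entity, table) -> bool
                self._ok_ttl = ok_ttl
                self._fail_ttl = fail_ttl
                self._entries = {}            # (entity, table) -> (allowed, expires_at)

            def is_authorized(self, entity, table):
                key = (entity, table)
                hit = self._entries.get(key)
                if hit and hit[1] > time.monotonic():
                    return hit[0]                         # cached, success or failure
                allowed = self._check(entity, table)      # only now touch the backend
                ttl = self._ok_ttl if allowed else self._fail_ttl
                self._entries[key] = (allowed, time.monotonic() + ttl)
                return allowed

    Caching failures for even a few seconds turns a retry storm into a handful of backend calls, which is essentially the gap Google says it has now closed, along with giving the cache and the backend more capacity.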

    Reply
  45. Tomi Engdahl says:

    GitLab to dump cloud for its own bare metal Ceph boxen
    ‘In the long run, it will be more efficient, consistent, and reliable’. And it’s cheaper, too
    http://www.theregister.co.uk/2016/11/14/gitlab_to_dump_cloud_for_its_own_bare_metal_ceph_boxen/

    Git repository manager and developer playground GitLab has decided it is time to quit the cloud, joining Dropbox in concluding that at a certain scale the cloud just can’t do the job.

    GitLab came to the decision after moving to the Ceph Filesystem, the new-ish filesystem that uses a cluster running the Ceph objects-and-blocks-and-files storage platform.

    As GitLab’s infrastructure lead Pablo Carranza explains, Ceph FS “needs to have a really performant underlaying infrastructure because it needs to read and write a lot of things really fast.”

    Dropbox slips 500PB into its Magic Pocket, not spread over AWS
    Shifts 90% of your files from Amazon to in-house systems
    http://www.theregister.co.uk/2016/03/14/dropbox_moves_data_from_aws/

    Reply
  46. Tomi Engdahl says:

    Microsoft reveals Neandercloud / Cloud Sapiens co-existence and cross-breeding plan
    Azure Pack gets six more years of active development and support until 2027
    http://www.theregister.co.uk/2016/11/15/azure_pack_lifecycle_extension/

    Microsoft’s revealed that it will keep working on its first Azure-in-a-box product, Azure Pack, until 2022 and support it until 2027.

    Azure Pack offers a cloud-like experience thanks to its use of the first-generation Azure GUI, but under the hood is Windows Server and System Center. The Pack can run on just about any x86 and storage you fancy.

    The forthcoming Azure Stack, by contrast, will only work on specified and locked down hardware, will replicate the current Azure experience on-premises and is designed for users keen to hop aboard hyperconverged and hybrid cloud bandwagons.

    Back to the Pack, which can now run Windows Server 2016 and will, Microsoft promises, “continue to evolve until 2022 and will be supported until 2027.” Redmond’s not saying how much evolution is left in the platform, but does say it should be fit for use by service providers offering infrastructure-as-a-service.

    Reply
  47. Tomi Engdahl says:

    Packet.net strong-ARMs cloud for $0.005 per core per hour
    96-core servers packing 2 Cavium ThunderX CPUs yours for the crunching
    http://www.theregister.co.uk/2016/11/15/packet_dotnet_arm_cloud/

    Packet.net, a bare-metal cloud aimed at developers, has flicked the switch on cloud-running servers powered by a pair of Cavium’s 48-core ARMv8-A ThunderX processors.

    CEO Zachary Smith told The Register that the company’s cooked up the cloud for a few reasons. Price is one: Packet will offer ARM cores at a tenth of the price it charges for Intel cores, at US$0.50 per hour per server, or $0.005 per core per hour. Smith thinks that will be a head-turner by itself.

    He also thinks developers will appreciate the chance to try native Docker on many-cored machines and appreciate the opportunity an ARM-powered cloud represents as they pursue 100 per cent portable software. He believes open source folk will see the arrival of an ARM-powered cloud as incentive to accelerate cross-platform versions of their pet projects.

    Even ARM will benefit, he says, because having a working cloud on the market will give both it and licensees more reason to innovate for the data centre.

    ARM’s recent purchaser, SoftBank, recently tipped some money into Packet.net, but Smith swears he’s had a long-term ambition to offer an ARM-powered cloud, if only because he enjoys having multiple ARM server CPU vendors willing to do deals. That kind of competition is not currently possible in the x86 world, at least until AMD returns to servers in 2017.

    Reply
  48. Tomi Engdahl says:

    Jordan Novet / VentureBeat:
    Google announces new AI group, to be headed by former head of Stanford’s Artificial Intelligence Lab Fei-Fei Li and former head of research at Snapchat, Jia Li

    Google Cloud is launching GPU-backed VM instances early in 2017
    http://venturebeat.com/2016/11/15/google-cloud-is-launching-gpu-backed-vm-instances-early-in-2017/

    Competitors Amazon Web Services (AWS), IBM SoftLayer, and Microsoft Azure have launched GPU-backed instances in the past. Google is looking to stand out by virtue of its per-minute billing, rather than per-hour, and its variety of GPUs available: the Nvidia Tesla P100 and Tesla K80 and the AMD FirePro S9300 x2.

    This cloud infrastructure can be used for a type of artificial intelligence (AI) called deep learning. It’s in addition to Google’s custom-made tensor processing units (TPUs), which will be powering Google’s Cloud Vision application programming interface (API). The joint availability of GPUs and TPUs should send a signal that Google doesn’t see TPUs as being a one-to-one alternative to GPUs.

    Also today Google announced the formation of a new Cloud Machine Learning group. Google cloud chief Diane Greene named the two leaders of the group: Jia Li, the former head of research at Snapchat, and Fei-Fei Li, the former head of Stanford’s Artificial Intelligence Lab and also the person behind the ImageNet image recognition data set and competition. As Greene pointed out, both of the leaders are women, and also respected figures in the artificial intelligence field.

    Reply
  49. Tomi Engdahl says:

    Google Cloud will finally add GPU services in early 2017
    http://www.geekwire.com/2016/google-cloud-will-belatedly-add-gpu-services-early-2017/

    Surprisingly late, Google Cloud will add GPUs (graphics processing units) as a service early next year, according to a blog post today. Amazon Web Services, Microsoft Azure and IBM’s Bluemix all already offer GPU as a service.

    Google may be seeking to distinguish itself, however, with the variety of GPUs it’s offering. They include the AMD FirePro S9300 x2 and two offerings from NVIDIA Tesla: the P100 and the K80. And Google will charge by the minute, not by the hour, making GPU usage more affordable for customers needing it only for short periods.
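
    Per-minute billing mostly matters for short, bursty GPU jobs. A quick illustration, with a made-up hourly rate rather than a published Google price, of what each billing granularity would charge:

        import math

        HOURLY_RATE = 0.90  # hypothetical $/hr for one cloud GPU, not a real list price

        def per_hour_billing(minutes):
            return math.ceil(minutes / 60) * HOURLY_RATE   # partial hours round up

        def per_minute_billing(minutes):
            return minutes * (HOURLY_RATE / 60)            # pay only for minutes used

        for m in (5, 70, 125):
            print(f"{m:>3} min job: per-hour ${per_hour_billing(m):.2f}, "
                  f"per-minute ${per_minute_billing(m):.2f}")

    For a five-minute experiment the hourly biller charges a full hour, so the shorter and more frequent the jobs, the bigger the gap in the per-minute model’s favour.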

    Reply
  50. Tomi Engdahl says:

    Oracle acquires DNS provider Dyn, subject of a massive DDoS attack in October
    https://techcrunch.com/2016/11/21/oracle-acquires-dns-provider-dyn-subject-of-a-massive-ddos-attack-in-october/

    A timely, if a little surprising, piece of M&A this morning from Oracle: the enterprise services company announced that it has acquired Dyn, the popular DNS provider that was the subject of a massive distributed denial of service attack in October that crippled some of the world’s biggest and most popular websites.

    Oracle plans to add Dyn’s DNS solution to its bigger cloud computing platform, which already sells/provides a variety of Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) products and competes against companies like Amazon’s AWS.

    Reply
