Who's who of cloud market

Seemingly every tech vendor has a cloud strategy, with new products and services dubbed “cloud” coming out every week. But who are the real market leaders in this business? Research firm Gartner’s answer lies in its Magic Quadrant report for the infrastructure as a service (IaaS) market, presented in the Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article.

It is interesting that missing from this quadrant figure are big-name companies that have invested a lot in the cloud, including Microsoft, HP, IBM and Google. The reason is that the report only includes providers whose IaaS clouds were in general availability as of June 2012 (Microsoft, HP and Google had clouds in beta at the time).

Gartner reinforces what many in the cloud industry believe: Amazon Web Services is the 800-pound gorilla. But Gartner also found one big minus for AWS: a “weak, narrowly defined” service-level agreement (SLA), which requires customers to spread workloads across multiple availability zones. AWS was not the only provider whose SLA details drew criticism.

Read the whole Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article to see Gartner’s view of the cloud market today.

1,065 Comments

  1. Tomi Engdahl says:

    Oracle and Xamarin flutter eyelashes at suits with native app deal
    Come hither, big boys, and C# what we’ve got for you
    http://www.theregister.co.uk/2015/07/09/oracle_xamarin_mobile_cloud/

    Oracle is tapping into the power of native apps and cloud delivery under a development deal with mobile app firm Xamarin.

    The pair unveiled the Xamarin SDK for Oracle Mobile Cloud Service on Thursday, to build apps for iOS, Android and Windows.

    Xamarin’s tools let you build native apps for the different mobile flavours using Microsoft’s C# language.

    The deal gives Oracle access to a massive pool of a million devs in the Xamarin community working in C# and Java who are building enterprise and custom apps for different mobile platforms.

    For Xamarin the technology collaboration opens the door on more than 100,000 Oracle enterprise customers.

    Oracle, as with all things cloud, is late to the party, with the Mobile Cloud announced at the company’s annual OpenWorld conference in late 2014.

    The agreement with Xamarin gives Oracle’s mobile ambitions a potential fillip.

    The secret is Xamarin’s ability to let devs write native apps for different vendors’ mobile devices using the tools and languages they might already know without having to go native for each platform.

    Xamarin offers not only its own development environment but also a plug-in for Microsoft’s Visual Studio, arguably the default business apps dev suite.

    Reply
  2. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Amazon Launches AWS Device Farm, Lets Developers Test Android And Fire OS Apps On Real Devices
    http://techcrunch.com/2015/07/09/amazon-launches-aws-device-farm-lets-developers-test-android-and-fire-os-apps-on-real-devices/

    Starting next week, Android and Fire OS developers will be able to use a new cloud-based service from Amazon to test their apps on physical smartphones and tablets. The AWS Device Farm will allow developers to upload their apps and test them on “the most commonly used mobile devices across a continually expanding fleet that includes the latest device/OS combination,” Amazon says. Sadly, the company didn’t say how many devices we are actually talking about.

    If this concept sounds familiar, it may be because a number of other companies offer very similar services already. Google announced Cloud Test Lab at its I/O developer conference a few weeks ago, for example (though it won’t launch until later this summer), and Xamarin has already been offering its Test Cloud service (with support for about 1,600 devices) since 2013.

    Reply
  3. Tomi Engdahl says:

    AWS Official Blog
    Amazon API Gateway – Build and Run Scalable Application Backends
    https://aws.amazon.com/blogs/aws/amazon-api-gateway-build-and-run-scalable-application-backends/?sc_campaign=launch&sc_category=api_gateway&sc_channel=SM&sc_content=summit_launch&sc_detail=std&sc_medium=aws&sc_publisher=tw_go&adbsc=social_launches_20150709_48897386&adbid=619168902879145984&adbpl=tw&adbpr=66780587

    I like to think of infrastructure as the part of a system that everyone needs and no one likes to work on! It is often undifferentiated & messy, tedious to work on, difficult to manage, critical to the success of whatever relies on it, and generally taken for granted (as long as it works as expected).

    Many of our customers host backend web services for their mobile, web, enterprise, or IoT (Internet of Things) applications on AWS. These services have no user interface. Instead, they are accessed programmatically, typically using a REST-style interface. In order to successfully host an application backend you need to think about the infrastructure: authorization, access control, traffic management, monitoring, analytics, and version management. None of these tasks are easy, and all count as infrastructure. In many cases you also need to build, maintain, and distribute SDKs (Software Development Kits) for one or more programming languages. Put it all together, and the amount of code and resources (not to mention head-scratching) devoted to the infrastructure for web services can dwarf the actual implementation of the service. Many of our customers have told us that they would like to make investments in web services, but have little interest in building or maintaining the infrastructure for them due to the cost and complexity involved.

    New API Gateway
    Today we are introducing the new Amazon API Gateway. This new pay-as-you-go service allows you to quickly and easily build and run application backends that are robust and scalable. Instead of worrying about the infrastructure, you can focus on your services.

    The API Gateway makes it easy for you to connect all types of applications to API implementations that run on AWS Lambda, Amazon Elastic Compute Cloud (EC2), or a publicly addressable service hosted outside of AWS. If you use Lambda (I’ll show you how in just a moment), you can implement highly scalable APIs that are totally server-less.

    You can also implement APIs that wrap around, enhance, and effectively modernize legacy systems.
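
    As a rough illustration of wiring an API to a Lambda function, here is a minimal boto3 sketch; the API name, function ARN and region are hypothetical placeholders, not taken from the AWS post, and a real setup would also configure method/integration responses and grant API Gateway permission to invoke the function.

        import boto3

        apigw = boto3.client('apigateway', region_name='us-east-1')

        # Create the REST API and locate its root ('/') resource.
        api = apigw.create_rest_api(name='orders-api')  # hypothetical name
        root_id = apigw.get_resources(restApiId=api['id'])['items'][0]['id']

        # Add an /orders resource with an unauthenticated GET method (for brevity).
        res = apigw.create_resource(restApiId=api['id'], parentId=root_id, pathPart='orders')
        apigw.put_method(restApiId=api['id'], resourceId=res['id'],
                         httpMethod='GET', authorizationType='NONE')

        # Point the method at an existing Lambda function via an AWS integration.
        lambda_arn = 'arn:aws:lambda:us-east-1:123456789012:function:listOrders'  # placeholder
        apigw.put_integration(
            restApiId=api['id'], resourceId=res['id'], httpMethod='GET',
            type='AWS', integrationHttpMethod='POST',
            uri='arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/'
                + lambda_arn + '/invocations')

        # Deploy to a stage; the API then becomes reachable at
        # https://<api-id>.execute-api.us-east-1.amazonaws.com/prod/orders
        apigw.create_deployment(restApiId=api['id'], stageName='prod')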

    The API Gateway was designed to deliver on the following promises:

    Scalable & Efficient – Handle any number of requests per second (RPS) while making good use of system resources.
    Self-Service & Highly Usable – Allow you to define, revise, deploy, and monitor APIs with a couple of clicks, without requiring specialized knowledge or skills, including easy SDK generation.
    Reliable – Allow you to build services that are exceptionally dependable, with full control over error handling, including customized error responses.
    Secure – Allow you to take advantage of the latest AWS authorization mechanisms and IAM policies to manage your APIs and your AWS resources.
    Performant – Allow you to build services that are globally accessible (via CloudFront) for low latency access, with data transfer to the backend over the AWS network.
    Cost-Effective – Allow you to build services that are economical to run, with no fixed costs and pay-as-you-go pricing.

    Available Now
    The Amazon API Gateway is available today in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions and you can start using it today.

    The pricing model is simple. You pay for calls to the API and for outbound data transfer (the information returned by your APIs). Caching is priced separately, and the price depends on the size of the cache that you configure.

    Reply
  4. Tomi Engdahl says:

    Microsoft tries to paint VMware azure with disaster recover detour
    Redmond’s cloudy DR can now handle Virtzilla-styled VMs
    http://www.theregister.co.uk/2015/07/10/microsoft_tries_to_paint_vmware_azure_with_disaster_recover_detour/

    Microsoft has just taken a swipe at VMware’s young cloud business.

    VMware markets that effort, vCloud Air, as the perfect cloud for VMware users, because it just looks like an extension of vSphere. Spinning up servers in the cloud, or shunting workloads around between servers, works just like doing those chores in your own bit barn. Virtzilla also offers a disaster recovery (DR) service in vCloud Air, because DR is such a blindingly obvious application of the cloud but also because the end-to-end vSphere story works well there too: if you’re going to fail over you might as well fail over into the same environment you already operate.

    Microsoft makes pretty much the same arguments when chatting to Windows users about its Azure Site Recovery (ASR) service, which shunts on-premises VMs into Azure, keeps them in sync and allows failover to cloudy operations if your bit barn borks. But now it’s making the same pitch to VMware users by making it possible for Virtzilla-styled VMs to sync to, and run inside, Azure.

    Reply
  5. Tomi Engdahl says:

    AWS opens gate to fondleslabs-as-a-service farm
    Sorry, devs, you just lost a reason to buy one of every phone and tablet you fancy
    http://www.theregister.co.uk/2015/07/10/aws_opens_gate_to_fondleslabsasaservice_farm/

    Amazon Web Services (AWS) has opened a farm in which it hopes developers will loose their code to graze on lush fields of myriad devices.

    Enough with the rural metaphor: the “Device Farm” is a smartphones-and-tablets-as-a-service offering. The idea is that developers today have to run emulators galore, or buy an awful lot of gadgets, in order to test mobile apps. AWS is kindly offering to make that easier – for US$0.17 per “device minute” or $250 per device per month – by letting developers upload code to devices it operates. The service includes testing tools like Appium, Calabash, and Espresso and spits out reports once they’ve run.

    AWS hasn’t said how many devices it has, but has revealed its own Fire devices are in the farm. Apple’s aren’t: this is an Android-only farm for now.

    The farm opens on July 13th. When it does, there’s 250 free device minutes on offer to all comers.
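
    For a sense of how the farm is driven programmatically, here is a hedged boto3 sketch (the project and device pool ARNs and file names are placeholders, not real values): it registers an Android APK upload, pushes the file to the pre-signed URL Device Farm returns, then schedules a run of the built-in fuzz test.

        import boto3
        import requests

        # Device Farm is served out of the us-west-2 region.
        df = boto3.client('devicefarm', region_name='us-west-2')

        project_arn = 'arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE'  # placeholder
        device_pool_arn = 'arn:aws:devicefarm:us-west-2::devicepool:EXAMPLE'        # placeholder

        # Register the upload, then PUT the APK to the pre-signed URL.
        upload = df.create_upload(projectArn=project_arn, name='app.apk',
                                  type='ANDROID_APP')['upload']
        with open('app.apk', 'rb') as f:
            requests.put(upload['url'], data=f)

        # In practice, poll get_upload() until the upload status is SUCCEEDED,
        # then kick off a run against the chosen device pool.
        run = df.schedule_run(projectArn=project_arn, appArn=upload['arn'],
                              devicePoolArn=device_pool_arn, name='smoke-test',
                              test={'type': 'BUILTIN_FUZZ'})['run']
        print(run['arn'], run['status'])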

    Reply
  6. Tomi Engdahl says:

    Build An Amazon EC2 Gaming Rig
    http://hackaday.com/2015/07/10/build-an-amazon-ec2-gaming-rig/

    PC gaming is better than console gaming. Now that we’ve said something controversial enough to meet the comment quota for this post, let’s dig into [Larry]’s Amazon EC2 gaming rig.

    If you have enough bandwidth and a low enough ping, you can replicate just about everything as an EC2 instance.

    [Larry] is using a Windows Server 2012 AMI with a single NVIDIA GRID K520 GPU in his instance. After getting all the security, firewall, and other basic stuff configured, it’s just a matter of installing a specific driver for an NVIDIA Titan. With Steam installed and in-home streaming properly configured it’s time to game.

    The performance [Larry] is getting out of this setup is pretty impressive. It’s 60fps, but because he’s streaming all his games to a MacBook Air, he’ll never get 1080p.

    The first version of [Larry]’s cloud-based gaming system was about $0.54 per hour. For the price of a $1000 battle station, that’s about 1900 hours of gaming, and for the price of a $400 potato, that’s 740 hours of gaming.

    Revised and much faster, run your own high-end cloud gaming service on EC2!
    http://lg.io/2015/07/05/revised-and-much-faster-run-your-own-highend-cloud-gaming-service-on-ec2.html

    Playing Witcher 3, a GPU-intensive game on a 2015 fanless Macbook

    I’ve written about using EC2 as a gaming rig in the past. After spending some time and getting all sorts of feedback from many people, I’m re-writing the article from before, except with all the latest and greatest optimizations to really make things better.

    Costs

    Believe it or not, it’s actually not that expensive to play games this way. Although you could potentially save money by streaming all your games, cost savings isn’t really the primary purpose. Craziness is, of course. :)

    $0.11/hr Spot instance of a g2.2xlarge
    $0.41/hr Streaming 10mbit at $0.09/GB
    $0.01/hr EBS storage for 35GB OS drive ($3.50/mo)

    You’re looking at $0.53/hr to play games this way. Not too bad.
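
    The spot-instance part is plain EC2 plumbing. Here is a hedged boto3 sketch of bidding for a g2.2xlarge (the AMI ID, key pair, security group and bid price below are placeholders and examples, not [Larry]’s actual values):

        import boto3

        ec2 = boto3.client('ec2', region_name='us-east-1')

        # Bid for a single g2.2xlarge spot instance; the request stays open
        # until it is fulfilled or cancelled.
        resp = ec2.request_spot_instances(
            SpotPrice='0.20',                        # example bid, above the ~$0.11/hr spot price
            InstanceCount=1,
            LaunchSpecification={
                'ImageId': 'ami-12345678',           # placeholder: your Windows gaming AMI
                'InstanceType': 'g2.2xlarge',
                'KeyName': 'my-key',                 # placeholder
                'SecurityGroupIds': ['sg-12345678'], # placeholder: open RDP/streaming ports
            })
        print(resp['SpotInstanceRequests'][0]['SpotInstanceRequestId'])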

    Using the pre-made AMI

    Let’s face it, following all of the stuff above is a long, tedious process. Though it’s actually quite interesting how everything works, I’m sure you just want to get on the latest GTA pronto. As such I’ve made an AMI with everything above, including the optimizations.

    Reply
  7. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Windows Server On Google Compute Engine Hits General Availability — It still feels like an odd combination, but Google today announced that Windows Server support on Google’s Compute Engine platform has now hit general availability. With this, Cloud Engine users are now covered …

    Windows Server On Google Compute Engine Hits General Availability
    http://techcrunch.com/2015/07/14/windows-server-on-google-compute-engine-hits-general-availability/

    It still feels like an odd combination, but Google today announced that Windows Server support on Google’s Compute Engine platform has now hit general availability. With this, Cloud Engine users are now covered by Google’s Compute Engine SLA when they run their applications on Windows Server 2012 R2 and the older Windows Server 2008 R2.

    Support for the upcoming Windows Server 2016 release and the stripped-down Nano Server is already in the works, too.

    This also means developers can now use Google’s platform to run their Active Directory, SQL Server, SharePoint, Exchange and ASP.NET servers. Google offers Microsoft License Mobility for its platform, so Microsoft customers can move their existing software licenses from their on-premise deployments to Google’s cloud without having to pay any additional licensing fees.

    Reply
  8. Tomi Engdahl says:

    Emil Protalinski / VentureBeat:
    You can now disable downloading, printing, and copying for any file stored in Google Drive
    http://venturebeat.com/2015/07/14/you-can-now-disable-downloading-printing-and-copying-for-any-file-stored-in-google-drive/

    Google today rolled out a small but significant addition to Google Drive. In short, you now have more control over the content you distribute via the service: You can now disable downloading, printing, and copying for any shared file.

    The new option is available for any file stored in Google Drive, not just documents, spreadsheets, and presentations created with Google Docs. That means if you decide to upload, say, a PDF to Google Drive, you can lock it down before you share it with your friends or colleagues.
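
    The same restriction is exposed to developers through the Drive API. A minimal sketch with the Google API Python client, assuming the viewersCanCopyContent file flag is the switch behind this feature and that OAuth credentials and the file ID (placeholders below) are already in hand:

        from google.oauth2.credentials import Credentials
        from googleapiclient.discovery import build

        # Assumes a stored OAuth token with a Drive scope.
        creds = Credentials.from_authorized_user_file('token.json')
        drive = build('drive', 'v3', credentials=creds)

        # Turn off copy, download and print for viewers and commenters of a shared file.
        drive.files().update(
            fileId='FILE_ID_HERE',                   # placeholder
            body={'viewersCanCopyContent': False}
        ).execute()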

    Reply
  9. Tomi Engdahl says:

    Emil Protalinski / VentureBeat:
    Google Drive’s new plug-in lets Office for Windows users open and edit their Word, Excel, and Powerpoint documents stored in Drive

    Google Drive plugin for Microsoft Office lets you ‘use the apps you’re already comfortable with’
    http://venturebeat.com/2015/07/21/google-drive-plugin-for-microsoft-office-lets-you-use-the-apps-youre-already-comfortable-with/

    Google has launched something quite surprising today: Google Drive for Microsoft Office. That’s right, the company now offers a plugin that lets you edit Word, Excel, and PowerPoint documents stored in Google Drive using Microsoft Office.

    Reply
  10. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    IBM Acquires Database-As-A-Service Startup Compose — IBM today announced that it has acquired Compose, the Y Combinator-backed database-as-a-service startup originally known as MongoHQ. Financial terms of the acquisition were not disclosed.

    IBM Acquires Database-As-A-Service Startup Compose
    http://techcrunch.com/2015/07/23/ibm-acquires-database-as-a-service-startup-compose/#.b5imzi:A8Xo

    While Compose started out as a MongoDB database specialist, the company now offers services around MongoDB, Elasticsearch, RethinkDB, Redis and PostgreSQL. The overall idea behind Compose is to allow mobile and web developers to create their apps without having to worry about their database backends. Compose provisions the databases and then manages them for its customers (and scales them up and down as needed, too). Its users have access to a real-time dashboard to monitor their instances, which can be hosted on AWS, DigitalOcean and Softlayer. Now that Compose is part of IBM, it will likely soon support IBM’s Bluemix platform, too.

    “By joining IBM, we will have an opportunity to accelerate the development of our database platform and offer even more services and support to developer teams,”

    “Compose’s breadth of database offerings will expand IBM’s Bluemix platform for the many app developers seeking production-ready databases built on open source,”

    Reply
  11. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Google’s Nearline Cold Data Storage Service Hits General Availability, Adds On-Demand I/O
    http://techcrunch.com/2015/07/23/googles-nearline-cold-data-storage-service-hits-general-availability-adds-on-demand-io/

    Earlier this year, Google announced its new low-cost Nearline cold storage service and today, the company is taking it out of beta and making it generally available. Unlike some of its competitors like Amazon Glacier, where accessing data in cold storage can take hours, Nearline promises to make data in its archive available within seconds.

    Now that the service is out of beta, Google considers it to be ready for production use and covered by its SLA. Google promises an uptime of 99 percent. That’s less than the 99.95 percent for products like Compute Engine, for example, but that’s part of the cost savings (together with the higher latency) that allow Google to offer Nearline — which still runs on the same infrastructure as all of Google’s other cloud computing services — at less than half the price of its standard cloud storage service.
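
    As a rough illustration (project and bucket names are placeholders), creating a Nearline bucket with the google-cloud-storage Python client is mostly a matter of picking the storage class; objects are then written and read like ordinary Cloud Storage objects:

        from google.cloud import storage

        client = storage.Client(project='my-project')       # placeholder project

        # A Nearline bucket is a normal bucket with the NEARLINE storage class.
        bucket = client.bucket('my-archive-bucket')          # placeholder name
        bucket.storage_class = 'NEARLINE'
        bucket.create(location='EU')

        # Writes and reads use the usual blob API.
        bucket.blob('backups/2015-08.tar.gz').upload_from_filename('2015-08.tar.gz')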

    Reply
  12. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    AWS reports Q2 revenue of $1.8B, up 81% YoY, and $391M profit, up from $77M a year ago

    Amazon’s AWS Unit Reports Q2 Revenue Of $1.8B, $391M Profit
    http://techcrunch.com/2015/07/23/amazons-aws-unit-reports-q2-revenue-of-1-8b-391m-profit/

    Amazon released its latest quarterly earnings report today and for only the second time since it launched its AWS cloud computing service, the company broke out earnings for that business unit. The company reported that AWS now accounts for $1.824 billion of its total revenue. That’s up significantly from the last quarter, when Amazon reported AWS net sales of $1.566 billion and up a whopping 81 percent from the $1.05 billion it reported in the year-ago-quarter.

    AWS profit in the second quarter was $391 million, a significant increase from the $77 million it reported for the year-ago-quarter and even the $265 million in the last quarter.

    Last year, Amazon CEO Jeff Bezos said AWS was on its way to becoming a $5 billion business. With revenue of $3.389 billion in the first six months of 2015, AWS is now on course to becoming more of a $6 billion business — and if sales continue to increase at this pace in the next six months, $7 billion is definitely a possibility.

    Amazon Posts Surprise Profit; Shares Soar
    http://www.bloomberg.com/news/articles/2015-07-23/amazon-sales-top-estimates-on-cloud-computing-customer-growth

    Amazon.com Inc. reported a surprise second-quarter profit on top of sales that beat analysts’ estimates, showing investors — as it has done before — that the Web retailer can make money when it puts the brakes on investments.

    Shares in Amazon jumped as much as 19 percent after it reported that revenue rose 20 percent to $23.2 billion, exceeding analysts’ average projection of $22.4 billion. Net income was $92 million, or 19 cents a share, the company said in a statement Thursday, compared with the prediction for a loss of 14 cents.

    “They are showing investors that if they want to deliver profits, they can,” said Michael Pachter, an analyst at Wedbush Securities Inc., who has the equivalent of a buy rating on the stock. “Amazon is a dominant online retailer, well on its way to becoming one of the world’s largest retailers.”

    Reply
  13. Tomi Engdahl says:

    Amazon Web Services:
    Get free training and learn to run Microsoft Windows in the AWS cloud — The AWS cloud is optimized for running your Windows-based applications and workloads. Visit us to learn more and get started.

    Get Trained on Amazon EC2 for Microsoft Windows Server
    http://aws.amazon.com/windows/windows-ec2-ll-learn-more-namer-lab-registration/?sc_channel=BA&sc_campaign=aws_webinar&sc_publisher=techmeme&sc_medium=sponsoredpost&sc_content=webinar&sc_category=webinar&sc_detail=tm_sponsoredpost_textlink&sc_matchtype=ls&sc_country=US&trkcampaign=global_2015_windows_ec2_ll&trk=BA_tm_textlink

    Reply
  14. Tomi Engdahl says:

    Finding the Perfect Fit
    https://blog.evernote.com/blog/2015/08/11/finding-the-perfect-fit/

    Several months ago, we launched the new and improved Evernote Premium. One of our goals was to eliminate any worries about how much you could put into your account. Our feeling was that the previous limit was too low for active use, so we chose to remove the limits altogether with the introduction of unlimited monthly uploads.

    Unfortunately, “unlimited” is such a powerful term that it ended up being both confusing and problematic. Almost instantly, people began using Evernote in a completely new way: mass file storage and backup.

    When people began using Evernote in this new unintended fashion, we started seeing a degradation in service quality. That’s not good for anyone.

    So, instead of the heady “unlimited”, we’re choosing a limit that’s well beyond the needs of 99.999% of our current user base: 10 GB per month. Over time, we expect that file sizes will continue to increase and, as they do, we’ll keep improving the service and increasing limits so that Premium users can continue to use Evernote without worry.

    Reply
  15. Tomi Engdahl says:

    Google Cloud Platform Blog:
    Announcing General Availability of Google Cloud Dataflow and Cloud Pub/Sub — By the time you are done reading this blog post, Google Cloud Platform customers will have processed hundreds of millions of messages and analyzed thousands of terabytes of data utilizing Cloud Dataflow, Cloud Pub/Sub, and BigQuery.

    Announcing General Availability of Google Cloud Dataflow and Cloud Pub/Sub
    http://googlecloudplatform.blogspot.fi/2015/08/Announcing-General-Availability-of-Google-Cloud-Dataflow-and-Cloud-Pub-Sub.html

    By the time you are done reading this blog post, Google Cloud Platform customers will have processed hundreds of millions of messages and analyzed thousands of terabytes of data utilizing Cloud Dataflow, Cloud Pub/Sub, and BigQuery. These fully-managed services remove the operational burden found in traditional data processing systems. They enable you to build applications on a platform that can scale with the growth of your business and drive down data processing latency, all while processing your data efficiently and reliably.

    Every day, customers use Google Cloud Platform to execute business-critical big data processing workloads, including: financial fraud detection, genomics analysis, inventory management, click-stream analysis, A/B user interaction testing and cloud-scale ETL.

    Today we are removing our “beta” label and making Cloud Dataflow generally available. Cloud Dataflow is specifically designed to remove the complexity of developing separate systems for batch and streaming data sources by providing a unified programming model. Based on more than a decade of Google innovation, including MapReduce, FlumeJava, and Millwheel, Cloud Dataflow is built to free you from the operational overhead related to large scale cluster management and optimization.

    A decade of internal innovation also stands behind today’s general availability of Google Cloud Pub/Sub. Delivering over a trillion messages for our alpha and beta customers has helped tune our performance, refine our v1 API, and ensure a stable foundation for Cloud Dataflow’s streaming ingestion, Cloud Logging’s streaming export, Gmail’s Push API, and Cloud Platform customers streaming their own production workloads — at rates up to 1 million message operations per second.

    Such diverse scenarios demonstrate how Cloud Pub/Sub is designed to deliver real-time and reliable messaging — in one global, managed service that helps you create simpler, more robust, and more flexible applications.

    Cloud Pub/Sub can help integrate applications and services reliably, as well as analyze big data streams in real-time. Traditional approaches require separate queueing, notification, and logging systems, each with their own APIs and tradeoffs between durability, availability, and scalability.
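
    As a small, hedged example of the basic publish path with the Python client library (the project and topic names are placeholders and the topic is assumed to exist already):

        from google.cloud import pubsub_v1

        publisher = pubsub_v1.PublisherClient()
        topic_path = publisher.topic_path('my-project', 'events')   # placeholder project/topic

        # publish() returns a future that resolves to the server-assigned
        # message ID once delivery to the topic is confirmed.
        future = publisher.publish(topic_path, b'order created', origin='web-frontend')
        print(future.result())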

    Reply
  16. Tomi Engdahl says:

    Google’s Euro-cloud in lengthy disk degradation drama
    Compute Engine wasn’t at its best overnight and its disks are waking grumpy
    http://www.theregister.co.uk/2015/08/14/googles_eurocloud_in_extended_disk_degradation_drama/

    Users of Google Compute Engine’s Persistent Disks in the europe-west1-b region have endured an anxious few hours, as the service has experienced a lengthy brown-out.

    Persistent Disks are data stores that exist independently of virtual machines and retain data whatever the state of the VM. One can store data in a Persistent Disk and connect that Disk to a virtual machine of one’s choice. Google offers the disks in spinning rust and flashy variations.

    Google first reported the issue at 11:14 on Thursday, US Pacific time, which translates to 18:15 GMT.

    Reply
  17. Tomi Engdahl says:

    Netflix Is Ready to Pull Plug on Its Final Data Center
    http://www.wsj.com/articles/netflix-is-ready-to-pull-plug-on-its-final-data-center-1439604288

    It would be one of the first big companies to run all of its information technology remotely, in what’s known as the public cloud

    Reply
  18. Tomi Engdahl says:

    Why SharePoint is the last great on-premises application
    http://www.cio.com/article/2970173/collaboration-software/why-sharepoint-is-the-last-great-on-premises-application.html

    While it seems like almost every piece of IT is moving to cloud these days, there are still plenty of reasons to keep SharePoint in your server room – where it belongs.

    At the Worldwide Partner Conference (WPC) last month in Orlando, we heard many of the same grumblings we’ve been hearing about Microsoft for years now: They don’t care about on-premises servers. They’re leaving IT administrators in the dust and hanging them out to dry while forcing Azure and Office 365 content on everyone. They’re ignoring the small and medium business.

    It’s hard to ignore this trend. It’s also true that the cost-to-benefit ratio continues to decrease to the point where common sense favors moving many workloads up to the cloud where you can transform capex and personnel expense to opex that scales up and down very easily.

    But SharePoint Server is such a sticky product with tentacles everywhere in the enterprise that it may well be the last great on-premises application. Let’s explore why.

    The cloud simply means someone else’s computer

    One clear reason is that SharePoint, for so many organizations, hosts a large treasure trove of content, from innocuous memos and agendas for weekly staff meetings to confidential merger and acquisitions documents. In most organizations, human resources uses SharePoint to store employee compensation analysis data and spreadsheets; executives collaborate within their senior leadership teams and any high-level contacts outside the organization on deals that are proprietary and must be secured at all times; and product planning and management group store product plans, progress reports and even backups of source code all within SharePoint sites and document libraries.

    No matter how secure Microsoft or any other cloud provider claims it can make its hosted instances of SharePoint, there will always be that nagging feeling in the back of a paranoid administrator’s head: Our data now lives somewhere that is outside of my direct control. It’s an unavoidable truth, and from a security point of view, the cloud is just a fancy term for someone else’s computer.

    Not even Microsoft claims that every piece of data in every client tenant within SharePoint Online is encrypted. Custom Office 365 offerings with dedicated instances for your company can be made to be encrypted, and governmental cloud offerings are encrypted by default, but a standard E3 or E4 plan may or may not be encrypted. Microsoft says it is working on secure defaults, but obviously this is a big task to deploy over the millions of servers they run.

    Nothing is going to stop the FBI, the Department of Justice, the National Security Agency or any other governmental agency in any jurisdiction from applying for and obtaining a subpoena to just grab the physical host that stores your data and walk it right out of Microsoft’s data center into impound and seizure. Who knows when you would get it back? Microsoft famously does not offer regular backup service of SharePoint, relying instead on mirror images and duplicate copies for fault tolerance

    Worse, you might not even know that the government is watching or taking your data from SharePoint Online.

    It’s tough for many – perhaps even most – Fortune 500 companies to really get their heads around this idea. And while Microsoft touts the idea of a hybrid deployment, it’s difficult and not inexpensive and (at least until SharePoint 2016 is released) a bit kludgy as well.

    It’s (sort of) an application development platform

    Some companies have taken advantage of SharePoint’s application programming interfaces, containers, workflow and other technologies to build in-house applications on top of the document and content management features. Making those systems work on top of Office 365 and SharePoint Online can be a very difficult beast to tame.

    It’s a choice with less obvious benefits – there is lower-hanging fruit

    Email is still the slam dunk of cloud applications. Your organization derives no competitive advantage, no killer differentiation in the marketplace from running a business email server like Microsoft Exchange. It is simply a cost center

    Secure email solutions exist now that encrypt transmissions and message stores both at rest and in transit, so security in the email space is much more mature than, say, hosted SharePoint. No wonder Exchange Online is taking off.

    SharePoint is not as clear a case here. While you might choose to put your extranet on SharePoint Online or host a file synchronization solution in the cloud

    Reply
  19. Tomi Engdahl says:

    Google loses data as lightning strikes
    http://www.bbc.com/news/technology-33989384

    Google says data has been wiped from discs at one of its data centres in Belgium – after it was struck by lightning four times.

    Some people have permanently lost access to their files as a result.

    A number of disks damaged following the lightning strikes did, however, later become accessible.

    Generally, data centres require more lightning protection than most other buildings.

    While four successive strikes might sound highly unlikely, lightning does not need to repeatedly strike a building in exactly the same spot to cause additional damage.

    Justin Gale, project manager for the lightning protection service Orion, said lightning could strike power or telecommunications cables connected to a building at a distance and still cause disruptions.

    “The cabling alone can be struck anything up to a kilometre away, bring [the shock] back to the data centre and fuse everything that’s in it,” he said.

    In an online statement, Google said that data on just 0.000001% of disk space was permanently affected.

    “Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain,” it said.

    The company added it would continue to upgrade hardware and improve its response procedures to make future losses less likely.

    Google Compute Engine Incident #15056
    https://status.cloud.google.com/incident/compute/15056#5719570367119360

    Google Compute Engine Persistent Disk issue in europe-west1-b

    From Thursday 13 August 2015 to Monday 17 August 2015, errors occurred on a small proportion of Google Compute Engine persistent disks in the europe-west1-b zone. The affected disks sporadically returned I/O errors to their attached GCE instances, and also typically returned errors for management operations such as snapshot creation. In a very small fraction of cases (less than 0.000001% of PD space in europe-west1-b), there was permanent data loss.

    ROOT CAUSE:

    At 09:19 PDT on Thursday 13 August 2015, four successive lightning strikes on the local utilities grid that powers our European datacenter caused a brief loss of power to storage systems which host disk capacity for GCE instances in the europe-west1-b zone. Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain. In almost all cases the data was successfully committed to stable storage, although manual intervention was required in order to restore the systems to their normal serving state. However, in a very few cases, recent writes were unrecoverable, leading to permanent data loss on the Persistent Disk.

    This outage is wholly Google’s responsibility.

    Reply
  20. Tomi Engdahl says:

    Are you using the cloud as your time capsule?
    http://www.edn.com/electronics-blogs/power-points/4440207/Are-you-using-the-cloud-as-your-time-capsule–?_mc=NL_EDN_EDT_EDN_today_20150825&cid=NL_EDN_EDT_EDN_today_20150825&elq=ad8531cbe6f444cf8cf1a5098f9843ee&elqCampaignId=24508&elqaid=27702&elqat=1&elqTrackId=20a6250d8f4843b7a527faff710e5e3a

    The electronics industry is not immune to marketing hype or optimism, of course. Right now, our three hot buttons are “IoT,” the “cloud,” and “big data.” When you are not sure what to say, just work one or more of these three phrases into your pitch or response and you should be all set, at least for a while.

    While I understand the potential market and even end-application benefits of IoT – although not to the “it will be bigger than everything and solve every problem known” level of hype that IoT-related opportunities are made out to be – I am much more ambivalent about the cloud and, to a lesser degree, big data. I am not really sure why an application which is touted as cloud-based (such as CAD, CAE, CAM FAE, or Spice design/modeling tools) is inherently superior to one which is not, especially if the non-cloud application supports connectivity and file sharing.

    It might seem that storing your precious family photos, videos, and other data in the cloud would eliminate or at least minimize these potential problems, but then I thought about it for a while. If you use a cloud-based storage service, there are many things that can happen:

    The cloud service can go bankrupt and the contents can disappear; you may be notified about this, or perhaps not, or the notification is sent to a defunct email address that no one knows to check
    Others in your family may lose track of which cloud service you are using
    In years to come, someone will forget/not know about paying the service-storage fee
    The sign-on ID and password may be “lost” at your end (even if you have them written down, will people know where to find where you have written it?); some cloud services make it very hard to re-gain access to an account if you don’t have the log-in information (if it’s part of an inherited estate, a court directive may be needed)
    The stored formats may no longer be readable. (Who can say that PDFs, JPEG, Word, and other formats will be decodable in decades to come?)

    I know that many of these concerns are not new or unique to the cloud, of course. There are many credible reports of important corporate and scientific data from pre-cloud era which are now lost or unreadable.

    Reply
  21. Tomi Engdahl says:

    Are you using the cloud as your time capsule?
    http://www.edn.com/electronics-blogs/power-points/4440207/Are-you-using-the-cloud-as-your-time-capsule–?_mc=NL_EDN_EDT_EDN_today_20150825&cid=NL_EDN_EDT_EDN_today_20150825&elq=ad8531cbe6f444cf8cf1a5098f9843ee&elqCampaignId=24508&elqaid=27702&elqat=1&elqTrackId=20a6250d8f4843b7a527faff710e5e3a

    The film industry, which places great dollar value on their archives, understands the problem and risk. Although filming and editing have largely gone digital, and even theater projection is going that way at a rapid rate (80% are now digital), they still store archival copies of movies on high-quality analog film, in climate-controlled vaults.

    What’s your take on the cloud as the answer to viable, retrievable long-term storage?

    Comment:
    Recommend that whatever you put in the cloud consider the data as temporal and you have no problem if it is gone – ie. vapourware. I have a concern about the risk of putting personal data in the cloud given the number of security issues that occur with various IT systems.

    Reply
  22. Tomi Engdahl says:

    Channel surfers and the irresistible rise of Content Delivery Networks
    When load balancing just won’t cut it
    http://www.theregister.co.uk/2015/08/28/content_delivery_networks_why_load_balancing/

    Clearly, investors see plenty of demand for CDNs and for new and independent providers. But why?

    When you’re delivering content online, speed is king. You can be offering the best website or service in the world, but in our always-on, instant-access consumer culture waiting is beyond unacceptable.

    If your content isn’t instant, in that five/ten/fifteen seconds (or worse) the chances are a good portion of your audience has gotten sick of waiting and gone elsewhere.

    Your content may be good, but be honest with yourself: is it so good it will defy even the shortest of attention spans? Didn’t think so. Then you may well benefit from a CDN.

    A CDN (sometimes called a Content Distribution Network) is a specialist infrastructure, network or system for the high performance delivery of information and services.

    How you achieve it may vary (and we’ll get onto that later). It might comprise a super-low latency routing platform or exceedingly intelligent load-balancing, backing onto a vast and distributed – or at least flexible and scalable – server infrastructure, and will probably offer some form of DDoS mitigation due to its capacity and scale, but the net result is fundamentally the same: you want to get whatever you’ve got, to whoever needs to receive it, as quickly as possible.

    Most of us are using CDNs every day, and probably don’t even realise it.

    If you’ve downloaded a piece of software from Adobe or Apple, then you’ve been using the Akamai Content Delivery Network. AMD use Akamai for delivering driver updates, Rackspace Cloud Files runs over Akamai for its Dropbox-type services, and Trend Micro even operates its House Call on-demand remote virus scanning from it.

    Then there’s the video and audio content that have used Akamai over the years: BBC iPlayer, China Central Television, ESPN, MIT Open Courseware, NASA, NBC Sports and even the White House use the Akamai Content Delivery Network – yes, even Barack Obama uses a CDN – to deliver Presidential webcasts, while the 2009 Presidential Inauguration was delivered by Limelight Networks.

    Limelight Networks has delivered some similarly ostentatious events for MSNBC, including the 2008 and 2012 Summer Olympic Games, provided the backbone delivery platform for other major sporting events (including the Six Nations rugby and the Wimbledon tennis championships), and content for Facebook and Netflix.

    CDNs aren’t just big boys toys for high bandwidth systems, though; there are a number of free options out there to speed up static web delivery, and WordPress even offers their own in-house CDN for the fast delivery of images and videos on WordPress blogs.

    Thinking back then, you may have already used a CDN today, and will probably be using one later. You might even be using one right now. But how do they work?

    At its most simplistic, basic level, a CDN is concerned with delivering some form of content to you as quickly as possible.

    Reply
  23. Tomi Engdahl says:

    CIO enjoys dual role of cloud strategist and company pitchman
    http://www.cio.com/article/2980183/cio-role/cio-enjoys-dual-role-of-cloud-strategist-and-company-pitchman.html

    VMware’s Bask Iyer spends his days refining the company’s private cloud system, which he uses to sell the software maker’s technology to fellow CIOs.

    Reply
  24. Tomi Engdahl says:

    Artificial intelligence, filesystems, containers … Amazon showers cloud gold on devs
    Google, who?
    http://www.theregister.co.uk/2015/04/09/amazon_sfo_summit_2015/

    Reply
  25. Tomi Engdahl says:

    Is ‘MetaPod’: a) a Pokemon; b) servers running OpenStack?
    Cisco replacing ‘respiratory disease’ with ‘a useless person’
    http://www.theregister.co.uk/2015/09/07/borg_lops_toolong_product_name_to_metapod/

    We only break out LogoWatch for special occasions, and this surely counts: Cisco has decided that its OpenStack Private Cloud is a tad dull, uses too many words and, worst of all, gave rise to jokes.

    As of now, The Borg wants users to kindly remember that the correct name for Cisco OpenStack Private Cloud is “Cisco MetaPod”.

    Ciscan Niki Acosta (an OpenStack evangelist) got the joyful job of blogging about the name change here.

    The product name apparently bumped against The Borg’s branding guidelines

    “Metacloud OpenStack was a recognised distribution of OpenStack, but when we were acquired, we were able to transfer the distribution rights to Cisco.”

    Alas, the resulting Cisco OpenStack Private Cloud was too long, people abbreviated it to COPC, and that “sounds like a respiratory disease”.

    Cisco OpenStack® Private Cloud is now Cisco Metapod™
    http://blogs.cisco.com/cloud/cisco-openstack-private-cloud-is-now-cisco-metapod

    Reply
  26. Tomi Engdahl says:

    Microsoft in SaaS-y cloud data security slurp
    Cloud DLP and audit concern Adallom lets Redmond hug it into new phase of existence
    http://www.theregister.co.uk/2015/09/09/microsoft_in_saasy_cloud_security_slurp/

    Microsoft has acquired cloud security outfit Adallom.

    Adallom was founded in 2012 and follows the “R&D in Israel, sales in Silicon Valley” template for a range of data security products for clouds. The company’s wares bring data loss prevention and reporting to cloud storage services, offering users the chance to see just who’s accessed what data and to set policies to control that sort of thing.

    Microsoft says it acquired the company because it “expands on Microsoft’s existing identity assets, and delivers a cloud access security broker, to give customers visibility and control over application access as well as their critical company data stored across cloud services.”

    Reply
  27. Tomi Engdahl says:

    Stuck in Amazon’s web tentacles yet? You will be soon
    Market dominance means AWS is hard to avoid, and even harder to quit
    http://www.theregister.co.uk/2015/09/10/amazon_myth_of_cloud_portability_aws/

    Cloud portability desperately wants to be a thing, but there’s a far greater force pushing against it. It’s called Amazon Web Services (AWS), and chances are you’re already stuck.

    Oh, sure, you can cling to containers as a way out, as Bloomberg’s Olga Kharif recently wrote.

    But containers won’t help you. Not when you’ve given yourself so willingly to AWS platform-as-a-service (PaaS) and other offerings, as detailed by Mark Campbell on the Unitrends blog.

    By introducing a torrential downpour of services, AWS is effectively ratcheting up the switching costs associated with a jump to rival clouds like Microsoft Azure. It’s clever, and it’s working.

    We tend to think of Amazon Web Services as an Infrastructure-as-a-Service (IaaS) vendor, and it is. AWS dominates the industry with EC2 (compute), S3 (storage), and more cloud infrastructure services.

    In part, this is because Amazon continuously drops the cost of already market-leading offerings

    But, really, AWS’ dominance was never about mere infrastructure. Or price.

    If we look at the dizzying array of services AWS has introduced just in 2015, it’s clear that we’re not just talking about IaaS, and that AWS seems to be able to invest heavily in innovation even as prices drop.

    Cloud luminary Randy Bias nails this point: “The real economies of scale that are relevant here are the tremendous investments in R&D that have led to technological innovations that directly impact the cost structures of Amazon Web Services.”

    In other words, AWS uses its dominance not necessarily to drive fatter profit margins for itself by buying cheaper hardware, etc, but rather to innovate new PaaS and other products that further cement its hegemony.

    Did you catch that? AWS services reinforce each other. Vendors can talk about moving apps from cloud to cloud by packaging them in containers, but that (IaaS) cloud portability founders on the rocks of (PaaS) lock-in. It’s only going to get better (or worse). But AWS isn’t content with IaaS. Or PaaS. The company started with IaaS, and complements it with PaaS, but the company has been expanding beyond these relatively basic services for some time.

    As Amazon CTO Werner Vogels insists, AWS is in: “The business of pain management for enterprises,” and there’s no shortage of pain to manage:

    This is a business that will be as big as our retail businesses if not bigger … It took us six years, or until 2012, to get to one trillion objects stored.

    Then it took us one more year to get to two trillion. So that’s an indication of the speed of growth. To my eyes, that it only took a year to get to two trillion, it looks like the onset of a hockey stick.

    That “pain management” continues to show up as cloud services we’d normally associate with IaaS or PaaS, but it has also branched out into things like Device Farm (a way to simulate a horde of disparate devices for testing mobile applications), WorkDocs (document collaboration), WorkMail (an email and calendaring solution meant to challenge Microsoft Exchange) and a host of others (analytics, data warehousing, machine learning, push messaging, and more).

    Reply
  28. Tomi Engdahl says:

    Google Partners With CloudFlare, Fastly, Level 3 And Highwinds To Help Developers Push Google Cloud Content To Users Faster
    http://techcrunch.com/2015/09/09/google-partners-with-cloudflare-fastly-level-3-and-highwinds-to-help-developers-push-google-cloud-content-to-users-faster/

    Google shut down its free PageSpeed service last month and with that, it also stopped offering the easy to use content delivery network (CDN) service that was part of that tool. Unlike some of its competitors, Google doesn’t currently offer its own CDN service for developers who want to be able to host their static assets as close to their users as possible. Instead, the company now relies on partners like Fastly to offer CDN services.

    Today, it’s taking these partnerships a step further with the launch of its CDN Interconnect. The company has partnered with CloudFlare, Fastly, Highwinds and Level 3 Communications to make it easier and cheaper for developers who run applications on its cloud service to work with one of these CDNs.

    The interconnect is part of Google’s Cloud Interconnect service, which lets businesses buy network services to connect to Google over enterprise-grade connections or to peer directly with Google at its more than 70 global edge locations.

    Developers who use a CDN Interconnect partner to serve their content — and that’s mostly static assets like photos, music and video — are now eligible to pay a reduced rate for egress traffic to these CDN locations.

    Google says the idea here is to “encourage the best practice of regularly distributing content originating from Cloud Platform out to the edge close to your end-users. Google provides a private, high-performance link between Cloud Platform and the CDN providers we work with, allowing your content to travel a low-latency, reliable route from our data centers out to your users.”

    Reply
  29. Tomi Engdahl says:

    HP overtakes Cisco in cloud infrastructure revenues
    Duo soak up quarter of the market
    http://www.theregister.co.uk/2015/09/10/hp_overtakes_cisco_in_cloud_infrastructure_revenues/

    HP sells more cloud infrastructure equipment than anyone else, including Cisco, which was shunted into second place for the first time in Q2, 2015.

    But we learn from Synergy Research’s summary, released September 9, 2015, that servers, OS, storage and networking collectively account for 89 per cent of cloud infrastructure revenues. The rest of the money goes on cloud security, cloud management and virtualisation revenues.

    Other major players cited by Synergy are Microsoft, Dell, IBM, EMC, VMware, Lenovo and Oracle.

    As we all know, Cisco dominates networking equipment sales and is doing well in servers, while HP dominates cloud servers, a bigger market, and is ‘a main challenger’ in storage. Microsoft ranks highly by dint of its server OS and virtualization applications, and Dell and IBM score well across a “range of cloud technology markets”.

    Reply
  30. Tomi Engdahl says:

    Ron Miller / TechCrunch:
    Salesforce Announces New Internet of Things Cloud, As Dreamforce Opens — When you think of Salesforce.com, you probably don’t think about the burgeoning Internet of Things, but Salesforce wants to help customers make sense of all of the data coming from the growing number of connected devices …

    Salesforce Announces New Internet of Things Cloud, As Dreamforce Opens
    http://techcrunch.com/2015/09/15/salesforce-announces-new-internet-of-things-cloud-as-dreamforce-opens/

    When you think of Salesforce.com, you probably don’t think about the burgeoning Internet of Things, but Salesforce wants to help customers make sense of all of the data coming from the growing number of connected devices — often referred to as the Internet of Things.

    The company is announcing its brand new Salesforce Internet of Things Cloud at Dreamforce, its huge customer conference, which opens today in San Francisco. The new cloud is built on the all-new Thunder platform.

    Regardless, it’s placing a big bet on the Internet of Things. It sees a connection between the customer and all of this data being generated by devices and various other sources and the company wants to help customers begin to capture and make sense of this growing amount of information.

    “We are watching the increasing volume of data coming off of connected devices, and we are thinking about how we can help customers deal with those massive amounts of data,” Dylan Steele, senior director of product marketing for the App Cloud at Salesforce told TechCrunch.

    But it’s not just device data, they want to capture with this new tool. It’s data coming from apps, social streams, web data, weather data — in short, anything that can help companies build a more complete picture of their customers.

    The IoT Cloud also promises to ingest more elaborate data from the Industrial Internet of Things sending information from factories, warehouses, wind turbines, jet engines and similarly complex systems that have been equipped with sensors.

    While this might seem the realm of others like GE Predix or perhaps Cloudera, Hortonworks or other software designed specifically to process big data, Salesforce believes it has a role here, particularly because the data is not locked into Salesforce. It can be exported and used in another tool, Steele explained — although how easy that will be remains to be seen.

    Reply
  31. Tomi Engdahl says:

    Tencent targets Alibaba, seeds data cloud with £1 BEELLLION
    Chinese tech giants strap on the boxing gloves
    http://www.theregister.co.uk/2015/09/16/tencent_ploughs_billions_into_the_cloud/

    Chinese megacorp Tencent is to pump $1.57bn (£1bn) into cloud computing tech over the next five years, in a bid to catch up with rival Alibaba.

    The money will go into infrastructure, operations, hiring talent and marketing its services, Dowson Tong, senior executive veep of Tencent, said at an industry forum on Tuesday, CNBC reported.

    Tencent is most well known for its WeChat messaging and payments app, which the company says has 500 million active users.

    But the company has plans to open data centres in China and North America over the next few years.

    The announcement comes hot on the heels of Alibaba announcing it will pump $1bn (£637m) into its cloudy arm Aliyun.

    Aliyun has data centres in Beijing, Hangzhou, Qingdao, Hong Kong, Shenzhen and Silicon Valley, with a data centre in Dubai currently under construction.

    Reply
  32. Tomi Engdahl says:

    AWS cuts cloud storage price to UNDER a cent per gig a month
    Glacier now charging just $0.007 for frozen storage
    http://www.theregister.co.uk/2015/09/17/aw_cuts_cloud_storage_price_to_under_a_cent_per_gig_a_month/

    Amazon Web Services (AWS) has lowered prices again, this time dropping the fee for its archival Glacier storage below a cent per gigabyte per month to $0.007 per gig per month.

    The price cut is only applicable in some of AWS’ regions, for now, but at that price, and with Glacier’s deliberately slow restore times, who cares about a little latency if those regions aren’t the closest to you?

    The cloudy concern has also introduced a new “S3 Standard – Infrequent Access (Standard – IA)” product that offers a 99 per cent availability service level. That’s in contrast to S3 Standard storage’s SLA, which sees Amazon hand out service credits if availability drops below 99.9 per cent. AWS isn’t mentioning long retrieval times, so this looks like just-about-realtime retrieval rather than Glacier’s hurry-up-and-wait retrieval plan.
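
    To put those availability numbers in perspective, here is a quick sketch of how much monthly downtime each SLA level tolerates. This is simple arithmetic on a 30-day month, nothing provider-specific:

    # Allowed downtime per 30-day month at a given availability level.
    HOURS_PER_MONTH = 30 * 24

    def allowed_downtime_hours(availability):
        return HOURS_PER_MONTH * (1 - availability)

    print(f"99%   -> {allowed_downtime_hours(0.99):.1f} hours/month")              # ~7.2 hours
    print(f"99.9% -> {allowed_downtime_hours(0.999) * 60:.0f} minutes/month")      # ~43 minutes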

    The new Standard-IA tier starts at $0.0125/gig/month, but you need to use it for at least 30 days. There’s also a $0.01/gig retrieval charge.

    AWS’ lifecycle management tools are hip to the new tier, so you can create an object in S3 Standard, set a policy to shunt it down into the IA tier and then to Glacier once retrieval times aren’t an issue.
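
    As an illustration of how such a lifecycle policy might look in practice, here is a minimal sketch using the boto3 Python SDK. The bucket name, prefix and day counts are hypothetical; the transition ages you would actually pick depend on your own access patterns.

    # Minimal sketch (hypothetical bucket/prefix names): move objects to the
    # Standard-IA tier after 30 days, then to Glacier once fast retrieval
    # no longer matters.
    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-archive-bucket",           # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-down-old-logs",
                    "Filter": {"Prefix": "logs/"},  # hypothetical prefix
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"},
                        {"Days": 90, "StorageClass": "GLACIER"},
                    ],
                }
            ]
        },
    )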

    Reply
  33. Tomi Engdahl says:

    Oracle: Over here, look over here! At the cloud! No, not at our glum licensing numbers
    Hurd n’ Catz talk up fast-growing cloud business as other units see drops
    http://www.theregister.co.uk/2015/09/16/oracle_quarter_cloud_bluster/

    Enterprise IT giant Oracle is once again pointing to a growing cloud business to gloss over lackluster financial numbers in other parts of its business.

    Big Red on Wednesday said that its $8.4bn in first-quarter revenues (ending August 31) were down 2 per cent from the same period in 2014, when it logged $8.5bn. Earnings per share were 53 cents, just above analyst estimates of 52 cents.

    Oracle stock was down 1.54 per cent in after-hours trading.

    In reporting the numbers, Oracle co-CEOs Safra Catz and Mark Hurd played up the continued growth of Oracle’s cloud operations. Cloud SaaS and PaaS revenues were up 34 per cent at $451m on the quarter, while cloud IaaS was up 16 per cent with a $160m take.

    “We feel very good about the progress of our cloud transition and clearly customers are migrating to the cloud,”

    The rising cloud figures, however, were tempered by drops in other areas of Oracle’s business. Software license updates and product support, which accounted for a whopping $4.69bn of Oracle’s $8.4bn total quarterly revenues, saw a 1 per cent drop from the same period a year ago.

    Reply
  34. Tomi Engdahl says:

    The Problem With Putting All the World’s Code in GitHub
    http://www.wired.com/2015/06/problem-putting-worlds-code-github/

    The ancient Library of Alexandria may have been the largest collection of human knowledge in its time, and scholars still mourn its destruction. The risk of so devastating a loss diminished somewhat with the advent of the printing press and further still with the rise of the Internet. Yet centralized repositories of specialized information remain, as does the threat of a catastrophic loss.

    Take GitHub, for example.

    GitHub has in recent years become the world’s biggest collection of open source software. That’s made it an invaluable education and business resource. Beyond providing installers for countless applications, GitHub hosts the source code for millions of projects, meaning anyone can read the code used to create those applications. And because GitHub also archives past versions of source code, it’s possible to follow the development of a particular piece of software and see how it all came together. That’s made it an irreplaceable teaching tool.

    The odds of GitHub meeting a fate similar to that of the Library of Alexandria are slim.

    Reply
  35. Tomi Engdahl says:

    Edward Vielmetti / Vacuum weblog:
    AWS DynamoDB downtime in North Virginia data center affected Docker, Heroku, Netflix, Pocket, Medium, Viber, IMDb, SocialFlow, Buffer, other services — AWS DynamoDB downtime, Sunday am, September 20, 2015 … Amazon Web Services DynamoDB experienced downtime in the N Virginia availability …

    AWS DynamoDB downtime, Sunday am, September 20, 2015
    http://vielmetti.github.io/post/2015/2015-09-20-aws-dynamodb-downtime-sunday-am/

    A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable. Leslie Lamport, 1987

    When core infrastructure goes down, it tends to affect other platforms that depend on that core infrastructure and that hide it from their users. This in turn affects applications built on those platforms.

    Reply
  36. Tomi Engdahl says:

    Skype and Amazon went down – “So running the whole business in the cloud is such a good idea”

    According to Skype, the service “is being a little overwhelmed.” The company says on Twitter that it is aware of the problem and is working to fix it as soon as possible.

    Last night there were still problems with Amazon Web Services and, along with it, a number of other online services such as Netflix. The problems with cloud services have raised questions about their reliability.

    “Yeah, so running the whole business in the cloud really is such a good idea,” one user commented sarcastically on Twitter.

    Source: http://www.tivi.fi/Kaikki_uutiset/skype-ja-amazon-kaatuivat-koko-bisneksen-pyorittaminen-pilvessa-onkin-niiiin-hyva-idea-3483921

    Reply
  37. Tomi Engdahl says:

    AWS outage knocks Amazon, Netflix, Tinder and IMDb in MEGA data collapse
    Cloudopocalypse stalks Sunday sofa surfers
    http://www.theregister.co.uk/2015/09/20/aws_database_outage/

    Amazon’s Web Services (AWS) have suffered a monster outage affecting the company’s cloudy systems, bringing some sites down with it in the process.

    The service disruption hit AWS customers including Netflix, Tinder and IMDb, as well as Amazon’s Instant Video and Books websites.

    The outage may also explain Airbnb’s current service woes. Airbnb is an AWS customer.

    At time of publication, Amazon had coughed to data faults being reported on multiple services at its North Virginia US-EAST-1 site – which is the company’s oldest public-cloud facility.

    Some of the online retail giant’s cloudy services were having a miserable Sunday.

    It said that AWS services including CloudWatch (a monitoring system for the apps that run on the platform), Cognito (a service that saves mobile data) and DynamoDB (the company’s NoSQL database) were all having a bit of a sit down today.

    Amazon said it was recovering from the database blunder, but as part of the fix the company was forced to throttle APIs to recover the service.

    Reply
  38. Tomi Engdahl says:

    Backblaze to sell cloud storage for a quarter the price of Azure, Amazon S3
    Using consumer-grade hard drives in its data center keeps prices low.
    http://arstechnica.com/information-technology/2015/09/backblaze-to-sell-cloud-storage-for-a-quarter-the-price-of-azure-amazon-s3/

    Online backup provider Backblaze is branching out today with a new business: an infrastructure-as-a-service-style cloud storage API that’s going head to head with Amazon’s S3, Microsoft’s Azure, and Google Cloud Storage. But where those services charge 2¢ or more per gigabyte per month, Backblaze is pricing its service at just half a cent per gigabyte per month.

    Backblaze’s business is cheap storage.

    This low-cost storage means that the company can offer its $5/month unlimited size backup plan profitably. Now the company plans to sell that same cheap storage to developers. Its new B2 product is very much in the same vein as Amazon’s S3: cloud storage with an API that can be used to build a range of other applications.

    Amazon S3’s cheapest online storage—reduced redundancy, for customers storing more than 5 petabytes—costs 2.2¢ per gigabyte per month. Backblaze’s B2 storage costs 0.5¢ per gigabyte per month, with the first 10GB free. This is cheaper even than Amazon’s Glacier and Google’s Nearline storage, at 0.7 and 1¢ per gigabyte per month, respectively, neither of which supports immediate access to data. Bandwidth costs are the same; inbound bandwidth is free, outbound is charged at 5¢ per gigabyte.

    There are some limits to the B2 offering; Backblaze doesn’t have the multiple datacenter regions that Amazon and others offer, having only a single datacenter in California.
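
    As a rough back-of-the-envelope comparison of the per-gigabyte rates quoted above (a sketch only; real bills also depend on redundancy class, request charges and egress):

    # Quick cost sketch using the per-GB/month rates quoted in the article.
    # These are late-2015 list prices and ignore request and egress fees.
    RATES_USD_PER_GB_MONTH = {
        "Backblaze B2": 0.005,
        "Amazon Glacier": 0.007,
        "Google Nearline": 0.01,
        "Amazon S3 (reduced redundancy, >5 PB)": 0.022,
    }

    def monthly_cost(gigabytes, rate):
        return gigabytes * rate

    for name, rate in RATES_USD_PER_GB_MONTH.items():
        # Cost of parking 1 TB (1000 GB) for a month at each provider.
        print(f"{name:40s} ${monthly_cost(1000, rate):7.2f} / TB / month")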

    Reply
  39. Tomi Engdahl says:

    Revealed: Why Amazon, Netflix, Tinder, Airbnb and co plunged offline
    And the dodgy database at the heart of the crash is suffering again right now
    http://www.theregister.co.uk/2015/09/23/aws_outage_explained/

    Netflix, Tinder, Airbnb and other big names were crippled or thrown offline for millions of people when Amazon suffered what’s now revealed to be a cascade of cock-ups.

    On Sunday, Amazon Web Services (AWS), which powers a good chunk of the internet, broke down and cut off websites from people eager to stream TV, or hookup with strangers; thousands complained they couldn’t watch Netflix, chat up potential partners, find a place to crash via Airbnb, memorize trivia on IMDb, and so on.

    Today, it’s emerged the mega-outage was caused by vital systems in one part of AWS taking too long to send information to another part that was needed by customers.

    In technical terms, the internal metadata servers in AWS’s DynamoDB database service were not answering queries from the storage systems within a particular time limit.

    DynamoDB tables can be split into partitions scattered over many servers.

    At about 0220 PT on Sunday, the metadata service was taking too long sending back answers to the storage servers.

    At that moment on Sunday, the levee broke: too many taxing requests hit the metadata servers simultaneously, causing them to slow down and not respond to the storage systems in time. This forced the storage systems to stop handling requests for data from customers, and instead retry their membership queries to the metadata service – putting further strain on the cloud.

    It got so bad AWS engineers were unable to send administrative commands to the metadata systems.

    Other services were hit by the outage: EC2 Auto Scaling, the Simple Queue Service, CloudWatch, and the AWS Console all suffered problems.
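
    The cascade described above is a classic retry storm: clients that time out and immediately retry only add load to the struggling service. A common client-side mitigation is capped exponential backoff with jitter; the sketch below shows the generic pattern, not Amazon’s internal fix, and the function name is hypothetical.

    # Generic capped exponential backoff with jitter for a flaky dependency.
    # Hypothetical example, not AWS's internal remediation.
    import random
    import time

    def call_with_backoff(operation, max_attempts=5, base_delay=0.1, max_delay=5.0):
        """Retry `operation` with exponential backoff plus full jitter."""
        for attempt in range(max_attempts):
            try:
                return operation()
            except Exception:
                if attempt == max_attempts - 1:
                    raise
                # Sleep a random amount between 0 and the capped exponential delay,
                # so thousands of clients do not retry in lockstep.
                delay = min(max_delay, base_delay * (2 ** attempt))
                time.sleep(random.uniform(0, delay))

    # Usage (hypothetical): wrap a DynamoDB query made with boto3.
    # result = call_with_backoff(lambda: table.get_item(Key={"id": "42"}))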

    Reply
  40. Tomi Engdahl says:

    Google Launches Cloud Dataproc, a Managed Spark and Hadoop Big Data Service
    http://tech.slashdot.org/story/15/09/23/2247259/google-launches-cloud-dataproc-a-managed-spark-and-hadoop-big-data-service

    Google has a new cloud service for running Hadoop and Spark called Cloud Dataproc, which is being launched in beta today. The platform supports real-time streaming, batch processing, querying, and machine learning.

    Google Launches Cloud Dataproc, A Managed Spark And Hadoop Big Data Service
    http://techcrunch.com/2015/09/23/google-launches-cloud-dataproc-a-managed-spark-and-hadoop-big-data-service/

    Google is adding another product in its range of big data services on the Google Cloud Platform today. The new Google Cloud Dataproc service, which is now in beta, sits between managing the Spark data processing engine or Hadoop framework directly on virtual machines and a fully managed service like Cloud Dataflow, which lets you orchestrate your data pipelines on Google’s platform.

    Greg DeMichillie, director of product management for Google Cloud Platform, told me Dataproc users will be able to spin up a Hadoop cluster in under 90 seconds — significantly faster than other services — and Google will only charge 1 cent per virtual CPU/hour in the cluster. That’s on top of the usual cost of running virtual machines and data storage, but as DeMichillie noted, you can add Google’s cheaper preemptible instances to your cluster to save a bit on compute costs. Billing is per-minute, with a 10-minute minimum.

    Because the service uses the standard Spark and Hadoop distributions (with a few tweaks), it’s compatible with virtually all existing Hadoop-based products, and users should be able to easily port their existing workloads over to Google’s new service.

    In his view, Dataproc users won’t have to make any real tradeoffs when compared to setting up their own infrastructure.

    Dataproc is also integrated with the rest of Google’s cloud services, including BigQuery, Cloud Storage, Cloud Bigtable, Cloud Logging and Cloud Monitoring.
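
    A quick sketch of what the pricing above means in practice. Only the 1 cent per vCPU-hour rate, per-minute billing and 10-minute minimum come from the article; the cluster size and runtime are made up, and the underlying VM and storage costs are excluded.

    # Dataproc surcharge sketch: $0.01 per vCPU per hour, billed per minute
    # with a 10-minute minimum. Cluster shape and runtime are hypothetical;
    # underlying Compute Engine VM and storage costs are not included.
    DATAPROC_RATE_PER_VCPU_HOUR = 0.01

    def dataproc_surcharge(vcpus, runtime_minutes):
        billed_minutes = max(runtime_minutes, 10)   # 10-minute minimum
        return vcpus * DATAPROC_RATE_PER_VCPU_HOUR * billed_minutes / 60.0

    # Example: a 5-node cluster of 4-vCPU machines (20 vCPUs) running 45 minutes.
    print(f"${dataproc_surcharge(20, 45):.2f}")     # -> $0.15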

    Reply
  41. Tomi Engdahl says:

    Microsoft customers on the great (hybrid) cloud migration
    http://www.theregister.co.uk/2015/07/14/microsoft_customers_on_the_great_hybrid_cloud_migration/

    Microsoft’s enterprise customers are adopting its cloud software in droves, a survey of delegates at the vendor’s Ignite 2015 conference in Chicago reveals.

    Nine in ten of the 267 delegates polled said their organisations are utilising cloud-based technologies and 84 per cent are currently using Microsoft Cloud.

    Office 365 leads the pack with 58 per cent take-up among participants, followed by a closely bunched group of server tools: Hyper-V (35 per cent), System Center (34 per cent) and Azure Public Cloud (32 per cent). Adoption of Windows Azure Pack for Windows Server stood at 14 per cent.

    So this is a snapshot of Microsoft shops – organisations who are invested enough in the company to send staff to a Microsoft-centric enterprise IT conference.

    Just over half the respondents (53 per cent) are pursuing hybrid cloud as their preferred strategy, followed by on-premise private cloud (23 per cent), third party private cloud (18 per cent) and all public cloud (6 per cent). Thirty seven per cent of delegates had a quarter or more of their IT services in the cloud – which sounds high to us.

    Reply
  42. Tomi Engdahl says:

    Mary Jo Foley / ZDNet:
    Microsoft details Azure Data Lake plans, says public preview is coming later this year

    Microsoft fleshes out its Azure Data Lake plans; readies public preview
    http://www.zdnet.com/article/microsft-fleshes-out-its-azure-data-lake-plans-readies-public-preview/

    Microsoft plans to make its Azure Data Lake store, analytics service and new query language — all derivatives of its internal ‘Cosmos’ technologies — available in public preview before year-end.

    Reply
  43. Tomi Engdahl says:

    Put all your eggs in one basket – and by eggs we mean GPU-heavy apps and by basket, Azure
    Microsoft adds N-series VMs to its cloud for Nvidia acceleration
    http://www.theregister.co.uk/2015/09/29/microsoft_azure_n_series_nvidia/

    Microsoft will add a new N-series of virtual machines to its Azure cloud that are boosted by Nvidia’s graphics accelerators.

    It’s going to be announced today that people spinning up Windows or Linux in the N-series VMs will be able to access virtualized GPUs in Azure using Nvidia’s GRID technology.

    These virtual machines will work in two ways: you can either access Nvidia’s Quadro-grade graphics chipsets to accelerate CAD and other desktop design software in the cloud, or run number-crunching applications on Tesla K80 GPU accelerators.

    Desktop software running in Azure can be piped over to your workstation across the internet; you’ll need to sustain a 10Mbps or so connection to Microsoft’s systems to achieve a 1280-by-720 remote desktop at 30 frames per second using Nvidia’s GRID tech, or 30Mbps to get up to 1080p.

    Amazon has touted Nvidia GPU acceleration in its AWS cloud for almost a couple of years now

    Reply
  44. Tomi Engdahl says:

    TechCrunch:
    Box announces Box Platform with new services, a 3D document viewer, a camera-syncing photo app Capture, and more

    Box Wants To Be The Center Of Your Company’s Content Universe
    http://techcrunch.com/2015/09/29/box-wants-to-be-the-center-of-your-companys-content-universe/

    Last week, at Dreamforce, Salesforce.com’s enormous customer conference, Salesforce personnel talked at great length about how everything they did was in the service of the customer. If it wasn’t about the customer — like ERP — they weren’t interested.

    In a similar way, today at BoxWorks at the Moscone Center in San Francisco, the Box team made it clear their mission was all in service of the content.

    To that end, Box made a series of announcements that enhance the Box platform and provide ways for you to share content, connect content to workflows, view different content types, work seamlessly across other cloud services and even operate as content services on the back end without it being apparent that it’s Box.

    One was a 3D viewer where users can share design documents, 3D printer designs or anything that needs to be viewed in 3D.

    In a nod to the healthcare industry, Box introduced a new DICOM viewer. This may not sound terribly sexy, but it’s huge for the healthcare industry because this is the standard way of viewing medical images.

    These viewers are both built in HTML5 and don’t require any additional code or plug-ins, which means they are essentially cloud-native.

    In addition, they introduced Box Capture, a tool built using Box’s own mobile SDK, which connects your camera directly to the Box platform. Again, this may not sound terribly innovative, but consider that you can take a picture and share it directly to the Box platform and all that entails. That means it inherits the rules of the folder where it is stored and gets distributed through any workflows associated with that folder automatically. What’s more, employees can communicate around that content in real time.

    Reply
  45. Tomi Engdahl says:

    Building a hybrid cloud? Then check out these Microsoft webinars
    Plug into these webinars and chant OMS
    http://www.theregister.co.uk/2015/09/30/microsoft_hybrid_cloud_webinars_promo/

    IT is moving at a furious pace. Are you trying to figure out how to manage hybrid clouds, deploy converged infrastructure and run open source workloads in the cloud?

    This Autumn Microsoft is running a series of webinars explaining just how all these building blocks fit together, whether you’re a megacorp, an SME or an up and coming startup.

    The webinar series will cover:

    Hybrid Cloud Management with OMS
    Converged Infrastructure
    What’s Coming in Windows Server 2016

    Reply
  46. Tomi Engdahl says:

    The case against Dropbox looks stronger with each passing day
    Files sinking
    http://www.theverge.com/2015/9/22/9372563/dropbox-really-is-a-feature

    At TechCrunch Disrupt in San Francisco on Monday, Houston appeared on stage to discuss the state of the business. Much has happened since Dropbox became one of the most highly valued companies of its generation. The cost of file storage, which Dropbox makes most of its money by charging for, has plummeted. Tech giants like Apple and Google, which once ignored file storage, built good-enough syncing options and began giving them away for cheap. Mobile operating systems evolved to hide the file system, making Dropbox look less like a command hub for your digital life and more like an arcane plumbing system. And new ways of working centered on messaging rather than file folders, led by fast-growing Slack, have raised new doubts about Dropbox’s importance to the future of work.

    “The most fragile of the private companies”

    It’s a set of circumstances that have led some to speculate that Dropbox will become “the first dead decacorn,” as startup founder Alex Danco wrote last month in a widely read piece. (“Decacorn” is Silicon Valley-speak for a company valued at $10 billion or more, a play on the use of “unicorn” to refer to billion-dollar startups.)

    All of this goes well beyond the usual criticisms lobbed at fast-growing startups: that they can’t keep up their pace of growth indefinitely, say, or that their business model will hit roadblocks as they attempt to begin profiting from their users. Rather, it’s that the fundamental assumptions around Dropbox’s business have shifted, at the same time that the behavior of office workers is changing to make the product less relevant.

    “What we’re really building is the world’s largest platform for collaboration,” he said. “We want to get to a place where it’s like any group of people in the world, any country, any company, using whatever technology they want, can work together without problems.” Criticism of the company, he said, was simply a result of “confusion.”

    And sure, it’s early days, although Dropbox first launched an offering for teams in 2011. But have you ever attempted to collaborate on Dropbox? Its feature set is tiny: You can add a comment to a file, or share it, or move it. You can call up previous versions of a file. And four years after launch, that’s … about it? Compared to a communication-focused service like Slack or video chat, it’s barely a step removed from printing the paper out and asking a colleague to make notes in the margins.

    At this point, Dropbox would point out that 130,000 businesses already pay for the professional version. And with hundreds of millions of people using Dropbox, the company will surely find its way into many more. But if their needs go much beyond basic file swapping, they’re likely to find themselves disappointed. And given the broad consensus that the real money is in businesses like theirs, they’ll likely wonder why Dropbox isn’t moving faster to meet them.

    Dropbox will likely disappoint big businesses

    But after years of investment and exploration, syncing files is still the only thing Dropbox does well. Steve Jobs knew this: he famously told Houston (while trying to acquire it) that his company was “a feature and not a product.” As Dropbox rocketed to 400 million users, Jobs’ viewpoint was easy to dismiss.

    There’s real value — billions in value, even! — in what Dropbox does, and it continues to employ some of the best engineers and product designers in Silicon Valley. But most companies never manage to do more than one thing well, which is why it’s important for them to focus on a problem people will pay lots of money to solve. No one syncs a file better than Dropbox — it was true four years ago, and it’s still true today. But as a “platform for collaboration,” it’s woefully underdeveloped — and well on its way to proving Jobs right.

    Reply
  47. Tomi Engdahl says:

    Microsoft pitches Azure at HPC, visualisation loads
    GPU-assisted clouds due for 2016 launch
    http://www.theregister.co.uk/2015/10/02/microsoft_pitches_azure_at_hpc_visualisation_loads/

    Microsoft is lobbing a bunch of GPUs into its Azure cloud to try and attract HPC-type workloads.

    The GPU-enabled VM option was one of two additions to the Azure lineup at its AzureCon conference – the other is Azure DV2, based on Intel’s Haswell processor.

    The GPU option, Azure N, targets remote visualisation and “compute intensive with Remote Direct Memory Access (RDMA)” workloads.

    As The Register’s HPC sister-site The Platform notes, Nvidia’s Tesla M60 GPUs (with Grid 2.0 virtualised graphics) target the visualisation market, while its K80s are for compute workloads.

    Exactly how this is integrated into Redmond’s existing kit hasn’t yet been detailed, but The Platform’s Timothy Prickett-Morgan takes a shot: “we think Microsoft is adding GPUs to its Xeon-based compute nodes by adding adjacent GPU trays that sit alongside the CPU trays in the Open Cloud Server.”

    Zander said the Azure N virtual machines’ RDMA will reduce latency between nodes.

    Reply
  48. Tomi Engdahl says:

    Box now has 50,000 paying customers
    http://www.cloudpro.co.uk/collaboration/5393/box-now-has-50000-paying-customers

    File-sharing firm targets enterprise and developers with new platforms

    Box now boasts 40 million users and 50,000 paying customers, it revealed yesterday as it unveiled two new platform offerings targeting both developers and business users at its annual BoxWorks conference.

    Those customers include 52 per cent of the Fortune 500 and others among them are the US Department of Justice, Toyota, AstraZeneca, Barneys New York, Legendary Pictures, General Electric and IBM.

    Box Platform

    The file-sharing firm also unveiled its Developer and Enterprise editions of its product, allowing users to build their own applications on the Box Platform to serve their own customers and partners.

    CEO Aaron Levie said: “The last decade of IT has focused on driving innovation and productivity inside organisations, but cloud and mobile are also completely redefining how businesses can deliver new services and experiences to customers and partners.

    “Now is the time for enterprises to build new digital experiences that transform their business, and at Box, we’re building the platform to power this transformation.”

    Reply
  49. Tomi Engdahl says:

    Amazon Web Services to Add Analytics
    Cloud-computing division enters field designed to make better use of collected data
    http://www.wsj.com/article_email/amazon-web-services-to-add-analytics-1443989402-lMyQjAxMTI1NzAyNDgwMjQwWj

    Amazon.com Inc. ’s cloud-computing division, Amazon Web Services, will wade into a hotly contested new territory this week when the company is expected to announce a new service to help businesses analyze their data, according to people familiar with the matter.

    Many companies already store proprietary data on AWS, which counts Netflix Inc., Airbnb Inc., Nike Inc. and Pfizer Inc. among its clients. That puts Amazon in a strong position to offer an add-on service, said Boris Evelson, an analyst at Forrester Research Inc. “This will be the new 800-pound gorilla in the [business intelligence] market,” he said, a market expected to be worth $143 billion in 2016, according to analyst Pringle & Co.

    Code named Space Needle, the new analytics service could help Amazon lock in AWS customers more tightly by housing more of their data on the platform.

    Reply
  50. Tomi Engdahl says:

    Ask Slashdot: Best Country For Secure Online Hosting?
    http://yro.slashdot.org/story/15/10/04/1748210/ask-slashdot-best-country-for-secure-online-hosting

    I’ve recently discovered that my hosting company is sending all login credentials unencrypted, prompting me to change providers. Additionally, I’m finally being forced to put some of my personal media library (songs, photos, etc.) on-line for ready access (though for my personal consumption only) from multiple devices and locations…

    And does anyone have a recommendation on which provider(s) are the best hosts for (legal) on-line storage there?

    Comments:

    There is no safe place to put your data. If someone wants it they’ll get it. If you want to keep something private, encrypt it.

    If you do not trust cloud providers for whatever reason, then DIY. A business-class account with a static IP works best, but it can be done with dyndns, etc. Set up your server, and a VPN to your network. OpenVPN clients are available for just about any device, and then you can access anything you are running inside your LAN: UPnP, SMB shares, whatever. You can pick up a crappy Firebox on eBay and load an alternate firmware on it for cheap (I got one for 5 bucks at a church yard sale). Or you can just port forward and run your VPN software on some boxen inside your router.

    My total cost is about $130 a month to Comcast for a single static IP and business-class 50/10, plus my own time. This setup allows me to run whatever services I deem fit, and typically keeps me clear of ISP DMCA notices. I did get one, but once I pointed out that I repair random PCs that do not belong to me, and many may auto-launch a torrent app, it was quickly dropped.

    Which country has the best on-line personal privacy laws that would made it patently illegal for any actor, state, or otherwise, to access my information?

    NONE. Zip. Zero. Nada.

    If you wish to secure what you host, then use a solution that encrypts it on the client side.

    I believe BitTorrent Sync is an example of that.

    Some hosting and online backup providers also offer solutions where every file is encrypted on the client side, and the hosting provider never gains access to the plaintext files…. this is what you need.
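
    To make the client-side-encryption suggestion concrete, here is a minimal sketch using the Python cryptography package’s Fernet recipe. The file names are hypothetical, and in practice you also need a sensible key-management and backup plan for the key itself.

    # Minimal client-side encryption sketch with Fernet (symmetric, authenticated).
    # Whatever you upload is ciphertext; the hosting provider never sees plaintext.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # store this somewhere safe, NOT with the host
    fernet = Fernet(key)

    with open("photo.jpg", "rb") as f:            # hypothetical local file
        ciphertext = fernet.encrypt(f.read())

    with open("photo.jpg.enc", "wb") as f:        # this is what you upload
        f.write(ciphertext)

    # Later, after downloading the blob back:
    with open("photo.jpg.enc", "rb") as f:
        plaintext = fernet.decrypt(f.read())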

    Reply
