Who's who of the cloud market

Seemingly every tech vendor has a cloud strategy, with new products and services dubbed “cloud” coming out every week. But who are the real market leaders in this business? Research firm Gartner’s answer lies in its Magic Quadrant report for the infrastructure as a service (IaaS) market, presented in the Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article.

It is interesting that big-name companies that have invested heavily in the cloud, including Microsoft, HP, IBM and Google, are missing from the quadrant figure. The reason is that the report only includes providers whose IaaS clouds were in general availability as of June 2012 (Microsoft, HP and Google had clouds in beta at the time).

Gartner reinforces what many in the cloud industry believe: Amazon Web Services (AWS) is the 800-pound gorilla. Gartner also found one big minus: AWS has a “weak, narrowly defined” service-level agreement (SLA), which requires customers to spread workloads across multiple availability zones. AWS was not the only provider criticised for the details of its SLA.

Read the whole Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article to see Gartner’s view on the cloud market today.

1,065 Comments

  1. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Google Launches Cloud Bigtable, A Very Fast NoSQL Database For The Enterprise

    Google Launches Cloud Bigtable, A Highly Scalable And Performant NoSQL Database
    http://techcrunch.com/2015/05/06/google-launches-cloud-bigtable-a-highly-scalable-and-performant-nosql-database/

    With Cloud Bigtable, Google is launching a new NoSQL database offering today that, as the name implies, is powered by the company’s Bigtable data storage system, but with the added twist that it’s compatible with the Apache HBase API — which itself is based on Google’s Bigtable project. Bigtable powers the likes of Gmail, Google Search and Google Analytics, so this is definitely a battle-tested service

    Google promises that Cloud Bigtable will offer single-digit millisecond latency and 2x the performance per dollar when compared to the likes of HBase and Cassandra. Because it supports the HBase API, Cloud Bigtable can be integrated with all the existing applications in the Hadoop ecosystem, but it also supports Google’s Cloud Dataflow.

    Setting up a Cloud Bigtable cluster should only take a few seconds, and the storage automatically scales according to the user’s needs.

    It’s worth noting that this is not Google’s first cloud-based NoSQL database product. With Cloud Datastore, Google already offers a high-availability NoSQL datastore for developers on its App Engine platform. That service, too, is based on Bigtable.
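    Because Cloud Bigtable exposes the HBase API, code written against HBase-style clients maps onto it directly. As a rough, hedged sketch (not from the article), here is what that data model looks like from Python using the happybase library, which implements the HBase client interface; the host, table and column names are placeholders, and talking to Cloud Bigtable itself requires Google’s Bigtable-specific transport rather than a plain Thrift connection.

        import happybase

        # Connect to an HBase-compatible endpoint (placeholder host).
        connection = happybase.Connection('hbase-gateway.example.com')
        table = connection.table('user-events')

        # HBase/Bigtable rows are keyed byte strings holding column-family:qualifier cells.
        table.put(b'user#1234#2015-05-06', {b'cf:event': b'login', b'cf:source': b'web'})

        # Read the row back by key; single-row lookups are the low-latency sweet spot.
        print(table.row(b'user#1234#2015-05-06'))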

  2. Tomi Engdahl says:

    Amazon Isn’t the Only One Killing It With Cloud Computing
    http://www.wired.com/2015/05/amazon-isnt-one-killing-cloud-computing/

    Eleven years later, the Uretskys are running a company, Digital Ocean, that hosts more public websites than almost any other company on earth. Amazon and its $4.6 billion cloud computing service host the most, and according to Netcraft, a British company that closely tracks technologies used on the net, Digital Ocean is number two, hosting over 163,000 sites (compared to more than 300,000 on Amazon).

    The Uretskys started out with an ordinary computers-for-rent company called Server Stack, something not too far removed from Like Whoa. But after Amazon pioneered the idea of cloud computing, letting people instantly access computing power via the net, they followed suit with Digital Ocean, which is now backed by more than $37 million in funding, including an investment from big-name Silicon Valley venture capital firm Andreessen Horowitz. The company’s ascent is yet another sign that cloud computing—though a rather simple concept—is rapidly overhauling the tech world.

    Amazon’s operation is set to pull in more than $6 billion this year, and according to Netcraft’s numbers, Digital Ocean is now growing at a faster clip than its big-name rival

    Digital Ocean isn’t complicated. Basically, it offers access to virtual machines where you can run pretty much any software you like. This is what Amazon does too. And Microsoft. And Google. And so many others.

    One difference is that Digital Ocean equips all its machines with solid state drives, or SSDs, data storage devices that are much faster than traditional hard disks. But other companies offer SSDs on at least some machines, and these high-speed devices are quickly becoming the norm in the data center world.

    The larger point is that Digital Ocean is a cloud service in the purest sense of the term.

  3. Tomi Engdahl says:

    NetSuite’s leap over to Azure cloud – a shot to the pills for AWS?
    The Microsoft love that dared to speak its name
    http://www.theregister.co.uk/2015/05/08/netsuite_aws_dump_for_azure/

    NetSuite, the ERP-as-a-service firm, is shifting its entire business – and therefore that of its customers – off Amazon’s Web Services (AWS) and onto Microsoft’s Azure.

    Zack Nelson’s firm plans to shift its 24,000 customers to Microsoft’s cloud as the basis for their hosted business by the end of 2015. Azure will become the platform not just of NetSuite’s customers, but also ISVs and developers building plug-ins to the cloud ERP service.

    NetSuite justified the move in terms of Azure’s powerful storage and compute capabilities and talked of Azure as a “world-class” cloud computing platform. But NetSuite isn’t stopping there: the conversion to Microsoft is wholesale.

    NetSuite is laying in the plumbing to make it easier for Microsoft customers to use its cloud in serious business environments. Specifically, there will be single sign-on to NetSuite from Windows apps using Azure Active Directory.

    Coming in the next months, too, is integration between Office 365 and NetSuite. The idea is to make it easier to shift data from NetSuite to Excel and Power BI for analysis. Oh, and NetSuite is also dumping its own existing collaboration set up for Office 365, too.

    It’s a fascinating development for many reasons.

    NetSuite started life as one of a gang of SaaS revolutionaries diametrically opposed to the old way of doing business, as epitomised by Redmond.

    Which begs the question, then, why did NetSuite jump?

    Founded in 1998, NetSuite is only slightly older than Salesforce, and with similar growth of between 30 and 40 per cent a quarter. It’s a growth rate comparable to the SaaS firms nipping at the heels of giants like Oracle and SAP, too.

    Cost is a factor when you’re a firm like NetSuite – a loss-maker, still, after 17 years – $100m and growing annually at 42 per cent year on year. You need to insulate yourself against the future.

    I’ll go out on a limb and bet, Microsoft is in sign-up mode: expect it to court others in the business SaaS space – Netsuite rival Workday, also on AWS today.

    NetSuite brings something else to Microsoft and Azure: a much needed PR and momentum black eye to AWS in what’s turning into a straight fight between these two for paying business customers. And it brings something more, too: raw customers and their data.

    We can also only guess at the complicated terms that have been agreed

  4. Tomi Engdahl says:

    Jason Verge / Data Center Knowledge:
    Microsoft Invests In Several Submarine Cables In Support Of Cloud Services
    http://www.datacenterknowledge.com/archives/2015/05/11/microsoft-invests-several-submarine-cables-support-cloud-services/

    Microsoft is investing in several submarine cables to connect data centers globally and in support of growing data network needs. The latest investments strengthen connections across both the Atlantic and Pacific oceans, connecting several countries.

    Microsoft continues to significantly invest in subsea and terrestrial dark fiber capacity by engaging in fiber relationships worldwide. Better connectivity helps Microsoft compete on cloud costs, as well as improves reliability, performance and resiliency worldwide. The investments also spur jobs and local economies.

  5. Tomi Engdahl says:

    Microsoft’s run Azure on Nano server since late 2013
    Redmond discovers the limits of cloud-first
    http://www.theregister.co.uk/2015/05/12/microsofts_run_azure_on_nano_server_since_late_2013/

    Microsoft’s only just announced its new Nano server, but has been using it in production on Azure since late 2013.

    So says D. Britton Johnston, CTO for Microsoft Worldwide Incubation, with whom The Reg chatted over the weekend.

    But Britt said that running the server on Azure has also taught Microsoft that what works in the cloud won’t work on-premises. In Azure bit barns, he explained, Microsoft just shifts workloads to another server in the case of any hardware glitch. While businesses will build some redundancy into their systems, Microsoft’s tweaked Nano server and the Cloud Platform System converged hardware rigs it announced last year to recognise that businesses can’t just throw hardware at a problem.

    Cloud-first, it seems, only gets you so far on-premises.

    The newly-announced Azure Stack – an on-premises version of Azure – also reflects on-premises constraints. Britt explained that Azure Stack will represent one way to do private and/or hybrid cloud in Microsoft’s new way of thinking. If you want to base your rig on Windows Server, Hyper-V, System Centre and Virtual Machine Manager, feel free to do so.

  6. Tomi Engdahl says:

    Low price, big power: Virtual Private Server picks for power nerds
    AWS not the boss of you? Try these VPS options…
    http://www.theregister.co.uk/2015/05/05/run_your_own_virtual_private_server/

    Running your own virtual private server (VPS) was once limited to either profitable side projects or those with money to burn. The relatively high monthly costs (often $40-$60 per month) made it too expensive for personal projects that didn’t generate income and more serious endeavors often used dedicated hardware, leaving VPS as a kind of no man’s land in the middle.

    That’s no longer the case. Today the VPS is both cheap enough for personal projects and reliable enough for serious ones. Competition has driven down prices dramatically, so much so, in fact, that spinning up your own VPS is often cheaper than renting web server space from a shared host. These days, you can get a VPS instance running for less than $5 per month. When it comes to price, VPS hosting is the new shared hosting.

    In some cases you get what you pay for, but in others, what you get is often much faster than you’d get for double the price in shared hosting. And with the VPS, you have an entire server at your command. Sure, it’s a virtual server: but in all but the most extreme use cases, it will perform nearly as well as dedicated hardware. In fact, many high-end VPS options sometimes outperform low-end dedicated hardware. The main advantage of dedicated hardware these days is that it offers total control.

    Aside from competition, the ever-lower prices in VPS hosting are also driven by the improvements in virtualisation software that have been coming in the last few years. There are three major types of virtualisation used behind the scenes of VPS hosting.

    Behind the scenes

    The first and oldest of the bunch is OpenVZ virtualisation. In theory, OpenVZ should be the fastest, but because OpenVZ makes it easy to oversell VPS space, a few bad apple providers out there have given OpenVZ something of a bad name. Most new VPS providers use KVM or Xen for virtualisation. In fact all the VPS hosts reviewed below use either KVM or Xen behind the scenes.

    As with shared hosting, it’s also possible for VPS hosts to seriously crowd server hardware to the point that your server slows to a crawl, which is why it pays to shop around (and possibly avoid OpenVZ-based hosts, though again, OpenVZ isn’t to blame; it’s the hosts that abuse it which create the problem).

    Perhaps the biggest name in VPS hosting, at least among developers and start-ups, is Linode

    Linode is not the cheapest of the bunch – the company lacks an equivalent to the $5 per month plans found elsewhere – but in my experience it’s the fastest and most reliable.
    The cheapest Linode offering gets you a VPS with 1GB RAM, 1 CPU core, 24GB SSD storage, 40Gbit network in, 125Mbit network out and a 2TB monthly transfer limit.
    Linode also offers some advanced features you won’t find in the others, like the ability to use slightly less mainstream Linux distros like Slackware, Gentoo or Arch.

    Digital Ocean likes to credit its success to its focus on developers, which probably doesn’t hurt, but having the lowest price didn’t hurt either. Digital Ocean was one of the first reliable VPS hosts to offer a $5-a-month plan.
    Digital Ocean also offers one of the nicest control panels you’ll find in this space. The company is also remarkably fast at setting up new instances
    It’s worth noting that, while Digital Ocean offers a number of pre-configured set-ups – including popular apps like a basic LAMP stack, WordPress, ownCloud, GitLab and Ruby on Rails – it does not allow you to install custom operating systems, as you can with Linode.

    Another relative newcomer to the VPS game is Vultr, which just sprang up last year as a kind of Digital Ocean clone.
    Vultr also has a few plans designed for VPS-as-storage. These plans ditch the SSD in favor of slower, but much larger, spinning hard drives, primarily intended for use as cheap, off-site back-ups.

    Your choice of VPS host will depend on what you want to do. For mission-critical client hosting I still rely primarily on Linode. For personal projects or running non-public applications like OpenVPN or ownCloud, I’ve been using both Vultr and Digital Ocean.
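    As a concrete, hedged illustration of how quickly one of these instances can be spun up programmatically, the sketch below creates a Digital Ocean “droplet” through the provider’s public v2 HTTP API using Python’s requests library; the API token, droplet name, region, size and image slugs are placeholders you would substitute with your own values.

        import requests

        API_TOKEN = "your-digitalocean-api-token"  # placeholder credential
        headers = {
            "Authorization": "Bearer " + API_TOKEN,
            "Content-Type": "application/json",
        }

        # Request a small droplet; the region, size and image slugs are illustrative.
        payload = {
            "name": "personal-project-01",
            "region": "ams2",
            "size": "512mb",
            "image": "ubuntu-14-04-x64",
        }

        resp = requests.post("https://api.digitalocean.com/v2/droplets",
                             json=payload, headers=headers)
        resp.raise_for_status()
        print(resp.json()["droplet"]["id"], "is being provisioned")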

  7. Tomi Engdahl says:

    Google Google GOOGLE! Cloud cloud CLOUD! These prices are INNNSAAAANE!
    Act now to get your cheap computing fix at the Chocolate Factory
    http://www.theregister.co.uk/2015/05/18/google_cloud_prices/

    Google is cutting prices on its Cloud hosted computing service.

    The Mountain View Ad server said it would be offering lower prices on cloud compute instances and virtual machines.

    Under the new pricing model, cloud compute instances will be cut anywhere from 5 per cent (high CPU instances) to 30 per cent (micro CPU instances)

    “With Google Cloud Platform’s customer-friendly pricing model, you’re not required to make a long-term commitment to a price, machine class, or region ahead of time.”

  8. Tomi Engdahl says:

    VMware beta testing database-as-a-service based on SQL Server
    ‘vCloud Air SQL’ will bring disaster recovery for your DB to Virtzilla’s vCloud Air
    http://www.theregister.co.uk/2015/05/22/vmware_beta_testing_databaseasaservice_based_on_sql_server/

    VMware is beta testing a database-as-a-service offering running SQL Server 2008 R2 & 2012.

    VMware documents sighted by The Register suggest the new service will be called “vCloud Air SQL” and will initially target test and dev scenarios, but also offer a warm failover option for those seeking a disaster recovery option for their on-premises databases.

    Virtzilla thinks that the service will also be able to handle non-critical workloads, but will in time grow to handle production apps.

  9. Tomi Engdahl says:

    Benjamin Wallace-Wells / New York Magazine:
    Four years after ‘Jeopardy’ win, IBM’s Watson program has seen applications in 75 industries including finance, healthcare, molecular biology

    http://nymag.com/daily/intelligencer/2015/05/jeopardy-robot-watson.html

    Watson was just 4 years old when it beat the best human contestants on Jeopardy! As it grows up and goes out into the world, the question becomes: How afraid of it should we be?

  10. Tomi Engdahl says:

    Death of a middleman: Cloud storage gateways – and their evolution
    Fading glory for Google, Microsoft and Amazon go-betweens?
    http://www.theregister.co.uk/2015/05/25/cloud_storage_gateways/

    For decades, we’ve survived quite nicely using on-premise storage. According to industry research, though, that may be changing as cloud-based storage emerges. A Tata Communications survey last year found that within ten years enterprises will store 58 per cent of their data in the cloud, compared with 28 per cent today.

    Whether or not the shift ends up being that drastic in practice, it does seem clear that companies are getting more interested in cloud storage. Google launched a cloud storage alternative to Amazon’s Glacier and Microsoft’s Azure Backup in March, for example. Clearly, it sees an opportunity to substitute cloud storage for tape.

    Whether you’re treating the cloud as an alternative to tape-based backup, or doing something more sophisticated with cloud storage, moving your data from an on-site array to the cloud isn’t always that simple.

    “Cloud storage works differently than your average network-storage array, in that it is object-oriented,” points out John Sloan, research director for infrastructure at analyst firm Info-Tech Research Group.

    Object-oriented storage is great for scaling out storage infrastructure, but it isn’t interoperable with the block storage typically seen on an on-site SAN. Another problem is that cloud-based storage is stateless, and typically accessed over web-friendly REST APIs. This is not the way that your legacy apps normally work.

    “Also, sending and retrieving data from the cloud introduces performance hurdles (latency and bandwidth limits),” said Sloan.

    Cloud storage gateways are designed to solve some of these issues. They’re typically devices, whether appliances or virtual machines, that sit on the local network and pretend to be a local storage array, while interfacing with cloud-based storage.
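    To make the object-versus-block distinction concrete, here is a minimal, hedged sketch (not from the article) of accessing object storage over its REST-style API with Python’s boto3 library against Amazon S3; the bucket name is a placeholder and credentials are assumed to come from the environment. There are no block devices or file handles involved: whole objects are written and read by key, which is exactly the mismatch a gateway has to paper over for legacy apps.

        import boto3

        s3 = boto3.client("s3")  # credentials and region resolved from the environment

        # Store a backup as a single object addressed by bucket + key.
        s3.put_object(
            Bucket="example-backup-bucket",
            Key="backups/db-2015-05-25.dump",
            Body=b"...backup payload...",
        )

        # Retrieve it the same way: one whole object per request, no partial block updates.
        obj = s3.get_object(Bucket="example-backup-bucket", Key="backups/db-2015-05-25.dump")
        data = obj["Body"].read()
        print(len(data), "bytes retrieved")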

    Case study

    One CTERA customer, JWT, has a large presence in the Middle East, where in some regions the Internet can be slow and unreliable. During the Arab Spring uprisings, the firm had to move staff from several offices. It used a cloud storage gateway with a connection to Amazon Web Services, which enabled it to access data from the offices that were shut down.

    In that way, the cloud storage gateway doubles as a backup solution, and allowed the company to save 63 per cent on its old backup system, which was tape-based. A conduit for backup and disaster recovery isn’t the only use case for a cloud storage gateway. Some vendors, like Nasuni, focus on scalability, offering what looks like a NAS but with infinite capacity thanks to cloud-based infrastructure.

    “If you’re using a cloud gateway and you’re depending on particular performance, you have a lot of variables in there that you don’t have control over,” said Kern. “Latency is a factor of environment and the risk. If you’re saying ‘I need the data in a certain window’, then you have to architect the network connectivity based on that.”

    Market evolution

    Unless a company extends beyond simple format translation into other value-added services such as local data storage and centralised storage management, the cloud service market category may end up being temporary, say some. “My point of view is that the cloud gateway is going to evolve from a distinct network storage device to a feature of mainstream storage devices,” argued Sloan, explaining that this is a repeating cycle in the storage business. “A decade or so ago, we had these emerging features for storage like deduplication.”

    “Today backup software has become smarter about targeting disk (and doing deduplication) and the leading VTL makers have been bought by mainstream storage vendors,” he said.

    We’re already starting to see cloud gateway-style features making their way into higher-end boxes such as Hitachi’s HNAS. The question is: how long will it be before more large storage vendors ingest these functions into their kit, perhaps building out back-end cloud storage services specifically to support them?

  11. Tomi Engdahl says:

    Private cloud has a serious image problem
    ‘AWS is to the era of cloud what Microsoft is to client/server’
    http://www.theregister.co.uk/2015/05/26/public_cloud_domination_game_aws_microsoft/

    As domination goes, it’s hard to surpass Amazon Web Services (AWS). According to recent Gartner data, AWS now offers 10X the utilised cloud capacity of the next 14 IaaS and PaaS providers… combined.

    For those paying attention, that’s double the dominance AWS established last year.

    And while public cloud spending remains a rounding error in the broader scheme of total IT spending, it’s growing at a 29.1 per cent compound annual growth rate, threatening to up-end legacy IT vendors.

    Which is just as it should be. Cloud spending is on a tear precisely because traditional vendors have failed to deliver the convenience that enterprise buyers increasingly demand.

    This leaves us with AWS assuming the role that Microsoft played for years. In the words of Hewlett Packard’s cloud chief Marten Mickos: “AWS is to the era of cloud what Microsoft was (and is) to the era of client/server.”

    According to Gartner, in 2014 the absolute growth of public cloud IaaS workloads exceeded the growth of on-premises workloads (of any type) for the first time. Including all forms of cloud (IaaS, PaaS, SaaS), IDC expects the cloud-computing market to more than double by 2018 to $127bn.

    The primary driver of all that growth? Convenience.

    As RedMonk analyst Stephen O’Grady posits: “Convenience trumps just about everything” when it comes to cloud adoption. With developers playing an increasingly important role in IT procurement decisions, appeasing them has become Job #1. Developer appeasement starts with open source and often ends with public cloud computing.

  12. Tomi Engdahl says:

    The impact of open cloud technologies on IT
    Column Open source developments are at the heart of new cloud technologies, says Jim Zemlin
    http://www.theinquirer.net/inquirer/opinion/2410317/the-impact-of-open-cloud-technologies-on-it

    NOWHERE are we seeing more open source and collaborative development than in cloud computing.

    From software-defined networking to application development, containers and more, hundreds of open cloud projects are emerging to accelerate the development of transformative technologies that deliver products and services on demand, at the click of a button.

    Open source and collaborative development have been proven time and time again to increase the rate of development and to result in better software.

    The impact of these cloud development practices on the IT industry is a much faster evolution of the enterprise than at any other time in the technology industry’s history.

    The industrial revolution took decades to mature with proprietary designs and pending patents for machinery, while the computer hardware era of the 1950s and 1960s didn’t materialise for the average business until the 1980s and 1990s.

    We know that today computers and information technologies double their capabilities every 12-18 months. Open source software and collaborative development are driving this cycle.

    The Linux kernel’s rate of development, for example, is unmatched. The latest data tells us that nearly eight changes are made to Linux every hour and that it’s being built faster than ever before.

    Projects like OpenStack, Cloud Foundry, CloudStack, Docker and others are using the same practices to move increasingly fast.

  13. Tomi Engdahl says:

    Equinix to buy Telecity Group for $3.6 billion, Interxion deal collapses
    http://www.reuters.com/article/2015/05/29/us-telecity-m-a-equinix-idUSKBN0OE0GS20150529

    U.S. data center company Equinix Inc (EQIX.O) said on Friday it had agreed to buy British peer Telecity Group (TCY.L) in a deal worth 2.35 billion pounds ($3.6 billion) which ends Telecity’s pursuit of smaller Dutch firm Interxion Holding NV (INXN.N).

    Underpinning deal activity in the sector is the players’ plans to tap growing demand across new geographies for “cloud” technology, whereby the data and processing for devices like smartphones is carried out on millions of remote servers

  14. Tomi Engdahl says:

    Rackspace’s ‘fanatical’ army drops in on rival clouds
    Listen up – there might be a new hope in this dreary post-OpenStack world
    http://www.theregister.co.uk/2015/05/12/people_are_rackspace_cloud/

    Rackspace is growing – just not fast enough for the Wall Street pack. Looks like it’s time to roll out the service troops to support rivals’ clouds.

    The firm’s stock was gang-battered on Monday, kicked brutally down the stairs by 13 per cent after management announced revenue growth of 14 per cent to $480m.

    Not bad – but not good enough for investors in a sector where the multiples for SaaS and IaaS providers range from 30 to 50 per cent.

    Worse – far, far worse – are its prospects for the future. Rackspace expects to grow between 1.5 and 2.5 per cent in the current, second quarter, which equates to revenues of between $487.4m and $492m.

    That’s not growth by cloud standards; that’s growth by the standards of Oracle and SAP, dated giants in the field of on-prem fighting to transition online.

    Compare Rackspace to the on-steroids growth of AWS; it’s tiny compared to Amazon but leader of the cloud pack – up 49 per cent to $1.57bn at last count.

    Clearly, AWS/Amazon and Rackspace are in different categories on cloud. Rackspace is older, having been founded in 1998, while AWS popped into life in 2006.

    Rackspace tried to regain the field by jointly leading OpenStack in 2010. Since then, however, the benefits of OpenStack have flowed to others: that is, consultants gluing the haystack together and customers eschewing public clouds for private. Neither discernibly helps Rackspace.

    Part of the secret of AWS’s growth has been its pricing – offering low up-front costs and promising to keep cutting prices, scooping up customers, in part, on price. Microsoft has joined battle with AWS on price, trying to match it.

    It’s a price race that Rackspace has declined to enter. Rackspace is trying to push a quality argument, saying it offers more than simple commodity infrastructure – and therefore it won’t be caught on price.

    Rackspace does have one last option.
    The opportunity is to “go leverage someone else’s product expertise and capital, and really differentiate on what makes us a specialist in our market.”
    Rackspace announced “fanatical support” for Microsoft’s Office 365 and Rackspace Managed Services for Office 365 last week.

  15. Tomi Engdahl says:

    Simon Zekaria / Wall Street Journal:
    US data center giant Equinix buys European rival Telecity for $3.6B

    U.S. Data Giant Equinix Buys Telecity
    U.S. firm wins scramble to buy U.K.-based peer in a cash and share offer worth $3.60 billion
    http://www.wsj.com/article_email/data-giant-equinix-agrees-deal-to-acquire-telecity-1432885737-lMyQjAxMTE1MjI5OTUyNzk5Wj

    U.S. data center giant Equinix Inc. on Friday agreed to buy U.K.-based peer Telecity Group PLC in a cash and share offer worth £2.35 billion ($3.60 billion), squashing an earlier tie-up between Telecity and another European player, and underscoring the industry’s consolidation on the continent amid swelling demand for data and digital services.

    Companies including technology and telecommunications firms are increasingly outsourcing data management and information technology handling to operators such as Equinix and Telecity, where data storage space is rented by rack, cage or room. These smaller data centers are positioned near urban areas where millions of consumers are located, and therefore favored by companies for proximity and network speed.

    “There is a very sizable market opportunity here in Europe that companies are looking to exploit,”

  16. Tomi Engdahl says:

    Cloud Foundry takes first steps into Azure
    Preview available now, beta real soon now, then … wake me up when it’s live, okay?
    http://www.theregister.co.uk/2015/06/01/cloud_foundry_takes_first_steps_into_azure/

    Microsoft’s promise to make Azure Cloud Foundry-friendly has become concrete. A bit.

    The company’s announced a “public preview of open source Cloud Foundry for Microsoft Azure”.

    Cloud Foundry’s availability on Azure is a medium-sized deal, as it means those who chose to develop on the newly open-sourced platform can now deploy their workloads to Azure in addition to several other clouds. For Microsoft, that means Azure will become more attractive to quite a few developers.

  17. Tomi Engdahl says:

    How to deliver apps with Azure RemoteApp
    http://www.theregister.co.uk/2015/06/02/microsoft_how_to_deliver_apps_with_azure_remoteapp/

    Step 1: Watch this webcast on how it’s done…

    Azure RemoteApp (ARA) is fresh out the door and Microsoft is keen to get this cloud service into your mitts.

    More about Azure RemoteApp

    Running an application on any device sounds like it should be a breeze, but – and it’s a big but – according to El Reg blogger and IT manager Adam Fowler, this “requires an all-encompassing solution from a reliable vendor, as well as rapidly deploying ways for all devices to access them”.

    Microsoft has a new horse called Azure RemoteApp that combines Windows applications with Remote Desktop Services on Microsoft Azure. Azure RemoteApp is cloud-based – which means scalable, per-user monthly OPEX pricing – and slots into Microsoft’s desktop and application virtualisation portfolio.

  18. Tomi Engdahl says:

    Walt Mossberg / Re/code:
    Google Photos Review: best photo backup-and-sync cloud service, but free only applies to images up to 16MP, and videos 1080p or less, more than enough for most — The New Google Photos: Free at Last, and Very Smart

    The New Google Photos: Free at Last, and Very Smart
    http://recode.net/2015/06/02/the-new-google-photos-free-at-last-and-very-smart/

    Last Thursday was liberation day for Google Photos, the search giant’s appealing service for storing pictures and videos in the cloud. It was uncoupled from Google’s widely ignored social network, Google+, where it had been effectively hidden. And it was upgraded with new features.

    Not only that, but Google gave Photos users free, unlimited storage for pictures and videos at the highest resolutions used by average smartphone owners. And it issued nearly identical versions of the shiny new standalone app across Android devices and Apple’s iPhones and iPads. There’s also a browser version for the Mac and Windows PCs.

    Once you’ve backed up your photo library to the service, all your photos and videos, including any new ones you take, are synced among all of these devices.

    I’ve been testing this new Google Photos for about a week, and despite a few drawbacks, I like it a lot. I consider it the best photo backup-and-sync cloud service I’ve tested — better than the leading competitors from Apple, Amazon, Dropbox and Microsoft.

    Google Photos was always good, but now it’s entirely outside of a social network.

    And when you do want to share them, you can totally ignore Google+ and easily and quickly post them to Facebook, Twitter and other networks, on both Android and iOS. You can email a link to a photo to someone, which works whether or not he or she has the Google Photos app.

    The coolest aspect of the new Google Photos is that once you click the search button — before you even type anything — the app presents you with groups of pictures organized by three categories: People, Places and Things.

    In the People section, Google collects all the photos containing faces it thinks are the same, without any work by you. It doesn’t identify these people, but just collects them for you for quick access. I found its guesses remarkably accurate.

    In the Places section, Google relies on geo-tagging where available. For older photos taken with cameras that lacked location tracking, it relies on known landmarks.

    But the Things section, while less accurate, is more impressive.

    My only real complaint with the People, Places and Things feature is that it’s not easy to find. You can only see it when you click the search button.

    As before, Google Photos automatically creates collages, animations, photo groups, panoramas and “stories” from photos it detects as being from the same place and time. You can choose whether to keep these in your library. As in the past, I generally found these pleasing and accurate.

    Downsides

    The new Google Photos does have a few flaws. The initial upload can be very slow, even on a fast Internet connection. For instance, it took nearly a week to upload my 36,000-image library from my iPhone.

    Also, the free-storage option only applies to pictures of 16 megapixels or less, and videos of 1080p or less. Larger items get compressed. These sizes are more than enough for most people, but photographers and hobbyists who want to store and sync larger, uncompressed items get just 15 gigabytes free, and that is shared with other Google services, like Gmail. For more storage, they have to pay from $2 to $200 a month for 100GB up to 20 terabytes of storage.

    Bottom Line

    The new Google Photos brings the company’s expertise in artificial intelligence, data mining and machine learning to bear on the task of storing, organizing and finding your photos. And that, combined with its cross-platform approach, makes it the best of breed.

  19. Tomi Engdahl says:

    HP CloudSystem 9.0 includes Helion platform for private clouds
    CloudSystem 9.0 has OpenStack and Eucalyptus integrated
    http://www.theinquirer.net/inquirer/news/2411415/hp-cloudsystem-90-includes-helion-platform-for-private-clouds

    HP IS TO UPDATE its CloudSystem cloud-in-a-box solution by including the full Helion OpenStack and Helion Development Platform to deliver a comprehensive private cloud for customers that can bridge the legacy and cloud native worlds.

    It also now includes the Eucalyptus stack, which enables workloads from AWS to run on CloudSystem, HP said.

    Due for general availability in September, CloudSystem 9.0 is the latest incarnation of HP’s ready-made private cloud platform for enterprise customers or service providers, which can be delivered as just software or included as a complete package with HP infrastructure hardware.

  20. Tomi Engdahl says:

    Piston to power Cisco Intercloud
    Borg goes shopping, again
    http://www.theregister.co.uk/2015/06/04/piston_to_power_cisco_intercloud/

    Cisco is either dipping another tentative toe in the open cloud business, or about to Borg a potential competitor, announcing that it wants to buy cloud operating system company Piston Cloud Technologies.

    Piston’s OpenStack expertise will now be wrapped into the warm embrace of Cisco’s Cloud Services operation, the group charged with building its global Intercloud network of clouds.

    The acquisition brings no mean cloud to the Borg: Piston was founded by Joshua McKenty, lead architect of the NASA cloud platform that was spun out to create the soon-to-be-acquired company.

    Piston lays claim to offering the first commercial OpenStack version, and focuses on automated deployment, security, and interoperability with other OpenStack public clouds.

  21. Tomi Engdahl says:

    Docker death blow to PaaS? The fat lady isn’t singing just yet folks
    Could they work together? Yeah, why not
    http://www.theregister.co.uk/2015/06/01/did_docker_kill_paas/

    Logically nestled just above Infrastructure-as-a-Service and just beneath the Software-as-a-Service applications it seeks to support, we find Platform-as-a-Service (PaaS).

    As you would hope from any notion of a platform, PaaS comes with all the operating system, middleware, storage and networking intelligence we would want — but all done for the cloud.

    However, as good as it sounds, critics say PaaS has failed to deliver in practical terms. PaaS offers a route to higher-level programming with built-in functions spanning everything from application instrumentation to a degree of versioning awareness. So what’s not to like?

    Does PaaS offer to reduce complexity via abstraction so much that it fails through lack of fine-grain controls? Is PaaS so inherently focused on trying to emulate virtualised hardware it comes off too heavy on resource usage? Is PaaS just too passé in the face of Docker?

    Proponents of Docker say this highly popularised (let’s not deny it) containerisation technology is not just a passing fad and that its lighter-weight approach to handling still-emerging microservices will ensure its longer-term dominance over PaaS.

    Dockerites (we’ll call them that) advocate Docker’s additional level of abstraction that allows it to share cloud-based operating systems, or more accurately, system singular.

    This light resource requirement means the Docker engine can sit on top of a single instance of Linux, rather than a whole guest operating system for each virtual machine, as seen in PaaS.

    There’s great efficiency here if we do things “right” – in other words, a well-tuned Docker container shipload can, in theory, run more application instances on the same amount of base cloud data centre hardware.
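    As a rough illustration of that lighter-weight model (an assumption-laden sketch, not from the article), the snippet below starts a container that shares the host’s single Linux kernel instead of booting a guest operating system; it assumes a local Docker daemon is running and the docker Python SDK is installed, and the image name and port mapping are arbitrary.

        import docker

        client = docker.from_env()  # connect to the local Docker daemon

        # Launch an nginx container: a process on the shared kernel, not a new guest OS.
        container = client.containers.run(
            "nginx:alpine",
            detach=True,
            ports={"80/tcp": 8080},  # expose container port 80 on host port 8080
        )
        print("started container", container.short_id)

        # Tear it down when finished.
        container.stop()
        container.remove()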

    Ah, but is it all good news? Docker has management tool issues according to naysayers. Plus, Docker is capable of practically breaking monitoring systems, so say the IT monitoring tools companies. But then they would say that wouldn’t they?

    The big question is: does Docker isolation granularity and resource consolidation utilisation come at the expense of management tool-ability? “Yes it might do, in some deployment scenarios,” is probably the most sensible answer here.

    “In the case of PaaS you don’t have much control over many of the operational aspects associated with managing your application, for example the way it handles scaling, high availability, performance, monitoring, logging, updates. There is also a much stronger dependency on the platform provider in the choice of language and stack,” said Nati Shalom, CTO and founder of cloud middleware company GigaSpaces.

    So does Docker effectively replace PaaS or does Docker just drive the development of a new kind of PaaS with more container empathy and greater application agnosticism?

    PaaS has been criticised for forcing an “opinionated architecture” down on the way cloud applications are packaged, deployed and managed. Surely we should just use Docker, but with an appropriate level of orchestration control too right? It’s not going to be that simple is it?

    “Yes, it can be that simple,” argues Brent Smithurst, vice president of product at cross-platform development tools company ActiveState.

    “Containers, including Docker, are an essential building block of PaaS. Also, PaaS offers additional benefits beyond application packaging and deployment, including service provisioning and binding, application monitoring and logging, automatic scaling, versioning and rollbacks, and provisioning across cloud availability zones,” he added.

    It seems clear that it would be unfair and unwise to rank the usage of Docker over PaaS (or vice versa) per se; the two are closely related but not mutually exclusive. In very basic terms, if you use Docker, you should be using PaaS too.

  22. Tomi Engdahl says:

    Rackspace sees future in cloud mgmt services – even on others’ setups
    CTO: We might have run your AWS or Azure setup
    http://www.theregister.co.uk/2015/06/04/rackspace_sees_future_in_cloud_management_services_even_perhaps_on_thirdparty_clouds/

    Rackspace Solve London: Speaking at the Rackspace Solve event in London, the company’s CTO John Engates said an increased focus on cloud management services rather than just infrastructure might extend to running applications on third-party clouds.

    Rackspace is already rolling out what it calls Cloud Office: Microsoft Office 365, or Google Apps for Work, purchased through and supported by Rackspace.

    “What about Rackspace services on top of other public clouds?” said Engates. “That’s something that we’re looking at. We’re talking to customers about what they would like us to do in that regard. There is no customer that is pure any more; every customer we talk to has a little bit of infrastructure on their own data centre floor, they have some in Rackspace cloud, they have some software as a service providers and they have some maybe at Amazon or Microsoft Azure.”

    Rackspace also has agreements with some customers where it manages private cloud setups for them in on-premises data centres.

    Rackspace is not retreating from hosting cloud services directly – the company is opening a new state-of-the-art data centre in Crawley, near London, for example – but the change in emphasis does reflect the difficulty of competing with the scale of the largest cloud providers as well as what Rackspace sees as an opportunity in management services.

    Companies such as Amazon, Google and Microsoft have more resources to invest in cloud infrastructure.

  23. Tomi Engdahl says:

    HMRC ditches Microsoft for Google, sends data offshore
    Tax doesn’t have to be taxing for US firm with inside track
    http://www.theregister.co.uk/2015/06/05/hmrc_is_going_google/

    Her Majesty’s Revenue and Customs (HMRC) is the first major department to move to Google Apps, part of an apparent loosening of Microsoft’s stranglehold on the government’s software services.

    The department will join the Cabinet Office and Department for Culture, Media and Sport (DCMS) in deploying the fluffy white stuff.

    The Cabinet Office currently has 2,500 users on Gmail. The government said in March the Google Apps suite best met the user needs for the Cabinet Office and DCMS.

    “Other solutions (e.g Microsoft 365) also scored highly, but the advanced collaboration and flexible working features of Google Apps were the best fit for our needs,” it said at the time.

    David Fitton, head of public sector sales for Google UK wrote on Linkedin:

    “The acceptance by HMRC that they can store OFFICIAL information offshore in Google data-centres represents a major change and endorsement of Google’s approach to managing sensitive information.

    If other departments follow suit, this could spell the beginning of the end for Microsoft’s monopoly on public sector software services, but rivals shouldn’t hold their breath.

    Last week the UK government decided it could do without extended support for Windows XP.
    However, not all departments and agencies have yet moved from the out-of-support OS.

  24. Tomi Engdahl says:

    Puppet Enterprise 3.8 is Now Available
    https://puppetlabs.com/blog/puppet-enterprise-3.8-now-available?ls=content-syndication&ccn=Techmeme-20150520&cid=701G0000000F68e

    Puppet Enterprise 3.8 is here! The release includes powerful new provisioning capabilities for Docker containers, AWS infrastructure and bare-metal environments. In addition, we’ve introduced Puppet Code Manager, a new app for Puppet Enterprise to accelerate deployment of infrastructure changes in a more programmatic and testable fashion.

    Puppet Enterprise 3.8 introduces major enhancements to the Puppet Node Manager app first introduced with version 3.7. New capabilities help significantly accelerate provisioning time across heterogeneous environments and get you to Day 2 operations faster.

    Containers

    A new Puppet Supported module for Docker helps you easily launch and manage Docker containers within your Puppet-managed infrastructure. These new capabilities help avoid configuration issues with the Docker daemon running containers, so teams can spend less time troubleshooting configuration issues, and more time helping to develop and deploy great applications.
    Cloud environments

    Our new Amazon Web Services module makes it easy to provision, configure and manage AWS resources, including EC2, Elastic Load Balancing, Auto Scaling, Virtual Private Cloud, Security Groups and Route53.
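    For readers unfamiliar with what that AWS module automates, here is a minimal, hedged sketch of the kind of provisioning call underneath, made directly with Python’s boto3 rather than the Puppet DSL; the AMI ID, region and instance type are placeholders, and the module wraps this sort of operation declaratively rather than imperatively.

        import boto3

        ec2 = boto3.resource("ec2", region_name="us-east-1")

        # Launch a single small instance; the AMI ID is a placeholder.
        instances = ec2.create_instances(
            ImageId="ami-xxxxxxxx",
            InstanceType="t2.micro",
            MinCount=1,
            MaxCount=1,
        )
        print("launched", instances[0].id)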

  25. Tomi Engdahl says:

    Intercloud extends management to AWS VMs
    Cloud to get app-like with partner marketplace
    http://www.theregister.co.uk/2015/06/11/intercloud_extends_management_to_aws_vms/

    Cisco’s decided to Borg Microsoft and Amazon services into its Intercloud, in the latest round of Intercloud Fabric software releases.

    On-boarding for AWS VMs lets companies identify machines already on the Amazon cloud, and put them under Intercloud Fabric control; and Intercloud’s zone-based firewall has wrapped its arms around Microsoft Azure.

    Cisco also announced a club of 35 partners as part of the Cisco Intercloud Marketplace, which the company says will simplify hybrid cloud infrastructure.

    There are also additional security measures, increased manageability and support for additional hypervisors.

    The company will offer its hybrid cloud services via the Cisco Intercloud Marketplace, slated for introduction in the third quarter of this year. The Marketplace is intended as a global storefront for Intercloud-based applications and cloud services from Cisco and its partners.

  26. Tomi Engdahl says:

    Hybrid cloud: Define what it is, then decide what you want
    Choose a provider carefully, think about what you need
    http://www.theregister.co.uk/2015/06/03/juggling_different_clouds_hybrid/

    First, there was software as a service, infrastructure and then platform as a service, then public and private cloud, and today hybrid cloud — but is the latter vendor-driven cloud washing or something more?

    Lending credence to the latter is the fact EMC last week spent a juicy $1.2bn buying Virtustream to increase its presence in hybrid cloud. In a field vulnerable to conflation and hype, though, what exactly does hybrid cloud mean, and how do you do “it”?

    Hybrid cloud, in my world at any rate, is where you have your own on-premise services and cloud-based services and they integrate with each other in some way.

    If you have some on-prem stuff and some cloud stuff and they do different things, then frankly you’re not trying. They’ve got to work together for it to count.

    Come to think of it, though, if you have a cloud setup where you use two different providers for an integration operation, I’d be happy to let you think of that as “hybrid” too – you’re having to do integration between a cloud provider and somewhere else, after all.

    Which components exist where?

    The on-premise kit is pretty obvious: you’ll have applications sitting on physical servers, with the physical servers connected to some storage. If you’ve got any sense you’ll have an additional layer of virtual servers between the apps and the physical servers, because unless what you do is very niche indeed this is the best way to get the most from your physical kit.

    In the cloud you only have up to three of these layers: the obvious one is the applications, but these will either: (a) be presented to you in a SaaS offering (so you see nothing of the servers they sit on); or (b) sit upon virtual server and storage technology over which you have management control.

    In a SaaS offering there’s not really much scope for hybrid operation – it’s where you control the underlying infrastructure that you can do funky hybrid stuff.

    At which level do I integrate?

    The key question is: what do I want to do with hybrid cloud? The three I tend mostly to come across are:

    On-premise systems with cloud for disaster recovery (“DR”)
    Public-facing services back-ended by on-premise systems
    Cloud-based, heavy duty processing reporting back to on-premise kit

    Each of these has its own nuances, so let’s look at the options in turn.

  27. Tomi Engdahl says:

    You need one cloud computing provider – and this is why
    Telstra sets out stall for procurement consolidation
    http://www.theregister.co.uk/2015/06/12/cloud_service_provider_procurement_strategy/

    Around the turn of the new century some of the larger businesses started to look at the IT systems they had in place, and started to consider if they were strictly core to the business. Did they really need a data centre? While a brave few started to outsource some of their equipment, most decided that yes they did need a data centre.

    The equation was simple: either choose a fast internal system built on a corporate data centre, where costs reduced as the data centre scaled or outsource to a third-party at roughly the same cost. But you’d still need your IT teams, it wouldn’t scale, and you had to add in the cost of a very expensive data connection.

    Then along came VMware, a surplus of fibre from the dotcom crash and the equation changed forever. Virtualisation was the catalyst for change; it allowed managed hosting on a scale previously unknown, at a price that made it a no-brainer. Plus the availability of cheap fibre from the dotcom bubble roll-out and the availability of wavelength-division multiplexing meant the wholesale price of data traffic collapsed.

    The equation was now one-sided, data centres were expensive, and hosting was cheap. Businesses virtualised all of their servers and pretty soon their data-halls full of standalone servers turned into a few blade racks in a cupboard, which could work just as well if they were in Slough, Sunderland or Silicon Valley.

    Gradually businesses moved their servers and data to outsourced hosts, and started to shrink their IT teams to a few people, managing a few hosting relationships and the remains of the servers they couldn’t virtualise. Then as the man-years of expertise in maintaining IT systems disappeared to the job centre businesses went one step further and gradually moved from remote hosting to remote managed hosting.

    That’s how it would have stayed … However, in parallel with the build-up of managed hosting and virtually free data traffic, entrepreneurs in Silicon Valley started to develop new cloud services.

    After much harrumphing over security, IT departments started to stick their toe in the water and try cloud out.

    The result of this almost viral cloud take-up is a cloud-mageddon headache for the IT departments. According to Q1 2015 figures from Skyhigh Networks Cloud Adoption and Risk Report the average business has over 900 cloud applications.

    Why can’t there be only one?

    A September 2014 survey of 675 IT decision makers from multinational organisations commissioned by Telstra found that the majority of businesses have purchased offerings from multiple vendors, resulting in a complex cloud environment that may be hindering their agility and speed to market.

    When asked if they were happy with the multiple-vendors the majority (73 per cent) of those polled understandably said ‘no’ and that actually what they would prefer was a single committed cloud partner, either to manage the relationships with others or as a single provider.

    But here’s the problem. If they choose one supplier they’re locked-in, and they may not have the best solution to the problem; if they maintain the current approach there’s just more cloud-mageddon.

    “Businesses need a single solution that can provide cloud consultancy, a service management wrap, the network – and the management of that network – and services delivered over the top. These four areas are where we see our customers needing help and it’s where we feel we certainly have a big role to play,” Bishop concluded.

    Hard data on the top cloud services and their risks
    https://www.skyhighnetworks.com/cloud-report/

  28. Tomi Engdahl says:

    Hyperconvergence isn’t about hardware: it’s server-makers becoming software companies
    Clouds sell compute by the glass. On-premises kitmakers want to sell wine-as-a-service
    http://www.theregister.co.uk/2015/06/15/hypercovergence_isnt_about_hardware_its_servermakers_becoming_software_companies/

    Public cloud is supposed to be a mortal threat to enterprise hardware vendors, whose wares look clunky and costly compared to a servers-for-an-hour-for-cents cloud and the threat looks scary … until you actually use a public cloud for a while.

    The Reg increasingly hears that the cost of operating in a public cloud quickly adds up to sums that make on-premises kit look decently-priced. Communications costs to and from public clouds can quickly reach the same level as compute costs, which rather dents the servers-for-cents story. Compute itself isn’t cheap, either, once you do lots of it. Nor is storage. Then there’s the other stuff needed to run real workloads, like firewalls, load balancers, WAN optimisation and so on. They all cost cents-per-hour, too and once you’re not just doing test and dev in the public cloud you need them all. Those cents-per-hour add up.

    So while public cloud does have a very low sticker price, once you work at a decent scale operational costs soon get pretty close to on-premises levels.

    The gap comes from the fact that public clouds do excuse you from cabling, powering and cooling kit, fixing stuck fans, swapping out dead disks and a zillion other pieces of dull meatspace sysadminnery. Cloud also has massive redundancy that on-premises kit can’t match and elasticity that is tough to replicate in your own bit barn.

    Roll out the barrel

    The folks at Nutanix have come up with an interesting analogy to describe this situation. They liken the public cloud to a restaurant or bar where you can buy wine by the glass. Even if you really like the wine, going out for a glass every night makes no sense. It’s more sensible to buy a case you can drink at home. You’ll get the same wine, at a lower price, with more convenience and comfort.

    Nutanix’s argument is based on the company’s belief that its kit does most of the elasticity and redundancy of a public cloud. It’s far from alone in the belief it delivers a cloud-like experience – all enterprise hardware makers are trying to deliver a simpler, more reliable experience with easy scalability – so the wine-by-the-glass/wine-by-the-case analogy works for plenty of vendors.

  29. Tomi Engdahl says:

    Devs to pour Java into Amazon’s cloud after AWS Lambda update
    Event-driven model not just for JavaScript anymore
    http://www.theregister.co.uk/2015/06/16/aws_lambda_java_support/

    Amazon Web Services has expanded its AWS Lambda programming model to support functions written in Java, the cloud kingpin said on Monday.

    Lambda, which allows developers to run event-driven code directly on Amazon’s cloud without managing any application infrastructure, launched in November 2014 and initially only supported code written in JavaScript and Node.js.

    With Monday’s update, developers can now write their event handlers in Java 8, provided they code them in a stateless style that doesn’t make any assumptions about the underlying infrastructure.

    “We have had many requests for this and the team is thrilled to be able to respond,” AWS chief evangelist Jeff Barr said in a blog post.

    AWS Lambda functions can be invoked automatically whenever a variety of events take place in Amazon’s cloud. So, for example, you could set a function to be triggered whenever a certain Amazon Simple Storage Service (S3) storage bucket is modified, or to watch for events from the Kinesis data-processing service.

    Lambda functions can also be used as back ends for mobile applications that store data on the AWS cloud.

    Lambda functions written in Java can use any Java 8 features and can even invoke Java libraries. The handler code and any necessary JAR files are bundled up into a JAR or ZIP file for deployment on AWS.

    To make life easier for developers, Amazon has released an AWS Toolkit plugin for Eclipse that takes care of packaging and uploading handlers.
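    The article covers Java handlers, but the event-driven shape is similar in any runtime. Purely as an illustration (sketched in Python for consistency with the other examples in this thread, not taken from the article), a hedged handler reacting to S3 object events might look roughly like this; the field names follow the S3 event record structure.

        def handler(event, context):
            """Invoked by AWS Lambda each time the configured S3 bucket emits an event."""
            records = event.get("Records", [])
            for record in records:
                bucket = record["s3"]["bucket"]["name"]
                key = record["s3"]["object"]["key"]
                # Stateless processing only: no assumptions about the underlying host.
                print("object %s/%s changed" % (bucket, key))
            return {"processed": len(records)}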

  30. Tomi Engdahl says:

    Public, Private, Hybrid? Choosing the Right Cloud Mix
    http://www.cio.com/article/2936939/hybrid-cloud/public-private-hybrid-choosing-the-right-cloud-mix.html

    Each model offers its own advantages – and tradeoffs. Here’s what you need to consider.

    IDG’s annual cloud survey, which polled more than 1,600 IT managers, found that 39 percent of organizations are using a mix of cloud models. About 60 percent have at least some enterprise applications hosted in a public cloud environment, while nearly the same proportion (57 percent) said they were using a private cloud. About one in five are using a hybrid cloud.

    Despite differing safety, control and cost considerations between public and private cloud models, the growth in adoption of the two models is almost identical, according to the IDG survey.

    A public cloud is a great option if you are looking to offload some of the costs and management involved in running standardized applications and workloads such as email, collaboration and communications, CRM and web applications. In some cases, it is also a good option for application development and testing purposes. Many companies have also begun moving big data workloads to the public cloud because of the enormous scalability benefits.

    But there are some major caveats when using a public cloud. Your applications are hosted on an infrastructure that is shared by many other organizations.

    A private cloud model addresses many of these concerns. Because your applications and workloads are hosted on a dedicated infrastructure you have much more control over it. In many cases, a private cloud is enabled on existing enterprise hardware and software using virtualization technologies.

    Many companies use a private cloud model for proprietary workloads such as ERP, business analytics and HR applications

    Best of Both Worlds

    A hybrid approach combines the best of both cloud worlds by allowing organizations to tap the scalability and cost efficiencies of a public cloud while keeping core applications or data center components under enterprise control.

    Reply
  31. Tomi Engdahl says:

    Jordan Novet / VentureBeat:
    Google launches Container Engine in beta, makes Container Registry generally available
    http://venturebeat.com/2015/06/22/google-launches-container-engine-in-beta-makes-container-registry-generally-available/

    Google today broadened the availability of a couple of its cloud services for working with applications packaged up in containers. The Google Container Engine for deploying and managing containers on Google’s cloud infrastructure, until now available in alpha, is now in beta. And the Google Container Registry for privately storing Docker container images, previously in beta, is now generally available.

    Google has made a few tweaks to Container Engine, which relies on the Google-led Kubernetes open-source container management software, which can deploy containers onto multiple public clouds. For one thing, now Google will only update the version of Kubernetes running inside of Container Engine when you run a command. And you can turn on Google Cloud Logging to track the activity of a cluster “with a single checkbox,” Google product manager Eric Han wrote in a blog post on the news.

    Google has repeatedly pointed out that for years it has run internal applications inside containers, rather than more traditional virtual machines. And while Kubernetes runs just fine on any infrastructure, Google cloud executive Craig McLuckie last year told VentureBeat that “it works extremely well on the Google Cloud Platform.”

    The big picture here is that Google aspires to become even more of a player in the public cloud market than it is now. Solid tools for storing images and deploying apps in containers can help Google in this regard.

    Meanwhile, other leading cloud providers, such as Microsoft, IBM, and Amazon Web Services, have been executing on their container strategies too.

    Reply
  32. Tomi Engdahl says:

    Review: Microsoft Azure beats Amazon and Google for mobile development
    http://www.infoworld.com/article/2890167/application-development/review-microsoft-azure-beats-amazon-and-google-for-mobile-development.html

    Easier than Amazon’s Mobile SDK and more complete than Google’s Firebase, Azure Mobile Services has more of what developers need

    Reply
  33. Tomi Engdahl says:

    Alex Wilhelm / TechCrunch:
    Box to offer integration with IBM content management and security tools as part of a wide-ranging cloud partnership — Box And IBM Ink Wide-Ranging Cloud Partnership — IBM is going Levie. — This evening, Box and IBM announced a partnership that will see their technologies integrated, and their cloud products commingled.

    Box And IBM Ink Wide-Ranging Cloud Partnership
    http://techcrunch.com/2015/06/23/box-and-ibm-ink-wide-ranging-cloud-partnership/

    IBM is going Levie.

    This evening, Box and IBM announced a partnership that will see their technologies integrated, and their cloud products commingled. As part of the arrangement, Box will also offer its customers the ability to store their data on IBM’s cloud, which will have — I checked with the firm — 46 data centers around the world by the end of the year.

    The deal has a number of facets, including the integration of Box with IBM’s content management technology, the application of IBM data tools to information stored by Box, use of IBM security tech by Box, and a set of promised mobile applications building on the tech of both firms.

    Reply
  34. Tomi Engdahl says:

    Caroline O’Donovan / BuzzFeed:
    Amazon to double the commission it charges task requesters per gig on Mechanical Turk from 10% to 20% and charge an additional 20% on larger batch jobs

    Changes To Amazon’s Mechanical Turk Platform Could Cost Workers
    http://www.buzzfeed.com/carolineodonovan/changes-to-amazons-mechanical-turk-platform-could-cost-worke#.lpxZL2Rr1J

    Amazon will start taking a bigger cut of payouts on Mechanical Turk, its digital crowdwork platform — a move that might drive out both the workers and the people who pay them.

    Reply
  36. Tomi Engdahl says:

    Bank of England CIO: ‘Beware of the cloud, beware of vendors’
    Old Lady grumbles about new thingy
    http://www.theregister.co.uk/2015/06/25/bank_of_england_no_public_cloud/

    The Bank of England is loosening up on IT delivery and recruitment, but not its resistance to public cloud.

    John Finch, CIO of the UK’s central bank since September 2013, Wednesday ruled out the use of any public cloud by the bank for the foreseeable future.

    Cloud has however crept into the Bank’s IT margins, where it’s been working with firms on the new plastic bank notes that debuted in March from Clydesdale Bank.

    “One area where it’s changed, is we have to share details on design of the new bank note with people who make the machines that process them — we have built a hybrid private cloud for them to connect to, so at the margins of what we do,” he conceded.

    However, speaking at the Cloud World Forum in London, Finch ruled out any role for cloud in the Bank’s core IT systems and infrastructure, reiterating an announcement first made in 2014.

    But Finch reckons that if the reason for going to the cloud is to save money, you shouldn’t go to the cloud. “Beware of the cloud and beware of the vendors,” Finch warned. “All those messages I gave a year ago, I passionately believe.”

    “Make sure you understand where your data resides, make sure you understand the details of your contract, make sure you understand the security, and make sure you stay in control,” he said.

    The bank’s IT hiring policy is also striving for greater diversity – by age, sex and ethnicity – incorporating new graduate recruitment and school-leaver apprenticeship programs. In the past, he joked, to get a Bank of England job you’d need to have a first from Oxford or Cambridge, or to have been very bright at Imperial College London, and male.

    “Particularly in technology we want to recruit people who we wouldn’t normally recruit – specky, geeky kids hacking in their bedroom,” he said. The philosophy is that fresh thinking and ideas will flow from diversity and drive disruptive change at the Bank.

    Reply
  37. Tomi Engdahl says:

    FeedHenry now Red Hat Mobile App Platform, gets OpenShift cloud integration
    Company makes play for mobile app devs
    http://www.theregister.co.uk/2015/06/25/feedhenry_becomes_red_hat_mobile_app_platform_gets_openshift_cloud_integration/

    Red Hat has launched its Mobile Application Platform, at the company’s Summit under way in Boston.

    The Mobile Application Platform consists of tools and templates for building mobile applications combined with back-end services to handle features including authentication, data, and integration with existing systems. It is based on FeedHenry, which Red Hat acquired in October 2014.

    Supported mobile platforms include iOS, Android, Windows Phone and Apache Cordova.

    Red Hat supports the native SDKs for these platforms as well as popular tools from companies including Xamarin, Sencha and Appcelerator. A hosted build farm provides builds for iOS, Android and Windows Phone. The server platform is based on node.js.

    Red Hat has added three things to the platform, on top of what it acquired from FeedHenry. The first is integration with OpenShift, Red Hat’s public cloud application platform. OpenShift customers now have immediate access to the Mobile Application Platform.

    Second, there are new node.js adapters to integrate with JBoss, Red Hat’s Java middleware product.

    Third, the company has added a push notification service from Aerogear, another Red Hat project.

    “We charge by utilisation of the cloud so it can be as little as $1,000 a month to as much as $30,000 or more.”

    Reply
  38. Tomi Engdahl says:

    Jordan Novet / VentureBeat:
    Google has quietly launched a GitHub competitor, Cloud Source Repositories — Google hasn’t announced it yet, but the company earlier this year started offering free beta access to Cloud Source Repositories, a new service for storing and editing code on the ever-expanding Google Cloud Platform.

    Google has quietly launched a GitHub competitor, Cloud Source Repositories
    http://venturebeat.com/2015/06/24/google-has-quietly-launched-a-github-competitor-source-code-repositories/

    Google hasn’t announced it yet, but the company earlier this year started offering free beta access to Cloud Source Repositories, a new service for storing and editing code on the ever-expanding Google Cloud Platform.

    It won’t be easy for Google to quickly steal business from source code repository hosting companies like GitHub and Atlassian (with Bitbucket). And sure enough, Google is taking a gradual approach with the new service: It can serve as a “remote” for Git repositories sitting elsewhere on the Internet or locally.

    Reply
  39. Tomi Engdahl says:

    Put Your Enterprise Financial Data In the Cloud? Sure, Why Not
    http://it.slashdot.org/story/15/06/26/0052217/put-your-enterprise-financial-data-in-the-cloud-sure-why-not

    For many, the idea of storing sensitive financial and other data in the cloud seems insane, especially considering the regulatory aspects that mandate how that data is protected. But more and more organizations are doing so as cloud providers start presenting offerings that fulfill regulatory needs.

    Enterprise financials in the cloud? Why the fog of skepticism may be lifting
    http://www.itworld.com/article/2939472/enterprise-software/enterprise-financials-in-the-cloud-why-the-fog-of-skepticism-may-be-lifting.html

    Spreadsheets and email documents are a bigger threat than the cloud, says Forrester Research’s Liz Herbert

    San Diego — The corporate accounting department is the last place that I expected to see cloud computing. Thoughts of “fiduciary responsibility” and “Sarbanes-Oxley” and “HIPAA” and “PCI compliance” floated through my mind as Insight Software talked up its new cloud-based offerings at its HubbleUp 2015 user conference, held here in mid-June. Attendees, largely from financial departments at large companies, lapped it up.

    Insight Software is a well-known maker of reporting, analytics and planning software that integrates tightly with big ERP (enterprise resource planning) financial packages such as JD Edwards, Oracle eBusiness Suite and SAP. Traditionally, ERP packages and add-ons like Insight’s tools run entirely on-premises. The latest version of Insight’s software, rebranded as Hubble, is also available as a cloud-based SaaS offering.

    As a business owner myself, this is scary. My financials? My budgets, my projections, my variance reports, my P&L statements, in the cloud? Exposed? If something bad happens, who is going to own the decision to place this critical data outside the firewall? Who will explain the incident to the shareholders, the Securities and Exchange Commission, the Wall Street Journal?

    Turns out that while some organizations are perhaps moving slowly to put critical information like financials into the cloud, when you get outside the technology-analyst fog of skepticism, there’s a lot more optimism than expected.

    Herbert and others talking about financials-in-the-cloud at HubbleUp made some other strong arguments in favor of trusting the security model:

    Large cloud vendors, such as Amazon, Google, Microsoft Azure and Rackspace, are focused full-time on keeping up with the latest standards, regulatory issues, and compliance concerns. Many enterprise data center managers are not. (Robertson told me that Insight uses Amazon Web Services to host the Hubble cloud offering.)

    Much of the hardware, operating systems, and applications in enterprise data centers are common off-the-shelf (COTS) systems, and hackers are working overtime to break them. Most cloud providers are running custom platforms, which hackers have less access to, and thus have fewer opportunities to discover vulnerabilities.

    When a vulnerability is found in a cloud system, the service provider can patch it immediately. When a vulnerability is found in COTS systems installed in the enterprise data center, the provider must develop a patch, distribute the patch to clients, and then clients must test then install that patch correctly. That’s a much slower process, with no guarantee that all data centers will even install the patch right away.

    Despite those technical concerns about hackers, the biggest worry among the financial executives attending HubbleUp was about the proliferation of thousands of spreadsheets across their organizations.

    Reply
  41. Tomi Engdahl says:

    Secure Server Deployments in Hostile Territory
    http://www.linuxjournal.com/content/secure-server-deployments-hostile-territory

    Would you change what you said on the phone, if you knew someone malicious was listening? Whether or not you view the NSA as malicious, I imagine that after reading the NSA coverage on Linux Journal, some of you found yourselves modifying your behavior. The same thing happened to me when I started deploying servers into a public cloud (EC2 in my case).

    Although I always have tried to build secure environments, EC2 presents a number of additional challenges both to your fault-tolerance systems and your overall security. Deploying a server on EC2 is like dropping it out of a helicopter behind enemy lines without so much as an IP address.

    In this article, I discuss some of the techniques I use to secure servers when they are in hostile territory. Although some of these techniques are specific to EC2, most are adaptable to just about any environment.

    So, what makes EC2 so hostile anyway? When you secure servers in a traditional environment, you may find yourself operating under a few assumptions. First, you likely assume that the external network is the main threat and that your internal network is pretty safe. You also typically assume that you control the server and network hardware, and if you use virtualization, the hypervisor as well. If you use virtualization, you probably also assume that other companies aren’t sharing your hardware, and you probably never would think it is possible that a malicious user might share your virtualization platform with you.

    In EC2, all of those assumptions are false. The internal and external network should be treated as potentially hostile.

    EC2 Security Groups can be thought of in some ways like a VLAN in a traditional network. With Security Groups, you can create firewall settings to block incoming traffic to specific ports for all servers that are members of a specific group

    I generally use Security Groups like most people might use VLANs, only with some changes. Every group of servers that shares a common purpose has its own Security Group.

    For instance, I might use changes to the default Security Group to allow all servers to talk to my Puppetmaster server on its custom port. As another example, I use a VPN to access my cloud network, and that VPN is granted access to SSH into all of the servers in my environment.
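
    As a concrete, hedged illustration of that pattern, the sketch below uses the AWS SDK for Java (the v1-era API) to create a per-role Security Group and open SSH only to a VPN’s group. The group names, the VPN group ID and the port choice are assumptions for illustration, not details from the article.

    // Hedged sketch (AWS SDK for Java, v1-era API): one Security Group per server
    // role, with SSH reachable only from a VPN's Security Group. The names and the
    // "sg-vpn0000" group ID are hypothetical.
    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.AmazonEC2Client;
    import com.amazonaws.services.ec2.model.AuthorizeSecurityGroupIngressRequest;
    import com.amazonaws.services.ec2.model.CreateSecurityGroupRequest;
    import com.amazonaws.services.ec2.model.IpPermission;
    import com.amazonaws.services.ec2.model.UserIdGroupPair;

    public class SecurityGroupSketch {
        public static void main(String[] args) {
            AmazonEC2 ec2 = new AmazonEC2Client(); // credentials and region from the default chain

            // One group per role, mirroring the "VLAN-like" usage described above.
            String appGroupId = ec2.createSecurityGroup(new CreateSecurityGroupRequest()
                    .withGroupName("app-servers")
                    .withDescription("Application tier")).getGroupId();

            // Allow SSH (22) only from the VPN's Security Group; nothing else inbound.
            ec2.authorizeSecurityGroupIngress(new AuthorizeSecurityGroupIngressRequest()
                    .withGroupId(appGroupId)
                    .withIpPermissions(new IpPermission()
                            .withIpProtocol("tcp").withFromPort(22).withToPort(22)
                            .withUserIdGroupPairs(new UserIdGroupPair()
                                    .withGroupId("sg-vpn0000")))); // hypothetical VPN group
        }
    }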

    Finally, I never store a secret in my userdata file. Often when you spawn a server in EC2, you provide the server with a userdata file. A number of AMIs (Amazon Machine Images—the OS install image you choose) are configured to execute the userdata script.

    Instead, I use the userdata script only to configure my configuration management system (Puppet) and from that point on let it take over the configuration of the system.

    Handling Secrets

    It’s incredibly important to think about how you manage secrets in a cloud environment beyond just the userdata script. The fact is, despite your best efforts, you still often will need to store a private key or password in plain text somewhere on the system. As I mentioned, I use Puppet for configuration management of my systems. I store all of my Puppet configuration within Git to keep track of changes and provide an audit trail if I ever need it. Having all of your configuration in Git is a great practice, but the first security practice I recommend with respect to secrets is to avoid storing any plain-text secrets in your configuration management system. Whenever possible, I try to generate secrets on the hosts that need them, so that means instead of pushing up a GPG or SSH key pair to a server, I use my configuration management system to generate one on the host itself.
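
    A minimal sketch of that generate-secrets-on-the-host idea, using only the JDK, follows; a real deployment would more likely shell out to ssh-keygen or gpg from Puppet, so treat the class below purely as an illustration of the principle, and note that the output path is made up.

    // Hedged sketch: create a key pair locally on the host at bootstrap time instead
    // of shipping a private key through configuration management. Real setups would
    // typically call ssh-keygen or gpg; this only shows the principle with the JDK.
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.NoSuchAlgorithmException;

    public class HostLocalKeySketch {
        public static void main(String[] args) throws NoSuchAlgorithmException, IOException {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(4096);
            KeyPair pair = gen.generateKeyPair();

            // The private key never leaves this host; only the public half would be
            // shared (for example, registered with the Puppetmaster or pushed to an
            // authorized_keys file). The path below is hypothetical.
            try (FileOutputStream out = new FileOutputStream("/etc/example/host.key")) {
                out.write(pair.getPrivate().getEncoded()); // DER-encoded PKCS#8
            }
            System.out.println("Generated " + pair.getPublic().getAlgorithm() + " key pair on host");
        }
    }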

    Reply
  42. Tomi Engdahl says:

    Amazon Web Services expands in India, plans to open new infrastructure hub in 2016
    http://www.geekwire.com/2015/amazon-web-services-expands-in-india-plans-to-open-new-infrastructure-hub-in-2016/

    Amazon Web Services is placing a big bet in India, announcing today that it plans to open a new “infrastructure region” in the world’s second most populous country sometime in 2016.

    “Tens of thousands of customers in India are using AWS from one of AWS’s eleven global infrastructure regions outside of India,” said Andy Jassy, senior vice president of AWS, in a press release.

    “We see huge potential in the Indian economy and for the growth of e-commerce in India,” Bezos said at the time. “With this additional investment of US $2 billion, our team can continue to think big, innovate, and raise the bar for customers in India.”

    AWS already boasts a number of customers in India

    Reply
  43. Tomi Engdahl says:

    Cloud-Based Backup and Disaster Recovery Is a Win-Win for Business
    http://www.cio.com/article/2942073/disaster-recovery/cloud-based-backup-and-disaster-recovery-is-a-win-win-for-business.html

    New models present a compelling alternative for business continuity

    The cloud is pretty much a win-win when it comes to business continuity. First, a cloud service structurally is a mesh of redundant resources scattered across the globe. If one resource should become unavailable, requests re-route to another available site. So from a high-availability standpoint, everyone benefits.

    That’s why classes of “as a service” models are emerging for backup and recovery. Backup as a service (BaaS) and disaster recovery as a service (DRaaS) resonate particularly well with smaller, growing businesses that may not have the budgets for the equipment and real estate required to provide hot, warm, or even cold backup facilities and disaster recovery sites. The cloud itself becomes “the other site” – and you only pay for the “facilities” when you use them because of the cloud’s inherent usage-based pricing model.

    The global DRaaS market is forecast to grow by 36 percent annually from 2014 to 2022, according to Transparency Market Research. Cloud-based backup and DR makes it easy to retrieve files and application data if your data center or individual servers become unavailable. Using the cloud alleviates the threat of damage to or theft of a physical storage medium, and there’s no need to store disks and tape drives in a separate site.

    Cloud-based disaster recovery services eliminate the need for site-to-site replication

    Reply
  44. Tomi Engdahl says:

    Leena Rao / Fortune:
    Profile of Amazon SVP Andy Jassy, who helped AWS dominate cloud-computing services — How Andy Jassy helped Amazon own the cloud — Major League baseball uses Amazon’s cloud to send real-time updates of player statistics to fans in all 30 of its stadiums.

    How Andy Jassy helped Amazon own the cloud
    http://fortune.com/2015/06/28/andy-jassy-amazon-web-services/

    Jassy helped Amazon Web Services dominate cloud-computing services; now he’s defending AWS’s crown against tough competition.

    While the world knows Amazon as an e-commerce steamroller, its cloud-computing division, Amazon Web Services (AWS), is now just as dominant in its own field, with almost three times the market share of its nearest competitor. AWS is a ­behind-the-scenes partner for more than 1 million customers, from tiny mom-and-pop shops to Fortune 500 leviathans, providing online infrastructure to support their websites, applications, inventory management, and databases. And since its inception 12 years ago, AWS has been shaped, led, and sold to customers by Jassy, a 47-year-old transplanted New Yorker who’s been at Amazon since he finished his Harvard MBA.

    The business world learned just how big AWS was in April, when Amazon for the first time broke out its numbers in a quarterly earnings report. AWS was not only profitable, Amazon said, but on track to earn $6.3 billion in revenue in 2015.

    Jassy, Amazon’s senior vice president of web services, came to Seattle and joined the company in 1997. His background was in marketing and business development, not engineering, but Rick Dalzell, Amazon’s chief information officer at the time, says Jassy exhibited some promising traits, including a photographic memory and a passionate competitive streak

    To keep its e-commerce market­place running smoothly, the company was constantly building new data centers.

    Jassy envisioned that Amazon could share its know-how and infrastructure with other businesses over the web, managing computing power for them so they could keep costs down—­a concept now known as a “public cloud” model. Inspired, he labored over a pitch memo to convince Bezos and Amazon’s board that the company could build a business around this idea.

    Reply
  45. Tomi Engdahl says:

    PowerShell for Office 365 powers on
    Web-based CLI is yours for the scripting
    http://www.theregister.co.uk/2015/07/01/powershell_for_office_365_powers_on/

    Microsoft has powered on PowerShell for Office 365.

    Redmond promised the tool back at its Ignite conference, and on Tuesday decided all was ready to take it into production.

    Anyone familiar with PowerShell probably won’t be in the slightest bit shocked by the tool, which offers a command line interface with which one can initiate and automate all manner of actions. Redmond’s created a script library to help you do things like add users, control licences or stop people from recording Skype meetings.

    There are a few hoops through which to jump before you can start having that kind of fun

    PowerShell is found in just about every Windows admin’s toolbox, so bringing it to Office 365 looks like a very sensible decision by Microsoft as it keeps an important constituency happy. It should also make the cloudy suite easier to operate, therefore keeping costs low.

    Reply
  46. Tomi Engdahl says:

    Want to spoil your favourite storage vendor’s day? Buy cloud
    Leaving the premises might just work
    http://www.theregister.co.uk/2015/07/01/cloud_as_secondary_storage_on_premises/

    Organisations continue to buy storage. In fact, I was talking to a storage salesman not so long ago who was telling me that one of his customers regularly calls asking for a quote for “a couple more petabytes.”

    However, on-premise storage is not the end of the story. Yes, you need to have storage electronically close (with minimal latency) to your servers, but procuring on-premise storage needs more than cash. It needs power, space and support.

    You can’t keep buying more and more storage because power and data centre space are extremely limited.

    And, even if your data centre does have the space, you often can’t get the new cabinets next to your existing ones so you end up dotting your kit all over the building (with the interconnect fun that implies).

    If part of your data storage requirement can live with being offline then you have the option of writing it to tape – which, in turn, brings the problem of managing a cabinet or two full of tapes.

    Leaving aside the fact that they degrade over time if not kept properly, there’s always the issue with tape technology marching on (which means you have to hang onto your old tape drives and keep them working, just in case).

    Throw it somewhere else?

    So is there mileage in putting your data somewhere else – specifically in the cloud? In a word, “yes”. To take just one of many possible examples, Amazon’s Glacier storage costs one US cent per GB per month, which means you can keep 100TB for a year for a shade under £5,000 per annum.

    Well, for the same 100TB of storage you’d be looking at a smidge over £18,000 on Amazon for their reduced-redundancy option – which, presumably, is fine as it’s your secondary and you have a live copy.

    Sellers can’t ignore new markets…

    Vendors of on-premise storage are unsurprisingly also looking to sell you stuff that will enable you to use cloud storage: after all, given that they’re not getting revenue from flogging disks to you, they may as well find ways of extracting your cash by selling cloud-enabling products.

    What do we mean by “secondary?”

    Secondary storage might simply mean a duplicate copy of your core data, which you retain in the cloud in case the primary entity is corrupted, deleted or destroyed. You have choices of how you get the data to the cloud, depending on how immediately accessible you want it:

    Backups: instead of using local disk or tape drives you point your backup software or appliances at the cloud storage area. This is fine if you’ll only need to pull back lost files occasionally and you don’t mind having to do file restores on an ad-hoc basis via the backup application’s GUI
    File-level copies: you replicate data to the cloud storage using a package that spots new files and changes and replicates in near real time (if you’ve ever used Google Drive on your desktop, you’ll know the kind of thing I mean, but we’re talking about the fileserver-level equivalent in this context)
    Application-level: you run your apps in active/passive mode using their inherent replication features – for instance a MySQL master on-prem and an equivalent slave in a VM in the cloud. Actually, this isn’t really storage replication, as the data flying around is application data, not filesystem data

    The second of these three is the common desire: a near-real-time remote copy of large lumps of data. Yes, you’ll often have a bit of the other two but these (particularly app-level replication) tend to represent a minority of the data you’re shuffling.
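
    For that second option, a minimal, hedged sketch using the AWS SDK for Java’s TransferManager shows the basic shape: push a directory tree to an S3 bucket in one batch. The bucket, key prefix and local path are invented, and true near-real-time replication would need a file-watcher or a dedicated appliance layered on top of something like this.

    // Hedged sketch of the "file-level copies" approach: recursively upload a local
    // directory to S3 with the AWS SDK for Java's TransferManager. Bucket, prefix
    // and path are hypothetical; this is a one-shot copy, not continuous sync.
    import java.io.File;

    import com.amazonaws.services.s3.transfer.MultipleFileUpload;
    import com.amazonaws.services.s3.transfer.TransferManager;

    public class SecondaryCopySketch {
        public static void main(String[] args) throws InterruptedException {
            TransferManager tm = new TransferManager(); // credentials from the default chain

            MultipleFileUpload upload = tm.uploadDirectory(
                    "example-secondary-store",   // hypothetical bucket
                    "fileserver01/exports",      // hypothetical key prefix
                    new File("/srv/exports"),    // hypothetical local directory
                    true);                       // include subdirectories

            upload.waitForCompletion();          // block until the batch has copied
            tm.shutdownNow();
        }
    }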

    Secondary = temporary?

    The other side of secondary storage in the cloud is where you’re one of those companies that genuinely uses the cloud for short-term, high-capacity compute requirements.

    One of the benefits the cloud vendors love to proclaim from atop a convenient mountain, of course, is the idea of pay-for-what-you-use scenarios: running up loads of compute power for a short-term, high-power task then running it down again.

    Does the average business care? Nah – all this stuff about: “Oh, you can hike up your finance server’s power at year-end then turn it down again” is a load of old tosh in most cases. But there are in fact plenty of companies out there with big, occasional requirements – biotech stuff, video rendering, weather simulation, and so on – so real examples are far from non-existent.

    Going the other way

    Another thing you need to remember is that there may well be a time when you want to pull data back from the secondary store into the primary. There are a couple of considerations here: one is that many of the cloud storage providers don’t charge for inbound data transfers (i.e. data flowing into the cloud storage) but they have a per-gigabyte fee for transfers the other way.

    De-duplication is the order of the day in such cases, but in reality the transfer costs are modest (and sometimes they’re free – such as restores from Amazon Glacier).

    The cool bit is that when you combine it with their physical appliances on-premise you can layer a global namespace over the whole lot so that local servers and remote VMs can access volumes transparently. And this means that cloud-based servers can mount on-premise volumes in just the same way as in-house machines can access storage in the cloud.

    Oh, and where’s the primary?

    So we’ve talked about using the cloud as your secondary storage, and we’ve largely assumed that the primary will be on-prem. But does it have to be?

    I mentioned that data transfer out of the cloud is generally chargeable, but it’s also true that data transfer out of a particular cloud provider’s storage into another repository in another of their regions is significantly cheaper (less than a quarter of the price in an example I just checked out) than flinging it out of their cloud to your premises via the net.

    Summing up

    So there you have it. Secondary storage in the cloud is definitely feasible, but you’ll want to use some kind of access appliance to optimise throughput.

    Once you do go down this cloud route, treat your primary and secondary storage as a single entity, so that each end can access the other equally easily.

    And, finally, when designing that cloud-based secondary storage don’t forget to think about where the primary volumes should live too.

    Reply
  47. Tomi Engdahl says:

    Russell Brandom / The Verge:
    Chicago’s 9% cloud tax now applies to “electronically delivered amusements” like Netflix as well as “nonpossessory computer leases” which could include AWS — Chicago’s ‘cloud tax’ makes Netflix and other streaming services more expensive

    Chicago’s ‘cloud tax’ makes Netflix and other streaming services more expensive
    Old city yells at cloud
    http://www.theverge.com/2015/7/1/8876817/chicago-cloud-tax-online-streaming-sales-netflix-spotify

    The past five years have seen a huge shift in the way we consume media, as brick-and-mortar stores shift to digital subscriptions. It’s been a valuable tradeoff for some, building billion-dollar companies and unlocking huge libraries of music and video for relatively paltry subscription fees, but it’s also been a challenge for cities that rely on those businesses for revenue. Now, Chicago wants to take back those missing taxes, and the way it’s retaking them has some lawyers up in arms.

    Today, a new “cloud tax” takes effect in the city of Chicago, targeting online databases and streaming entertainment services. It’s a puzzling tax, cutting against many of the basic assumptions of the web, but the broader implications could be even more unsettling. Cloud services are built to be universal: Netflix works the same anywhere in the US, and except for rights constraints, you could extend that to the entire world. But many taxes are local — and as streaming services swallow up more and more of the world’s entertainment, that could be a serious problem.

    Although the tax is technically levied on consumers, some companies are already preparing to collect it as part of the monthly bill.

    The result for services is both higher prices and a new focus on localization. For the web services portion, the most likely effect is simply moving servers outside of the city limits — and, where possible, the offices that use them.

    Once implemented, streaming services will also have to keep closer track of which subscribers fall under the new tax, whether through billing addresses or more restrictive methods like IP tracking, which is already used to enforce rights restrictions.

    But while the law may seem onerous, it’s also a response to an increasingly difficult reality for cash-strapped cities, particularly as online services start to take a bite out of the businesses in the urban center. Twenty years ago, the same albums and movies were consumed at video rental outlets and music stores — which paid local property taxes, potentially paired with municipal sales taxes and other brick-and-mortar duties.

    Reply
  48. Tomi Engdahl says:

    Adobe Creative Cloud 2015 launches – and gets Android in on the act
    Shrinks time, enlarges your wrinkles and gets hazy
    http://www.theregister.co.uk/2015/06/16/adobe_launch_creative_cloud_2015/

    Adobe has updated its Creative Cloud Suite for 2015, bringing enhancements and new features to 15 desktop applications and delivering tighter integration for its desktop and mobile users. Adobe has also let Android in on the mobile party with versions of Brush, Color, Ps Mix and Shape being made available to the platform for the first time.

    The Creative Cloud update sees even more convergence among the desktop applications, with Adobe adopting the term CreativeSync to describe the synchronisation that occurs in the workflow.

    Reply
  49. Tomi Engdahl says:

    Japan’s NTT whips out OpenStack cannon at cloud Godzilla AWS
    Tokyo wants to avoid head-to-head with Amazon
    http://www.theregister.co.uk/2015/07/02/ntt_not_taking_on_aws_public_cloud/

    Tokyo-headquartered NTT Communications has ruled out a head-to-head public-cloud fight with Amazon Web Services – despite NTT expanding its cloud systems globally.

    NTT execs said Thursday in London that the $112bn telecom and data giant would compete by offering public and private cloud, management, and data center services.

    Motoo Tanaka, senior vice president of cloud services, said “Amazon is so overwhelmingly strong we are often asked how are we going to differentiate versus them.

    “It is not conceivable for us just to compete with Amazon simply on public cloud. In most ICT sales it’s required we provide a combination of public cloud, data center, private cloud, and security with infrastructure – we’d like to provide a combination of these.”

    NTT joined the OpenStack Foundation in May, pledging to use the open-source cloud architecture to strengthen its own public-cloud service. NTT is a hero among OpenStackers for being an early champion and adopter of their religion. In February, NTT announced Elastic Service Infrastructure (ESI), putting OpenStack on Juniper gear.

    OpenStack forms the basis of NTT’s Next-Generation Cloud Platform, due in December, which will allow NTT to set up and manage both shared and dedicated bare metal server network access.

    Gartner’s cloud magic quadrant positions AWS – with Microsoft Azure and Salesforce – as leaders in terms of IaaS, PaaS, and/or cloud storage.

    NTT is consigned with a bunch of other OpenStack flyers to the bottom-left, struggling to stand out and labeled niche players lacking both vision and an ability to execute in Gartner’s fateful matrix.

    Reply
  50. Tomi Engdahl says:

    Amazon Launches Machine-Learning Platform
    http://insights.dice.com/2015/04/13/amazon-launches-machine-learning-platform/?icid=ON_DN_UP_JS_AV_OG_RA_1

    Ever wanted a machine-learning platform capable of making predictions based on your in-house data? Even if the thought never actually crossed your mind, you can’t deny that the concept has a certain appeal: Who wouldn’t want a system capable of offering up solid advice, based on everything your business has done?

    Amazon, which has evidently never met some aspect of storage and analytics it didn’t want to try and sell for cheap to businesses all over the world, has just announced a Machine Learning service, complete with visualization tools.

    The cloud-based platform will walk developers through the creation of machine learning (ML) models; it also includes “simple” APIs that allow apps to call the eventual predictions.

    As with Amazon’s other cloud services, the Machine Learning’s pricing is based on use: $0.42 an hour for data analysis and modeling, $0.10 per 1,000 batch predictions, and $0.0001 per real-time prediction.
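
    As a hedged back-of-the-envelope illustration of how that metering adds up, the snippet below applies only the rates quoted above to an entirely invented month of usage:

    // Back-of-the-envelope cost sketch using only the rates quoted above.
    // The usage figures (hours and prediction counts) are invented for illustration;
    // real bills may include charges not listed in the article.
    public class MlPricingSketch {
        public static void main(String[] args) {
            double modelingHours = 10;          // assumed analysis and modeling time
            long batchPredictions = 1_000_000;  // assumed batch predictions in a month
            long realtimePredictions = 500_000; // assumed real-time predictions in a month

            double cost = modelingHours * 0.42
                        + (batchPredictions / 1_000.0) * 0.10
                        + realtimePredictions * 0.0001;

            // 10 * 0.42 + 1,000 * 0.10 + 500,000 * 0.0001 = 4.20 + 100.00 + 50.00 = 154.20
            System.out.printf("Estimated monthly charge: $%.2f%n", cost);
        }
    }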

    In theory, businesses can use machine-learning products for everything from fraud detection and customer-churn prediction to modeling for customer support and marketing campaigns. In reality, though, it remains to be seen how many businesses feel the need

    Reply
