Docker and other Linux containers

Virtual machines are mainstream in cloud computing. The newest development in this arena is fast and lightweight process virtualization. Linux-based container infrastructure is an emerging cloud technology that provides its users an environment as close as possible to a standard Linux distribution.

The Linux Containers and the Future Cloud article notes that, as opposed to para-virtualization solutions (Xen) and hardware virtualization solutions (KVM), which provide virtual machines (VMs), containers do not create separate instances of the operating system kernel. One advantage containers have over VMs is that starting and shutting down a container is much faster than starting and shutting down a VM. The idea of process-level virtualization is not new in itself (remember Solaris Zones and BSD jails).

All containers on a host run under the same kernel. Basically, a container is a Linux process (or several processes) that has special features and runs in an isolated environment configured on the host. Containerization is a way of packaging up applications so that they share the same underlying OS but are otherwise fully isolated from one another, with their own CPU, memory, disk and network allocations to work within – going a few steps further than the usual process separation in Unix-y OSes, but not all the way down the per-app virtual machine route. The underlying infrastructure of modern Linux-based containers consists mainly of two kernel features: namespaces and cgroups. Well-known Linux container technologies are Docker, OpenVZ, Google containers, Linux-VServer and LXC (LinuX Containers).
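As a rough illustration of those two building blocks (a sketch assuming a modern Linux kernel, nothing Docker-specific), every process's namespace and cgroup membership can be inspected straight from /proc:

```shell
# Each namespace a process belongs to shows up as a symlink
# under /proc/<pid>/ns (net, pid, mnt, uts, ipc, ...).
ls -l /proc/self/ns

# The cgroup hierarchies the process has been placed in:
cat /proc/self/cgroup
```

Container runtimes build on exactly these primitives: a containerized process simply gets its own entries here instead of sharing the host's.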

Docker is an open-source project that automates the creation and deployment of containers: an open platform for developers and sysadmins to build, ship, and run distributed applications. It consists of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows.
Docker started as an internal project at a Platform-as-a-Service (PaaS) company called dotCloud at the time, now Docker Inc. Docker is currently available only for Linux (kernel 3.8 or above). It utilizes the LXC toolkit. It runs on distributions like Ubuntu 12.04 and 13.04; Fedora 19 and 20; RHEL 6.5 and above; and on cloud platforms like Amazon EC2, Google Compute Engine and Rackspace.

Linux containers are turning into a way of packaging up applications and related software for movement over the network or Internet. You can create images by running commands manually and committing the resulting container, but you can also describe them with a Dockerfile. Docker images can be stored in a public repository, and Docker can snapshot a container's state and commit it as a new image. Docker, the company that sponsors the Docker.org open source project, is gaining allies in making its commercially supported Linux container format a de facto standard. Red Hat has woken up to the growth of Linux containers and has begun certifying applications running in the sandboxing tech.
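A Dockerfile is just a plain-text recipe. Here is a minimal sketch (the base image, package and repository names are illustrative, not taken from any of the articles above); the build and push commands need a running Docker daemon, so they are shown as comments:

```shell
# A Dockerfile describes an image as a series of steps;
# each instruction is committed as a new image layer.
cat > Dockerfile <<'EOF'
FROM ubuntu:12.04
RUN apt-get update && apt-get install -y nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
EOF

# Build the image and share it via a public repository:
#   docker build -t myuser/mynginx .
#   docker push myuser/mynginx
```

Because each instruction becomes a cached layer, rebuilding after a small change only re-runs the steps below the change.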

Docker featured heavily in IT news last week because Docker 1.0 was released. Here are links to several articles on Docker:

Docker opens online port for packaging and shipping Linux containers

Docker, Open Source Application Container Platform, Has 1.0 Coming Out Party At Dockercon14

Google Embraces Docker, the Next Big Thing in Cloud Computing

Docker blasts into 1.0, throwing dust onto traditional hypervisors

Automated Testing of Hardware Appliances with Docker

Continuous Integration Using Docker, Maven and Jenkins

Getting Started with Docker

The best way to understand Docker is to try it!
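A first session typically comes down to a handful of commands (sketched here from the stock examples of the day; they need a Docker installation, so they are shown as comments rather than run):

```shell
# Fetch a base image from the public registry:
#   sudo docker pull ubuntu
# Start an interactive shell inside a fresh container:
#   sudo docker run -i -t ubuntu /bin/bash
# List containers, then commit one as a new image:
#   sudo docker ps -a
#   sudo docker commit <container-id> myuser/myimage
```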

This Docker thing looks interesting. Maybe I should spend some time testing it.


340 Comments

  1. Tomi Engdahl says:

    Google, Microsoft, IBM And Others Collaborate To Make Managing Docker Containers Easier
    http://techcrunch.com/2014/07/10/google-microsoft-ibm-and-others-collaborate-to-make-managing-docker-containers-easier/

    It’s not often that you see this combination of backers, but today, Microsoft, Red Hat, IBM, Docker, Mesosphere, CoreOS and SaltStack all banded together to support Google’s open-source Kubernetes project for managing Docker containers.

    Docker containers are quickly becoming the go-to technology for building and running distributed applications. Every major cloud vendor has gotten behind the Docker project over the last few months and Docker.io itself recently raised $15 million in a Series B round to continue expanding its services around the platform.

    Reply
  2. Tomi Engdahl says:

    VMware hangs with the cool kids in the Containers gang
    We almost invented containers but were too shy to talk about them says CTO
    http://www.theregister.co.uk/2014/08/13/vmware_hangs_with_the_cool_kids_in_the_containers_gang/

    If 2014 has a hotter infrastructure software topic than containerisation, your correspondent is yet to find it.

    The excitement comes from the fact that containerisation has been proved to work at colossal scale and looks to represent a lightweight and easy-to-manage alternative to virtualisation. Much discussion of containerisation has therefore positioned it either as virtualisation’s heir, or a fork in the road that means the likes of VMware won’t have things all to themselves from now on. That OpenStack will more or less treat containers and virtual machines as equals adds spice to the pot.

    Reply
  3. Tomi Engdahl says:

    Docker kicks KVM’s butt in IBM tests
    Big Blue finds containers are speedy, but may not have much room to improve
    http://www.theregister.co.uk/2014/08/18/docker_kicks_kvms_butt_in_ibm_tests/

IBM Research has done a side-by-side comparison of the KVM hypervisor and containerisation enfant terrible Docker and found the latter “equals or exceeds KVM performance in every case we tested.”

    Big Blue tested the two using the linear-equation solving package Linpack, the STREAM benchmark of memory bandwidth, network bandwidth using nuttcp, latency using netperf, Block I/O speeds with fio and Redis. The SysBench oltp benchmark gave MySQL a workout.

With Docker only just having reached v1.0 status, you might think that’s goodnight for virtualisation: if the first commercial version of the technology is already beating an established tool, surely there’s no future for the latter.

    “containers because they started with near-zero overhead and VMs have gotten faster over time.”

Nor is Docker perfect, with the authors finding its network address translation makes extra traffic for networks.

    “Conventional wisdom (to the extent such a thing exists in the young cloud ecosystem) says that IaaS is implemented using VMs and PaaS is implemented using containers. We see no technical reason why this must be the case, especially in cases where container-based IaaS can offer better performance or easier deployment.”

    “Rather than maintaining different images for virtualized and non-virtualized servers, the same Docker image could be efficiently deployed on anything from a fraction of a core to an entire machine.”

    Reply
  4. Tomi Engdahl says:

    Operating Systems Still Matter In a Containerized World
    http://tech.slashdot.org/story/14/08/19/2348251/operating-systems-still-matter-in-a-containerized-world

With the rise of Docker containers as an alternative for deploying complex server-based applications, one might wonder: does the operating system even matter anymore? Certainly the question gets asked periodically. Gordon Haff makes the argument on Opensource.com that the operating system is still very much alive and kicking.

    Reply
  5. Tomi Engdahl says:

    Why the operating system matters in a containerized world
    http://opensource.com/business/14/8/why-operating-systems-matter

    Applications running in Linux containers are isolated within a single copy of the operating system running on a physical server. This approach stands in contrast to hypervisor-based virtualization in which each application is bound to a complete copy of a guest operating system and communicates with the hardware through the intervening hypervisor. As a result, containers consume very few system resources such as memory and impose essentially no performance overhead on the application.

    One of the implications of using containers is that the operating system copies running in a given environment tend to be relatively homogeneous because they are essentially acting as a sort of common shared substrate for all the applications running above. Specific dependencies can be packaged with the application (within an isolated process in userspace), but the kernel is shared among the containers running on a system.

    The operating system is therefore not being configured, tuned, integrated, and ultimately married to a single application as was the historic norm, but it’s no less important for that change. In fact, because the operating system provides the framework and support for all the containers sitting above it, it plays an even greater role than in the case of hardware server virtualization where that host was a hypervisor.

    All the security hardening, performance tuning, reliability engineering, and certifications that apply to the virtualized world still apply in the containerized one. And, in fact, the operating system shoulders a greater responsibility for providing security and resource isolation than in the case where a hypervisor is handling some of those tasks.

    Yes, there is absolutely an ongoing abstraction of the operating system; we’re moving away from the handcrafted and hardcoded operating instances that accompanied each application instance—just as we previously moved away from operating system instances lovingly crafted for each individual server. And, yes, applications that depend on this sort of extensive operating system customization to work are not a good match for a containerized environment.

    Reply
  6. Tomi Engdahl says:

    One way of looking at Docker is that it’s an entirely new format for packaging applications, one that obviates the need for distro-specific package formats.

    The natural extension of the Docker idea is CoreOS, a Linux distribution in which all applications are delivered as containerized images and the core distro ships with only the bare minimum of software needed to boot and run the system. Compared to Fedora, CoreOS is almost the anti-distro.

    Source: http://www.theregister.co.uk/2014/08/25/are_linux_distros_boring/?page=2

    Reply
  7. Tomi Engdahl says:

    VMware Partners With Docker, Pivotal And Google To Bring Container Support To Its Platform
    http://techcrunch.com/2014/08/25/vmware-partners-with-docker-pivotal-and-google-to-bring-container-support-to-its-platform/

    VMware today announced that it is partnering with Docker, Google and Pivotal to bring support for Docker containers to its platform. In addition, the company said that it will work with the Kubernetes community to bring that project’s container management solution to enterprises.

    At first glance, the Docker project and VMware should be at odds with each other. In many ways, Docker containers negate the need for a solution like VMware because the VMware model of the “software-defined data center” is squarely based on the idea of using its own virtual machines. There is no reason the two can’t co-exist, however — even in the same data center — given that you could run a container within a virtual machine.

    “With Docker, Google and Pivotal, we are simplifying the way enterprises develop, run and manage all application types on a common platform at scale,” said Ben Fathi, the chief technology officer of VMware in a statement today. “In this way, Docker containers and virtual machines provide an IT environment without compromise. Together, we are optimizing containers for the enterprise – ensuring they run effectively in software-defined data center environments.”

    Reply
  8. Tomi Engdahl says:

    Parallels to line up with Linux containers
    Virtuozzo’s best bits brought to bear on unruly containers
    http://www.theregister.co.uk/2014/09/03/parallels_to_line_up_with_linux_containers/

    Parallels is working to bring its automation, security and management wares to the burgeoning world of Linux containerisation.

    The junior virtualiser finds itself in an interesting position vis a vis Linux containers and Docker, because it has long described its own Virtuozzo product as offering containers. But Virtuozzo is closer to conventional virtualisation than containerisation, because it wraps an operating system rather than just an application.

With Docker popularising the idea of one operating system serving lots of apps in containers, and the likes of Google saying they use this trick dozens of times every minute, containers have become So Hot Right Now.

But as many are pointing out, most recently Cisco and Red Hat, Linux containers currently lack many of the niceties that make it possible to use them in an enterprise environment.

    Parallels thinks the automation and management products it offers service providers, and the occasional CIO, could usefully be brought to bear making containers behave.

    Also in the pipeline is the OpenStack support the company has been contemplating for some time.

    Reply
  9. Tomi Engdahl says:

    REVIEW: RHEL 7 anchors enterprise-focused ecosystem
    http://www.networkworld.com/article/2466011/opensource-subnet/review-rhel-7-anchors-enterprise-focused-ecosystem.html

    Latest version of Red Hat focuses on containerized instances of the OS.

Red Hat Enterprise Linux 7 is more proof that operating systems aren’t dead; they’re becoming vessels for containerized applications. RHEL 7 performed well in our testing, but it’s worth noting that this is no longer just a simple OS – it’s an increasingly abstracted component in the larger Red Hat ecosystem.

    Reply
  10. Tomi Engdahl says:

    Another day, another virtualiser gives Docker a big sloppy kiss
    Xen promises its Orchestrator will treat VMs and Containers with equal respect
    http://www.theregister.co.uk/2014/09/09/another_day_another_virtualiser_gives_docker_a_big_sloppy_kiss/

    Scarcely a day passes without one enterprise software outfit or another declaring that Docker’s version of Linux containerisation is the mutt’s nuts, and on Monday the Xen Project took its turn.

    Like anyone else capable of spelling “VM”, the folks at Xen appreciate containerisation’s speed, light weight, density and overall simplicity. Like everyone in the data centre caper, Xen also worries what happens if the single OS beneath lots of containers is attacked. And like just about anyone that can already wrangle a fleet of virtual machines, Xen thinks the kit it has developed for the rather-more-mature world of virtual computing can be turned to management of Docker containers, or at least to easy creation and handling of VMs running Docker.

    Xen says its Orchestra (XO) tool is the one for the job, especially once the project gets around to tweaking it

    Olivier Lambert, the creator of Xen Orchestra Project, writes that “With Docker and Xen on the same team, the two technologies work in tandem to create an extremely efficient, best-of-breed infrastructure”. He adds that “Finally uniting them in one interface is a big leap ahead!”

    Reply
  11. Tomi Engdahl says:

    Consolidation in the context of virtualization means running multiple virtual machines on a single host machine.

    In datacenter virtualization, consolidation is well established and well understood.

In the cloud, consolidation is a much more recent arrival. Recently, there has been some excitement around Linux containers, for example through the Docker project. It is also well known that Heroku uses containers to run the “Dynos” that power customers’ web applications.

Container technology, however, is quite limited because it isn’t really virtualization. The container runs the same OS kernel as the host, which means that it pretty much has to run the same operating system, or at least something very close.

    Full virtualization does not have this limitation, and allows running arbitrary operating systems in the guest.

    Source: http://www.ravellosystems.com/blog/nested-virtualization-achieving-up-to-2x-better-aws-performance/?obr

    Reply
  12. Tomi Engdahl says:

    Docker scores 40 MILLION greenbacks to pop business into boxes
    18 million downloads in three months? Call the money men
    http://www.theregister.co.uk/2014/09/16/docker_scores_40m_more_cash_to_containerise_business/

    Containerisation mavens Docker have scooped $40m in series C funding.

    Docker promotes a way of running applications inside portable “containers” that differ from virtual machines in that many can run on a single operating system.

    The likes of Google use containers at colossal scale and the concept has proven so compelling that in recent weeks Cisco, VMware and Parallels have all lined up to endorse and embrace Docker.

    “will use the funds to drive adoption of its platform in the enterprise and to broaden its rapidly growing ecosystem”

    Reply
  13. Tomi Engdahl says:

    CentOS, Docker, and Systemd
    http://jperrin.github.io/centos/2014/09/25/centos-docker-and-systemd/

Over the last few weeks, we’ve been asked about using systemd inside the CentOS-7 Docker containers for more complex operation. Because systemd offers a number of rather nice features, I can completely understand why people want to use it rather than pulling in outside tools like supervisord to recreate what already exists in CentOS by default. Unfortunately it’s just not that easy.

    There are a couple major reasons why we don’t include systemd by default in the base Docker image.

If you’re okay with running your container with `--privileged`, then you can follow the steps below to create your systemd-enabled Docker image from the CentOS-7 base image.

    Reply
  14. Tomi Engdahl says:

    Docker acqui-slurps Koality
    This one’s for you, devs, to stop containers spilling into messy projects
    http://www.theregister.co.uk/2014/10/07/docker_acquislurps_koality/

    Containerisation darling Docker has made an acquisition, slurping Koality for an undisclosed sum.

Koality offers a continuous integration product, which Docker wants because CEO Ben Golub sees his company’s eponymous product being used during the software development lifecycle. Docker’s not a continuous integration company and Golub told The Reg it doesn’t want to become one. But he does want Docker to be developers’ friend as they take software from early development to testing, quality assurance and eventual deployment. Slurping Koality will help Docker to do that and, importantly, make it possible to do so in a hybrid cloud once the acquired company’s code is added to Docker’s own Enterprise Hub.

Golub’s happy to call this deal an “acqui-hire” as Koality has just four staff. All will relocate to Docker’s offices, effective today.

    Reply
  15. Tomi Engdahl says:

    Docker’s app containers are coming to Windows Server, says Microsoft
    MS chases app deployment speeds already enjoyed by Linux devs
    http://www.theregister.co.uk/2014/10/15/docker_app_containers_coming_to_windows_server/

    Microsoft has announced new container support in the next version of Windows Server, along with an open source implementation of the Docker Engine.

    Docker is a way of packaging applications into an isolated and standardised bundle, enabling multiple “Dockerized” apps to run on a single server.

    Virtual machines (VMs) have a similar advantage, but are more heavyweight since each VM runs an entire operating system, whereas a Dockerized app is smaller and faster to start.

Docker was developed for Linux and fits well with current trends including continuous delivery, where the time between amending application code and deploying it is reduced to a minimum; microservices, where applications are composed from many services each with a narrowly defined purpose; and DevOps, integrated application development with IT operations.

    Microsoft hates to be left out, and already offers support for Docker on Linux in its Azure cloud. Now it has announced the Docker Engine on Windows Server

    The support for Docker comes alongside a new feature called Windows Server Containers

    Reply
  16. Tomi Engdahl says:

    Running MariaDB, FreeIPA, and More with CentOS Containers
    http://seven.centos.org/2014/10/running-mariadb-freeipa-and-more-with-centos-containers/

    The CentOS Project is pleased to announce four new Docker images in the CentOS Container Set, providing popular, ready to use containerized applications and services. Today you can grab containers with MariaDB, Nginx, FreeIPA, and the Apache HTTP Server straight from the Docker Hub.

    The new containers are based on CentOS 7, and are tailored to provide just the right set of packages to provide MariaDB, Nginx, FreeIPA, or The Apache HTTP Server right out of the box.

    The first set of applications and services provide two of the world’s most popular Web servers, MariaDB for your database needs, and FreeIPA to provide an integrated security information management solution.

To get started with one of the images, use `docker pull centos/<name>`, where `<name>` is the name of the container (*e.g.* `docker pull centos/mariadb`). You can find some quick “getting started” info on the Docker Hub page for each application.

    Reply
  17. Tomi Engdahl says:

    Microsoft, Docker bid to bring Linux-y containers to Windows: What YOU need to know
    It ain’t there yet, but Redmond vows to make it work
    http://www.theregister.co.uk/2014/10/16/windows_containers_deep_dive/

    Analysis Containers are all the rage with Linux sysadmins these days, and now Microsoft and Docker say they’re going to bring that same virtualization-beating goodness to Windows. But just what will that look like and how will it work?

    First things first. One thing Microsoft’s new partnership with Docker won’t let you do is take any of the estimated 45,000 containers in the Docker Hub today and run them on Windows. Unlike virtualization, containers don’t let you run Linux on top of another OS, which is what you’d need to do to launch all of those prepackaged Linux binaries.

    Instead, what containerization lets you do is launch multiple applications that share the same OS kernel and other system resources but otherwise act as though they’re running on separate machines. Each is sandboxed off from the others so that they can’t interfere with each other.

    What Docker brings to the table is an easy way to package, distribute, deploy, and manage containerized applications. This is especially handy for what Docker terms “cloud native” apps, where instead of deploying servers loaded with monolithic application stacks, admins spin up multiple “microservices” on virtual machine instances that then combine to form the complete product.

    “If you look at the new modern web startups like Netflix or Yelp or Gilt Groupe or Groupon, they’re all developing applications differently,” Scott Johnston, Docker’s senior VP of product, told The Reg in a briefing on Wednesday. “They’re developing discrete components that are then aggregated together to create the final service that the consumer or the web browser sees.”

    While containers and Docker have become virtually synonymous in the Linux world, however, it’s easy to forget that Docker didn’t invent containers. Its software runs on top of a number of other, preexisting technologies, including Linux Containers (LXC) and the cgroups and namespaces capabilities built into the modern Linux kernel.

    Just how Windows Server containers will function, however, Redmond isn’t saying for now – although Gardler said Microsoft has been keeping the technology in its back pocket for several years.

    “Since around about 2005 we’ve been running containerized applications on our own platforms internally, and so we have a lot of experience with containers.”

    That’s not so far-fetched. Containerization wasn’t invented for Linux, either. Similar technologies, such as Solaris Containers and FreeBSD Jails, have been around for years.

    What Microsoft has said is that the containers will support running applications built using both .Net and other application types, including apps written in C++, Java, Node.js, and so on. What’s more, Redmond is committed to ensuring that Windows Server containers will be manageable using the same Docker tools that Linux admins use to deploy and manage containerized applications today.

    Linux and Windows: Pals in the cloud

    According to Docker’s Johnston, what makes this exciting is that application builders will be able to create heterogeneous distributed apps where microservices can run on either Linux or Windows hosts – whichever is appropriate for each service – yet they can all still be managed using the same tools.

    “Dockerized Windows apps will run on Windows hosts and Dockerized Linux apps will still run on Linux hosts,” Johnston told El Reg. “But the collection of apps that constitute a distributed application or a distributed service can absolutely interoperate with each other.”

    And since we’re talking about cloud native apps, Microsoft is also planning to integrate Docker with its Azure public cloud. That process began in June, when it added new features to its Azure command-line tools that made it easier to deploy Docker containers to Linux VMs running on Azure. With this new partnership, it also plans to let Docker users do it the other way around.

    Reply
  18. Tomi Engdahl says:

    Google may hook Kubernetes deep into own cloud
    ‘Highly differentiated experience’ promised, details likely at November gabfest
    http://www.theregister.co.uk/2014/10/29/google_may_hook_kubernetes_deep_into_own_cloud/

    Google’s Cloud Platform Live event in the USA next week may offer up some news on how The Chocolate Factory will allow developers to put Kubernetes to work in its own cloud.

    Kubernetes is a tool Google developed and used to make containerisation more useful by making it possible to manage containerised applications. As explained by Craig McLuckie, Google’s point man for all things cloud, Docker is very good at helping developers to create apps running in containers. Kubernetes tries to take things further by getting code in containers to work together to deliver an application, and to help manage those containers and their joint and interlinked operations once an app goes into production.

    Kubernetes can work alongside any Docker implementation, and therefore in any of the major clouds that can handle Docker. Which as of two weeks ago, when Microsoft became the latest cloud operator to embrace Docker, is just about everyone that matters.

    Google Cloud Platform also supports Kubernetes. But as Google developed Kubernetes out of code it needed for its own operations, it’s in a position to make the software work especially well on its own cloud.

    Might Google do it?

    Reply
  19. Tomi Engdahl says:

    CoreOS offers private Docker container registries for world+dog
    Your containers, your data center, behind your firewall
    http://www.theregister.co.uk/2014/10/30/coreos_enterprise_registry/

    Container-loving Linux vendor CoreOS has made its on-premises Docker container registry software available as a standalone product.

    Previously, CoreOS Enterprise Registry was only available as part of the company’s Premium Managed Linux offering, which it describes as “OS as a service.”

    As of Thursday, it is now available for use with any Docker-enabled OS – and these days, what Linux distro hasn’t gone gaga for Docker? Even Microsoft is getting into the act.

    allows companies to store and manage images of containerized applications that can be instantiated as Docker containers.

    CoreOS is fond of describing the Quay.io online service as being like a Github for Docker images. Sticking with that metaphor, CoreOS Enterprise Registry is like Github Enterprise, in that companies can install it in their own data centers, behind their own firewalls.

    That makes it well suited for companies that want to deploy containerized Linux environments but don’t want to trust outside vendors with the intellectual property that goes into their containers.

    The software includes a granular, per-user permissions system, where certain team members can be granted access to modify repositories while others can be given read-only access, for example. It also allows admins to configure “robot accounts” for use with automated processes.

    Reply
  20. Tomi Engdahl says:

    Google Cloud Platform Live: Introducing Container Engine, Cloud Networking and much more
    http://googlecloudplatform.blogspot.fi/2014/11/google-cloud-platform-live-introducing-container-engine-cloud-networking-and-much-more.html

    Google Container Engine: run Docker containers in compute clusters, powered by Kubernetes
    Google Container Engine lets you move from managing application components running on individual virtual machines to launching portable Docker containers that are scheduled into a managed compute cluster for you. Create and wire together container-based services, and gain common capabilities like logging, monitoring and health management with no additional effort. Based on the open source Kubernetes project and running on Google Compute Engine VMs, Container Engine is an optimized and efficient way to build your container-based applications. Because it uses the open source project, it also offers a high level of workload mobility, making it easy to move applications between development machines, on-premise systems, and public cloud providers. Container-based applications can run anywhere, but the combination of fast booting, efficient VM hosts and seamless virtualized network integration make Google Cloud Platform the best place to run them.

    Managed VMs in App Engine: PaaS – Evolved
    App Engine was born of our vision to enable customers to focus on their applications rather than the plumbing. Earlier this year, we gave you a sneak peek at the next step in the evolution of App Engine — Managed VMs — which will give you all the benefits of App Engine in a flexible virtual machine environment. Today, Managed VMs goes beta and adds auto-scaling support, Cloud SDK integration and support for runtimes built on Docker containers. App Engine provisions and configures all of the ancillary services that are required to build production applications — network routing, load balancing, auto scaling, monitoring and logging — enabling you to focus on application code. Users can run any language or library and customize or replace the entire runtime stack (want to run Node.js on App Engine? Now you can). Furthermore, you have access to the broader array of machine types that Compute Engine offers.

    Reply
  21. Tomi Engdahl says:

    Shippable raises $8 million to help companies accelerate software development
    http://www.geekwire.com/2014/shippable-raises-8-million/

    Shippable helps teams more easily ship code, helping them do so more quickly and efficiently. The company says it can reduce an organization’s development and test lab footprint by 70 percent, improving application development quality along the way. It competes with Travis CI and Circle CI, with plans to expand more aggressively into the enterprise arena.

    At this time, Cavale said about 1,600 businesses are using its tools. The service runs on Amazon Web Services, and is built on Docker.

    Cavale said his company’s hosted service removes the need for utilizing virtual machines in the development environment, creating an “automatic pipeline between GitHub and whatever cloud provider you want to use.”

    “We use containers to remove the need for VMs,” said Cavale, a former Microsoft manager in the Azure group. “Containers provide a tremendous advantage because the spin-up time for a container is less than 10 or 15 seconds. A spin-up time for a VM is 15 to 20 minutes, so what you get is a faster build time, as well as a cheaper infrastructure for your bills because you are no longer spinning up all of those VMs.”

    https://www.shippable.com/

  22. Tomi Engdahl says:

    Canonical pushes LXD, its new mysterious drug for Linux containers
    We take the hype out of Ubuntu maker’s non-hypervisor hypervisor
    http://www.theregister.co.uk/2014/11/06/canonical_announces_lxd_container_hypervisor/

    Canonical, the company behind the popular Ubuntu Linux distribution, says it’s working on a new “virtualization experience” based on container technologies – but just how it will operate remains something of a mystery.

    Canonical founder and erstwhile space tourist Mark Shuttleworth announced the new effort, dubbed LXD and pronounced “lex-dee,” during a keynote speech at the OpenStack Summit in Paris on Tuesday.

    “Take all the speed and efficiency of docker, and turn it into a full virtualisation experience,” Canonical beams on the LXD homepage. “That’s the goal of Canonical’s new initiative to create the next big hypervisor around Linux container technologies.”

    With LXD, the company says, admins will be able to spin up new machine instances in “under a second” and launch hundreds of them on a single server, all with airtight security. The LXD software itself will provide a RESTful API for managing these container images with easy-to-use command line tools, either locally via a Unix socket or over the internet.

  23. Tomi Engdahl says:

    Demystifying Kubernetes: the tool to manage Google-scale workloads in the cloud
    http://www.computerweekly.com/feature/Demystifying-Kubernetes-the-tool-to-manage-Google-scale-workloads-in-the-cloud

    Once every five years, the IT industry witnesses a major technology shift. In the past two decades, we have seen the server paradigm evolve into web-based architecture that matured to service orientation before finally moving to the cloud. Today it is containers.

    When launched in 2008, Amazon EC2 was nothing short of a revolution – a self-service portal that launched virtual servers at the click of a button fundamentally changed the lives of developers and IT administrators.

    Docker resurrecting container technology

    The concept of containers is not new – FreeBSD, Solaris, Linux and even Microsoft Windows have had some form of isolation for running self-contained applications. When an application runs within a container, it gets the illusion that it has exclusive access to the operating system. This reminds us of virtualisation, where the guest operating system (OS) lives under the illusion that it has exclusive access to the underlying hardware.

    Containers and virtual machines (VMs) share many similarities but are fundamentally different because of the architecture. Containers run as lightweight processes within a host OS, whereas VMs depend on a hypervisor to emulate the x86 architecture. Since there is no hypervisor involved, containers are faster, more efficient and easier to manage.
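
    Since a container really is just a host process wrapped in kernel namespaces (and cgroups), the isolation primitives are visible to any Linux process. A minimal, Linux-only Python sketch of inspecting them via standard procfs paths:

```python
import os

def namespaces(pid="self"):
    """Return the kernel namespaces a process belongs to, as shown in procfs."""
    ns_dir = "/proc/{0}/ns".format(pid)
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in os.listdir(ns_dir)}

if __name__ == "__main__":
    # On the host, every ordinary process shares these identifiers; a
    # containerized process gets fresh ones for pid, net, mnt, uts, ipc, etc.
    for name, ident in sorted(namespaces().items()):
        print(name, ident)
```

    Comparing the output inside and outside a container shows different identifiers for the namespaced resources, which is all the “isolation” amounts to at the kernel level.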

    One company that democratised the use of Linux containers is Docker. Though it did not create the container technology, it deserves the credit for building a set of tools and the application programming interface (API) that made containers more manageable.

    Though Docker hogs the limelight in the cloud world, there is another company that mastered the art of running scalable, production workloads in containers. And that is Google, which deals with more than two billion containers per week. That’s a lot of containers to manage. Popular Google services such as Gmail, Search, Apps and Maps run inside containers.

    With Google entering the cloud business through App Engine, Compute Engine and other services, it is opening up the container management technology to the developers.

    New era of containers with Kubernetes

    One of the first tools that Google decided to make open source is called Kubernetes, which means “pilot” or “helmsman” in Greek.

    Kubernetes works in conjunction with Docker. While Docker provides the lifecycle management of containers, Kubernetes takes it to the next level by providing orchestration and managing clusters of containers.

    Kubernetes
    http://kubernetes.io/
    Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops.
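
    To make the division of labour concrete: Docker runs the individual containers, while Kubernetes takes a declarative description of what should be running and keeps the cluster in that state. A minimal pod manifest looks roughly like this (the modern schema is shown for clarity; the 2014-era v1beta1 API used different field names):

```yaml
# Illustrative pod manifest: ask the cluster to keep one nginx
# container running somewhere, exposing port 80.
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.7
    ports:
    - containerPort: 80
```

    `kubectl create -f pod.yaml` hands the description to the cluster, which then schedules the container onto a suitable node and restarts it if it dies.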

  24. Tomi Engdahl says:

    Docker’s Orchard acquisition part of aggressive roadmap
    http://searchcloudcomputing.techtarget.com/news/2240225341/Dockers-Orchard-acquisition-part-of-aggressive-roadmap

    The San Francisco start-up purchased London-based Orchard Laboratories, Ltd., a two-person company and early contributor to the Docker open-source project, to improve its multi-container orchestration and composition. Terms of the deal were not disclosed.

  25. Tomi Engdahl says:

    An Introduction to Kubernetes
    https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes

    Kubernetes is a powerful system, developed by Google, for managing containerized applications in a clustered environment. It aims to provide better ways of managing related, distributed components across varied infrastructure.

    In this guide, we’ll discuss some of Kubernetes’ basic concepts. We will talk about the architecture of the system, the problems it solves, and the model that it uses to handle containerized deployments and scaling.

    If you are not familiar with CoreOS, it may be helpful to review some basic information about the CoreOS system in order to understand the types of environments that Kubernetes is meant to be deployed on.

    Kubernetes, at its basic level, is a system for managing containerized applications across a cluster of nodes. In many ways, Kubernetes was designed to address the disconnect between the way that modern, clustered infrastructure is designed, and some of the assumptions that most applications and services have about their environments.

    An Introduction to CoreOS System Components
    https://www.digitalocean.com/community/tutorials/an-introduction-to-coreos-system-components

    CoreOS is a powerful Linux distribution built to make large, scalable deployments on varied infrastructure simple to manage. Based on a build of Chrome OS, CoreOS maintains a lightweight host system and uses Docker containers for all applications. This system provides process isolation and also allows applications to be moved throughout a cluster easily.

    To manage these clusters, CoreOS uses a globally distributed key-value store called etcd to pass configuration data between nodes. This component is also the platform for service discovery, allowing applications to be dynamically configured based on the information available through the shared resource.
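
    etcd exposes a plain HTTP key-value API, so publishing and reading configuration needs nothing beyond ordinary HTTP calls. A sketch of the request shapes involved (the host, port and key layout are illustrative assumptions; etcd of this era listened on port 4001 by default):

```python
from urllib.parse import quote, urlencode

ETCD = "http://127.0.0.1:4001"  # assumed local etcd endpoint

def set_key(key, value):
    """Build the URL and form body for an etcd v2 'set key' PUT request."""
    url = "{0}/v2/keys/{1}".format(ETCD, quote(key.strip("/")))
    return url, urlencode({"value": value})

def get_key(key):
    """Build the URL for reading a key (a GET returns a JSON node)."""
    return "{0}/v2/keys/{1}".format(ETCD, quote(key.strip("/")))

# A node announcing a web service endpoint for others to discover:
url, body = set_key("/services/web/node1", "10.0.0.2:80")
print(url)   # http://127.0.0.1:4001/v2/keys/services/web/node1
print(body)  # value=10.0.0.2%3A80
```

    Service discovery is then just the other side of the same coin: peers poll or watch the `/services/...` keys and reconfigure themselves when values change.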

  26. Tomi Engdahl says:

    Microsoft backs cloud rival Google’s open-source Kubernetes project
    http://www.computerweekly.com/news/2240224321/Microsoft-backs-cloud-rival-Googles-open-source-Kubernetes-project

    Cloud provider Microsoft has joined rival Google to bring support for the Kubernetes open-source project on its Azure platform. The project is aimed at allowing application and workload portability and letting users avoid supplier lock-in.

    Kubernetes, currently in pre-production beta, is an open-source implementation of container cluster management. It was introduced by Google in June, when it declared support for Docker – the open-source program that enables a Linux application and its dependencies to be packaged as a container. Docker itself is Linux-based, but even Mac and Windows users are able to run it by installing a small Linux virtual machine on their systems.

  27. Tomi Engdahl says:

    Amazon Announces EC2 Container Service For Managing Docker Containers On AWS
    http://techcrunch.com/2014/11/13/amazon-announces-ec2-container-service-for-managing-docker-containers-on-aws/

    At its re:invent developer conference in Las Vegas, Amazon today announced its first Docker-centric product: the EC2 Container Service for managing Docker containers on its cloud computing platform. The service is available in preview now and developers who want to use it can do so free of charge.

    As Amazon CTO Werner Vogels noted today, despite all of their advantages, it’s still often hard to schedule containers and manage them. “What if you could get all the benefits of containers without the overhead?” he asked. With this new service, developers can now run containers on EC2 across an automatically managed cluster of instances.

    With this, Amazon follows in the footsteps of other large cloud vendors. Google, for example, is making major investments in adding more Docker capabilities to its Cloud Platform, including its efforts around Kubernetes, a deep integration into App Engine and its recently launched Container Engine. Microsoft, too, is adding more support for Docker to its Azure platform and is even going as far as supporting the Google-sponsored Kubernetes project.

    As an Amazon executive told me yesterday – without mentioning today’s announcements – Amazon likes to offer the services that its customers are asking for. Clearly, the company has now heard its customers’ wishes.

  28. Tomi Engdahl says:

    Microsoft: Your Linux Docker containers are now OURS to command
    New tool lets admins wrangle Linux apps from Windows
    http://www.theregister.co.uk/2014/11/18/windows_docker_client/

    Microsoft has taken its first baby steps toward integrating Windows with the Docker application containerization tech that’s caught fire in the Linux world, with the release of Docker client software that runs on Windows desktops.

    The client doesn’t let you run Windows applications in containers. Microsoft is still working with Docker on that piece of the puzzle, which it says will arrive in the next version of Windows Server.

    Instead, the software allows you to manage containers running on Linux hosts directly from your Windows client machine, just as you would if you were managing them from a Linux workstation.

    And by “just as you would,” we mean it’s a command-line client. Before you can use it, you’ll need to clone the code from Docker’s GitHub repositories and compile it using the Go language toolchain, available from the Go Project homepage.

    From there, you’ll need to know your way around the Windows shell to get any work done with the Docker client, but if you’re familiar with the process on Linux, you should have no trouble – barring a few known issues arising from differences between the Windows and Linux command-line environments.

  29. Tomi Engdahl says:

    Microsoft Intros Docker Command Line Interface for Windows
    http://www.datacenterknowledge.com/archives/2014/11/18/docker-containers-microsoft-intros-docker-cli-for-windows/

    Microsoft launched a command line interface for Docker that runs on Windows. Until now, users could only manage Docker containers using a Linux machine or a virtualized Docker environment on a Windows machine.

    Docker is one of the hottest emerging technologies at the intersection of software development and IT infrastructure management, and tech giants are racing to make sure their existing products and services support the open source technology as well as to develop new services around it.

  30. Tomi Engdahl says:

    How secure is Docker? If you’re not running version 1.3.2, NOT VERY
    UPGRADE NOW to fix vuln found in all previous versions
    http://www.theregister.co.uk/2014/11/25/docker_vulnerabilities/

    A nasty vulnerability has been discovered in the Docker application containerization software for Linux that could allow an attacker to gain elevated privileges and execute code remotely on affected systems.

    The bug, which has been corrected in Docker 1.3.2, affects all previous versions of the software.

    “No remediation is available for older versions of Docker and users are advised to upgrade,” the company said in a security advisory on Monday.

    The flaw, which has been assigned CVE-2014-6407, relates to how the Docker engine handles file-system image files. Previous versions of the software would blindly follow symbolic and hard links in image archives, which could have allowed an attacker to craft a malicious image that wrote files to arbitrary directories on disk.
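
    The bug class is easy to reproduce with any archive extractor: a member named `../x`, or a symlink pointing outside the extraction root, escapes the sandbox if paths are trusted blindly. A defensive extractor resolves every destination and refuses escapes; the sketch below is a generic Python illustration of the flaw’s mechanics, not Docker’s actual patch:

```python
import os
import tarfile

def _inside(path, root):
    return path == root or path.startswith(root + os.sep)

def safe_extract(tar_path, dest):
    """Extract an archive, rejecting members that would escape dest."""
    dest = os.path.realpath(dest)
    with tarfile.open(tar_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            if not _inside(target, dest):
                raise ValueError("path escapes root: " + member.name)
            if member.issym():
                # Symlink targets resolve relative to the link's directory;
                # an absolute target like "/etc" is caught here too.
                link = os.path.join(os.path.dirname(target), member.linkname)
                if not _inside(os.path.realpath(link), dest):
                    raise ValueError("symlink escapes root: " + member.name)
        tar.extractall(dest)
```

    The Docker advisory describes the same idea applied to image layers: validate where every archive member would actually land before writing anything to disk.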

  31. Tomi Engdahl says:

    Can’t wait for Docker on Windows? Try Spoon
    http://www.infoworld.com/article/2852174/virtualization/cant-wait-for-docker-on-windows-try-spoon.html

    Those waiting impatiently for Docker’s container technology to be available natively in Windows might have to drum their fingers a while longer, given the amount of work still needed to make that happen. In the meantime, other parties are preparing similar, if not inherently compatible, technologies for Windows.

    Spoon’s app-containerization technology for Windows covers some of the same ground as Docker, but with an emphasis on desktop apps.

    Unlike Docker, though, Spoon doesn’t leverage any existing virtualization technologies in Windows — not even Hyper-V. Instead, Spoon uses its own custom-built virtualization system. One advantage of this approach: It reduces dependencies on the operating system, so containerized apps can run on any version of Windows back to Windows XP.

    Also unlike Docker, as a solution aimed at both desktops and servers, Spoon can stream containerized applications across the network in the same manner as VMware’s ThinApp.

    Likewise, legacy XP applications can be crated up and ported forward to Windows 7 or Windows 8, via a “legacy OS emulation mode” feature. Locally installed applications can be scanned to see if they match apps in Spoon’s repository, and those apps can be packaged to go with their user settings.

    One advantage commonly associated with containers is security, and Spoon professes to offer various granular levels of isolation for containers, including network virtualization. In contrast to Docker, however, Spoon exposes the container to the network by default.

    Spoon isn’t open source.

  32. Tomi Engdahl says:

    Docker, Part 2: Whoa! Spontaneous industry standard! How did they do THAT?
    OSS guys acting as one … it’s not natural
    http://www.theregister.co.uk/2014/12/01/docker_part_2_the_libcontainer_evolution/

    Docker is slowly taking over the world. From its humble origins, which we explored on Friday, as an internal project at dotCloud, through to Microsoft’s recent announcement that it will support Docker natively in Windows, Docker looks set to become a major component of modern IT infrastructure.

    Today, Docker is powered by Libcontainer, rather than the more widespread LXC. The switch has some very real implications for the future of Docker, for its potential adoption and for its interaction with the community.

    Libcontainer matters for the same reason that Android matters: control. Consider for a moment that while there are eleventy squillion distributions of Linux out there, almost nobody says “Android Linux.” They’ll say “Red Hat Linux”, “Ubuntu Linux” or “SuSE Linux”, but they won’t say “Android Linux.”

    Under the careful ministrations of Google, Android became its own “thing”.

    With libcontainer, Docker are doing the exact same thing. Docker isn’t going to play politics with the distros on how and when they implement LXC. Docker isn’t holding group hug meetings to make sure everyone is okay with the emotional impact and philosophical concepts behind each and every decision.

    Libcontainer is Docker’s own; the company will implement it how they like and if you don’t like it you can go jump in a lake. Run whatever OS you want underneath, but it’s Docker that will provide your containerisation.

    A bold move

    This is exactly the sort of move that usually doesn’t go down well in the open-source world.

    Docker is giving as good as it gets, and so valuable is the coalition of contributors that even Parallels – who, if you remember, make Docker’s primary competitor, Virtuozzo – are contributing code and working as part of the team.

    In essence, dotCloud managed to forge a cross-corporate industry standard out of competing companies without a lengthy IEEE-like bureaucratic nightmare of a process. More importantly, they got out in front of the thing and did it first; imagine where public/private/hybrid cloud computing would be today if we could have convinced the various players involved to agree to a single virtual machine standard!

    Of course, we have all seen standards fail. Why does Silicon Valley seem so convinced that Docker won’t?

    Most large technology companies would like you to believe that the opinion of the end users is irrelevant. Unfortunately for them, you can stand there screaming orders at the herd until you’re blue in the face, but they’re just as likely to trample you into the ground as move in the direction you want. Docker’s principals have actually learned this lesson and they made the thing easier to use than any available alternative.

    Docker gets easy wins from large enterprises looking to redo their applications. It is a threat here not to VMware or Hyper-V, but to AWS and Azure. Companies that were already willing to recode their apps from scratch for the public cloud will find Docker a hugely compelling alternative.

  33. Tomi Engdahl says:

    Red Hat’s Take: We Know Containers
    http://www.buildyourbestcloud.com/896/red-hats-take-we-know-containers

    For some time, Cisco has been using Red Hat’s OpenShift to create containers (or gears, as they’ve been called in OpenShift). These containers run workloads for mobile apps that Cisco users need to do their jobs. DreamWorks builds and tests services to support mobile games using OpenShift containers. And Red Hat’s been putting a lot of energy into adding Docker (an open-source Linux container format) to OpenShift.

    All of these add up to one thing: Red Hat is fully on board with—and well-versed in—the use of open-source Linux containers in cloud computing.

    Red Hat’s long-time experience has led the company to Docker, both as a proponent of its development and as an adopter. There’s a lot of work going on at Red Hat to add Docker into the next major release of OpenShift Enterprise 3; in fact, a Linux-based container architecture will be at the core of that version. Red Hat and Cisco announced in September they would partner to speed development of Linux application containers. In June, Red Hat added Docker support to the next-gen version of RHEL, Red Hat Enterprise Linux 7. That was preceded by Project Atomic, which Red Hat describes as a “community project to develop technologies for creating lightweight Linux Container hosts, based on next-generation capabilities in the Linux ecosystem.”

    More evidence of Red Hat’s Docker commitment and expertise: the new Software Collections 1.2, a developer toolset that includes Dockerfiles, which is designed to create and deploy container-based applications.

    Baretto says the next version of OpenShift, with Docker at its core, will provide much-needed administrative functions to open-source Linux containers. Because Docker adds an efficient file system abstraction for delivering the exact libraries to any server, a developer can really manage and control his or her stack (developed with whatever—Ruby, Python, JEE, JavaScript, etc.). Such capability really delivers on the concept of a Platform as a Service (PaaS) and makes it easy to control when and how changes are made. As Baretto says, Docker-based containers remove all the nonsense.

  34. Tomi Engdahl says:

    Part 3: Docker vs hypervisor in tech tussle SMACKDOWN
    We see you milling around the virty containers, VMware
    http://www.theregister.co.uk/2014/12/02/docker_part_3_containers_versus_hypervisors/

    If you’re willing to start from scratch, give up high availability, the ability to run multiple operating systems on a single server and all the other tradeoffs then Docker really can’t be beaten. You are going to cram more workloads into a given piece of hardware with Docker than with a hypervisor, full stop.

    From the perspective of a cloud provider – or an enterprise large enough to run like one – that’s perfectly okay. Many of the workloads they run don’t need a lot of the fancier hypervisor-based goodies anyway. They use in-application clustering, or applications that have been recoded for public cloud computing.

    You’re not vMotioning around AWS, and given that it has taken VMware until about the middle of 2015 to get a production version of lock-step fault tolerance with more than one vCPU out the door, don’t be expecting that on a non-VMware public cloud provider any time soon. (You can also bet VMware is going to charge a pretty penny for it.)

    Hypervisors are marvels of advanced infrastructure with tools, techniques and capabilities that containers like Docker may never be able to match. How, exactly, do you move a containerised workload between servers with dramatically different kernel versions or different hardware without adding a hypervisor-like layer of abstractions?

    How will containers scale over time as the existing generation of servers live side-by-side with the next and workloads get transitioned? There are a lot of unanswered questions, and it will be years before we’re sure how containers will fit in the overall technology puzzle.

  35. Tomi Engdahl says:

    CoreOS unveils Rocket, a possible competitor to Docker
    https://gigaom.com/2014/12/01/coreos-unveils-rocket-a-possible-competitor-to-docker/

    CoreOS, the Linux operating system specialist that’s been busy this past year making sure its technology powers Docker containers, detailed on Monday a new container technology called Rocket that’s essentially a competitor to Docker.

    Rocket is basically a container engine, like Docker, but without all the extras Docker’s been working on to make itself more enterprise friendly. These features include tools for spinning up cloud servers, the ability to have clustered systems and even networking capabilities, wrote CoreOS CEO Alex Polvi in a blog post.

    Because of the way Docker appears to be shifting from its original idea of creating a “standard container” to what now seems like a container-centric application-development hub for enterprises, CoreOS decided that it needed to step in and develop its own standardized version.

    “We should stop talking about Docker containers, and start talking about the Docker Platform,” wrote Polvi. “It is not becoming the simple composable building block we had envisioned.”

    CoreOS is building a container runtime, Rocket
    https://coreos.com/blog/rocket/

    When we started building CoreOS, we looked at all the various components available to us, re-using the best tools, and building the ones that did not exist. We believe strongly in the Unix philosophy: tools should be independently useful, but have clean integration points. We hope this is reflected in tools that we build, such as etcd, which have seen widespread adoption and use outside CoreOS itself.

    When Docker was first introduced to us in early 2013, the idea of a “standard container” was striking and immediately attractive: a simple component, a composable unit, that could be used in a variety of systems. The Docker repository included a manifesto of what a standard container should be. This was a rallying cry to the industry, and we quickly followed. Brandon Philips, co-founder/CTO of CoreOS, became a top Docker contributor, and now serves on the Docker governance board. CoreOS is one of the most widely used platforms for Docker containers, and ships releases to the community hours after they happen upstream. We thought Docker would become a simple unit that we can all agree on.

    Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. The standard container manifesto was removed. We should stop talking about Docker containers, and start talking about the Docker Platform. It is not becoming the simple composable building block we had envisioned.

  36. Tomi Engdahl says:

    CoreOS’s Docker-rival Rocket: We drill into new container contender
    Can CoreOS achieve liftoff in Linux container space race?
    http://www.theregister.co.uk/2014/12/03/coreos_rocket_deep_dive/

    CoreOS CEO Alex Polvi certainly got the attention of the Docker community on Monday when he announced Rocket, his company’s alternative to the Docker container file format and runtime. But just what is Rocket and what does it offer that Docker doesn’t?

    Simply put, the answer for now is a resounding “not much.” In fact, as deployable software goes, it’s safe to say that Rocket doesn’t even exist. The code posted to GitHub on Monday is not even of alpha quality and is best described as a prototype.

    “Docker killer” probably isn’t the most constructive way to think about Rocket, either – despite Polvi’s incendiary Monday blog post, in which he described Docker’s security as “broken” and its process model as “fundamentally flawed.”

    “I do not think Docker overall is fundamentally flawed, I just think it’s going down a different path than we originally signed up for,” Polvi said.

  37. Tomi Engdahl says:

    Docker debuts on-premises software for storing companies’ apps in containers
    http://venturebeat.com/2014/12/04/docker-hub-enterprise/

    At its user conference in Amsterdam today, the executives of hot startup Docker are pulling the covers off new features to maintain and run applications packaged up in the container technology Docker released in an open-source license last year.

    Now, less than three months after announcing a new $40 million funding round, Docker is revealing tools that companies can use — and pay for — to handle serious applications encapsulated in Docker containers. And first up is Docker Hub Enterprise, a new enterprise-friendly storehouse for keeping Docker-style applications.

    Docker Hub Enterprise will become available in an early-access program in February. It comes several months after Docker came out with a hosted service for privately storing applications packaged up in Docker containers.

    Also today, Docker and IBM issued a joint statement that included the announcement of the beta release of the IBM Containers service for running containerized applications on the IBM public cloud. Cloud providers like Joyent and Amazon Web Services have announced similar services in recent weeks.

  38. Tomi Engdahl says:

    Docker: Here, take the wheel – now YOU can run your own containers
    New tool lets you grab your Dockers in the privacy of your own firewall
    http://www.theregister.co.uk/2014/12/05/docker_hub_enterprise/

    Docker is all the rage among hip startups and early adopters, but Docker the company would like to get its tech into enterprises, too – which is why it’s working on adapting its hosted Docker Hub service into a product specifically targeting large business customers.

    The forthcoming offering, unsurprisingly named Docker Hub Enterprise (DHE), will be an on-premises version of Docker Hub designed to appeal to security-conscious businesses that are unwilling to trust an outside service to store their application containers.

  39. Tomi Engdahl says:

    Canonical Launches “Snappy” Edition Of Ubuntu Core For Container Farms
    http://techcrunch.com/2014/12/09/canonical-launches-snappy-edition-of-ubuntu-core-for-container-farms/

    A few years ago, Ubuntu launched a minimalist “core” version of its operating system for embedded systems. Today, it is launching an alpha version of its new “snappy” edition of Ubuntu Core with transactional updates that is specifically geared toward container farms, large Docker deployments and platform-as-a-service environments. The first place you will be able to see Ubuntu Core in action is on Microsoft Azure (or you could install it on your own servers, of course).

    “Ubuntu Core builds on the world’s favourite container platform and provides transactional updates with rigorous application isolation,” said Mark Shuttleworth, founder of Ubuntu and Canonical. “This is the smallest, safest platform for Docker deployment ever, and with snappy packages, it’s completely extensible to all forms of container or service.” The company’s announcement today calls snappy Ubuntu the “biggest revolution in Ubuntu since we launched our mobile initiative.”

    What makes this new Core edition different from Ubuntu’s previous Core versions is that it uses the same Ubuntu AppArmor security system as Ubuntu’s mobile operating system. This ensures that all the applications you install are completely isolated from each other. The company argues that this will make it “much safer to install applications from a wide range of sources on your cloud deployments.” A problem with one application, after all, is much less likely to have any effect on other applications running on the same system.

  40. Tomi Engdahl says:

    Fedora 21 Released
    http://linux.slashdot.org/story/14/12/09/2059252/fedora-21-released

    The Fedora Project has announced the release of Fedora 21. “As part of the Fedora.next initiative, Fedora 21 comes in three flavors: Cloud, Server, and Workstation. Cloud is now a top-level deliverable for Fedora 21, and includes images for use in private cloud environments like OpenStack, as well as AMIs for use on Amazon, and a new “Atomic” image streamlined for running Docker containers.”

  41. Tomi Engdahl says:

    VMware exiting 2014 with a bang and a security whimper
    AirWatch p0wned, but vRealize and CloudVolumes realized and DevOps embraced
    http://www.theregister.co.uk/2014/12/12/vmware_exiting_2014_with_a_bang_and_a_security_whimper/

    Mesosphere support gives that ambition a boost because VMware already supports containers (Docker) and container orchestration (Kubernetes). Mesosphere is designed to be an environment in which one deploys Docker and Kubernetes, so supporting it means vSphere can now embrace three layers of a containerised app.

  42. Tomi Engdahl says:

    Parallels to adopt Docker as native app format in Cloud Server
    Early container enthusiast forced aboard the bandwagon
    http://www.theregister.co.uk/2014/12/15/parallels_to_adopt_docker_as_native_app_format_in_cloud_server/

    The little virtualizer that can, Parallels, has been doing containerisation for ages: the company’s Virtuozzo software has been running applications in silos that share an underlying operating system since at least 2007, when its predecessor company SWsoft decided to rename itself Parallels. SWsoft got its start in 2001.

    Parallels prefers to concentrate on the service provider market, where it thrives because it has a fat library of applications packaged to run in Virtuozzo containers. Service providers like that approach because it makes it easy for them to spin up an app for customers with a minimum of fuss. That Parallels also has lots of lovely billing and account management tools makes it a popular supplier to service providers.

    Parallels has made lots of approving noises about Docker, signalling it would bake some of its expertise into Linux containers and lend a hand in the ongoing development of libcontainer.

  43. Tomi Engdahl says:

    Docker: Sorry, you’re just going to have to learn about it. Today we begin
    Part One: Containers! Containers! Containers!
    http://www.theregister.co.uk/2014/11/28/docker_part_1_the_history_of_docker/

    Docker, meet hype. Hype, meet Docker. Now: Let’s have a sit down here and see if we can work through your neuroses.

    For those of you who don’t yet know about Docker, it is a much-hyped Silicon Valley startup productising (what a horrible unword) Linux containers into something that’s sort of easy to use.

    Containers aren’t a new idea, and Docker isn’t remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds.

    Docker started out with the standard LXC containers that are part of virtually every Linux distribution out there, but eventually transitioned to libcontainer, its own creation. Normally, nobody would have cared about libcontainer, but as we’ll dig into later, it was exactly the right move at the right time.

  44. Tomi Engdahl says:

    I do terrible things sometimes
    http://jperrin.github.io/2015/01/19/i-do-terrible-things-sometimes/

    This is not a how-to, but more of a detailed confession about a terrible thing I’ve done in the last few days.

    The basic concept for this crime against humanity came during a user’s group meeting where several companies expressed overwhelming interest in containers, but were pinned to older, unsupported versions of CentOS due to 3rd party software constraints.

    The basics for how I accomplished this are listed below. They are terrible. Please do NOT follow them.

    All that’s left now is to run docker’s build command, and you have successfully built a CentOS-4 base container to use for migration purposes, or just to make your inner sysadmin cry. Either way. This is completely unsupported.

  45. Tomi Engdahl says:

    Docker: to keep those old apps ticking
    http://www.karan.org/blog/2015/01/16/docker-to-keep-those-old-apps-ticking/

    Got an old, long-running app on CentOS-4 that you want to retain? Running a full-blown VM not worthwhile? Well, Docker can help, as Jim found out.

    Of course, always consider migrating the app to a newer, supported platform like CentOS-6 or 7 before trying these sorts of workarounds.

  46. Tomi Engdahl says:

    Docker is worth a try

    Docker is a tool for distributing applications together with the components they require. Docker-based containers are standardized and move easily along the production chain.

    The developer loads the application and the components it needs into a container. After that, the container can be run in different environments without any environment set-up work.

    If the application uses, for example, the Apache web server and a MySQL database on Ubuntu Linux, the developer installs these components into the container. The tester can take the developer’s finished container and start testing without installing any components.

    Similarly, moving the application into production is easy, and the necessary cloud capacity can be purchased on the open market – many cloud services now support Docker.

    Docker is controlled through a text-based (command-line) user interface.

    Source: http://summa.talentum.fi/article/tv/1-2015/124811
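    The workflow described above comes down to a Dockerfile the developer writes once; tester and production then run the same image unchanged. A minimal sketch of such a Dockerfile – the image name, versions and paths here are illustrative, not from the article:

    ```dockerfile
    # Illustrative only: Ubuntu base image with Apache and a MySQL client,
    # roughly matching the developer/tester workflow described above.
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y apache2 mysql-client
    # Copy the application into Apache's document root (path is a placeholder)
    COPY ./app /var/www/html/
    EXPOSE 80
    # Keep Apache in the foreground so the container stays running
    CMD ["apachectl", "-D", "FOREGROUND"]
    ```

    The developer builds it once with `docker build -t myapp .`; the tester (or a cloud host) just runs `docker run -p 8080:80 myapp` with no component installation of their own (`myapp` is a placeholder tag).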

  47. Tomi Engdahl says:

    Containers coming to mobile devices in some form:

    Alastair Stevenson / V3.co.uk:
    Samsung and Good Technology launch container and secure app ecosystem for Knox platform

    Samsung secures Android apps with Good for Knox upgrade
    http://www.v3.co.uk/v3-uk/news/2393459/samsung-secures-android-apps-with-good-for-knox-upgrade

    Samsung and Good Technology have launched a joint mobile security suite for enterprise Android users nearly a year after first announcing plans for the service.

    Good for Samsung Knox combines Good Technology’s app container security tool and enterprise app ecosystem with Samsung’s Knox mobile security and management platform.

    The integration was announced at Mobile World Congress 2014 and creates a ‘Good-Secured’ domain within Knox.

    The domain separates, protects and manages Good Technology’s apps as well as unspecified custom apps that have been checked by the Good Dynamics Secure Mobility Platform.

    The Knox platform is based on the US National Security Agency’s Security Enhanced Linux technology.

    It is designed to offer IT managers similar sandboxing powers to those on the BlackBerry Balance, creating separate encrypted work and personal areas on devices.

    Knox also offers certificate management, VPN+ and enterprise mobility management services, which Good Technology also supports.

    Samsung executive vice president Injong Rhee described the launch as a key step in the firm’s efforts to allay enterprise customers’ concerns about Android security.

    “Together, Samsung and Good are addressing the growing importance of mobility management for enterprises by delivering a secure mobile productivity solution for Android that will relieve organisations of past concerns with Android adoption,” he said.

  48. Tomi Engdahl says:

    Oracle tosses its Linux into Docker’s repository
    If you can’t beat ‘em …
    http://www.theregister.co.uk/2015/02/06/oracle_tosses_its_linux_into_dockers_repository/

    Oracle sometimes seems to be a bit miffed by enthusiasm for Linux container darling Docker because its own Solaris “Zones” have done containers for ages.

    Big Red also knows in its heart of hearts that Solaris isn’t for everyone, but reckons its own Linux is for anyone who fancies robust, well-supported Torvalds-spawn. And given that Docker needs an OS in which to run containers, Oracle has therefore decided to make Oracle Linux available in the Docker repository. The company will also package an Oracle-maintained version of MySQL and pop it in the same place.

    Oracle reckons doing so will give developers the chance to do Docker on what it feels is a particularly resilient platform, and also one that’s well-integrated with a database.

    Oracle Linux is free, so there’s no reason developers won’t be interested. Perhaps so interested that a few start paying for support.

    Ubuntu and CentOS top the charts on Docker’s repository and Oracle Linux is well-regarded but not a hit beyond Oracle’s heartland.

    https://registry.hub.docker.com/_/oraclelinux/

  49. Tomi Engdahl says:

    Ubuntu wants to be the OS for the Internet of Things
    http://www.zdnet.com/article/ubuntu-wants-to-be-the-os-for-the-internet-of-things/

    Summary: With the use of Docker containers, Canonical wants Ubuntu Linux to become the operating system for smart devices.

