Docker and other Linux containers

Virtual machines are mainstream in cloud computing. The newest development in this arena is fast, lightweight process virtualization. Linux-based container infrastructure is an emerging cloud technology that provides its users an environment as close as possible to a standard Linux distribution.

The Linux Containers and the Future Cloud article explains that, as opposed to para-virtualization solutions (Xen) and hardware virtualization solutions (KVM), which provide virtual machines (VMs), containers do not create additional instances of the operating system kernel. One advantage of containers over VMs is that starting and shutting down a container is much faster than starting and shutting down a VM. The idea of process-level virtualization is not new in itself (remember Solaris Zones and BSD jails).

All containers on a host run under the same kernel. Basically, a container is a Linux process (or several processes) that has special features and that runs in an isolated environment, configured on the host. Containerization is a way of packaging up applications so that they share the same underlying OS but are otherwise fully isolated from one another, with their own CPU, memory, disk and network allocations to work within – going a few steps further than the usual process separation in Unix-y OSes, but not completely down the per-app virtual machine route. The underlying infrastructure of modern Linux-based containers consists mainly of two kernel features: namespaces and cgroups. Well-known Linux container technologies include Docker, OpenVZ, Google containers, Linux-VServer and LXC (LinuX Containers).
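
To make those two kernel features a bit more concrete, here is a minimal Python sketch (standard library only, assuming a Linux host) that shows which namespaces and control groups the current process lives in; run inside a container, the output differs from what you see on the host:

    import os

    # Each entry under /proc/self/ns is a namespace this process belongs to
    # (pid, net, mnt, uts, ipc, user, ...). A containerized process gets its
    # own set, so these IDs differ between host and container.
    for name in sorted(os.listdir("/proc/self/ns")):
        print(name, "->", os.readlink("/proc/self/ns/" + name))

    # /proc/self/cgroup lists the control-group hierarchies that meter and
    # limit this process's CPU, memory, block I/O and so on.
    with open("/proc/self/cgroup") as f:
        print(f.read())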

Docker is an open-source project that automates the creation and deployment of containers: an open platform for developers and sysadmins to build, ship, and run distributed applications. It consists of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows.
Docker started as an internal project at a Platform-as-a-Service (PaaS) company called dotCloud, since renamed Docker Inc. Docker is currently available only for Linux (kernel 3.8 or above) and utilizes the LXC toolkit. It runs on distributions like Ubuntu 12.04 and 13.04, Fedora 19 and 20, and RHEL 6.5 and above, and on cloud platforms like Amazon EC2, Google Compute Engine and Rackspace.

Linux containers are becoming a way of packaging up applications and related software for movement over the network or Internet. You can create images by running commands manually and committing the resulting container, but you can also describe them with a Dockerfile. Docker images can be stored in a public repository, and Docker can snapshot a container's state as a new image. Docker Inc., the company that sponsors the open source project, is gaining allies in making its commercially supported Linux container format a de facto standard. Red Hat has woken up to the growth of Linux containers and has begun certifying applications running in the sandboxing tech.
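
As a rough illustration of that manual route (run a container, change something, commit the result as a new image), here is a sketch using the Docker SDK for Python (the docker package). The base image is real; the repository name and the command are placeholders:

    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    # Run a throwaway container and make a change inside it.
    container = client.containers.run(
        "ubuntu:14.04",
        ["bash", "-c", "echo hello > /greeting.txt"],
        detach=True,
    )
    container.wait()

    # Commit the modified container as a new image, the manual alternative
    # to describing the image with a Dockerfile.
    image = container.commit(repository="example/hello-image", tag="v1")
    print(image.id)

    # The image could then be pushed to a registry such as Docker Hub:
    # client.images.push("example/hello-image", tag="v1")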

Docker was all over the IT news last week because Docker 1.0 was released. Here are links to several articles on Docker:

Docker opens online port for packaging and shipping Linux containers

Docker, Open Source Application Container Platform, Has 1.0 Coming Out Party At Dockercon14

Google Embraces Docker, the Next Big Thing in Cloud Computing

Docker blasts into 1.0, throwing dust onto traditional hypervisors

Automated Testing of Hardware Appliances with Docker

Continuous Integration Using Docker, Maven and Jenkins

Getting Started with Docker

The best way to understand Docker is to try it!
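
For example, assuming Docker and the Python docker package are installed, a first smoke test is only a few lines (a sketch, nothing more):

    import docker

    client = docker.from_env()
    # Runs the official hello-world image (pulling it first if needed)
    # and returns its output as bytes.
    print(client.containers.run("hello-world").decode())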

This Docker thing looks interesting. Maybe I should spend some time testing it.

 

340 Comments

  1. Tomi Engdahl says:

    Toby Wolpe / ZDNet:
    Docker publicly launches Machine, Swarm, and Compose tools for orchestrating distributed apps

    Docker Machine, Swarm, Compose: Now the orchestration tools roll out
    http://www.zdnet.com/article/docker-machine-swarm-compose-now-the-orchestration-tools-roll-out/

    Summary: Unveiled late last year in Amsterdam, the Docker orchestration services designed to facilitate multi-container distributed apps can now be downloaded.

    The Docker orchestration services described in December by the company’s CEO as the next 12 months’ most important developments are now available for public use.

    The first downloadable versions of Machine, Swarm – both public betas – and Compose 1.1 are designed to help developers and sysadmins create and manage multi-container distributed applications.

    Docker Machine enables any host, whether a laptop, server, VM or remote cloud instance, to run Docker apps. Docker Swarm is a clustering service that can turn large numbers of servers into a single machine, creating a resource pool for distributed apps.

    The third element of the orchestration services, Docker Compose, is designed to smooth the process of building a complex distributed app from a number of containers.

    “The alpha products we talked about in December were either prototype code or just barely working examples of what was possible,”

  2. Tomi Engdahl says:

    Docker grabs SocketPlane and its experts to create open networking APIs
    http://www.zdnet.com/article/docker-grabs-socketplane-and-its-experts-to-create-open-networking-apis/

    Summary: Behind the acquisition of tiny Palo Alto startup SocketPlane by Docker lies the goal of giving multi-container apps network portability through standard interfaces.

    Container company Docker has acquired startup SocketPlane and its six-strong team to help add standard networking interfaces to Docker for increased portability of multi-container distributed apps.

    Since its emergence last year, software-defined networking specialist SocketPlane, acquired on undisclosed terms, has been working on Docker’s open API for networking.

    Docker said the SocketPlane team will be collaborating with the partner community on a set of networking APIs for app developers, and network and system administrators. The goal is to bring networking direct to the application while remaining infrastructure independent.

    “It’s about us grabbing a team that has great networking skills at scale and have them contribute an API back to the community from which partners can then develop their own implementations,”

    “Because we’ve got Cisco, VMware, Juniper, and Arista and all these other networking companies approaching the community with ideas and they have deep networking expertise, we needed a similar deep bench of networking within the project to guide those conversations.”

    The arrival of the new networking APIs is “really close”, according to Johnston.

    “This will allow the user to architect a multi-container app, network those containers and have that be 100 percent portable. We’ve got some of that today in the first generation but this will really make it scalable and portable across racks, across datacenters, across clouds – that’s really what this is going to represent to the user,” he said.

    “This is about enabling networking partners in the ecosystem not competing with them, by bringing on board a team that can really help us and help the community shape a great API.”

  3. Tomi Engdahl says:

    Red Hat Strips Down For Docker
    http://hardware.slashdot.org/story/15/03/05/2240245/red-hat-strips-down-for-docker

    Reacting to the surging popularity of the Docker virtualization technology, Red Hat has customized a version of its Linux distribution to run Docker containers. The Red Hat Enterprise Linux 7 Atomic Host strips away all the utilities residing in the stock distribution of Red Hat Enterprise Linux (RHEL) that aren’t needed to run Docker containers.

    Red Hat strips down for Docker
    Red Hat Enterprise Linux 7 Atomic Host contains only the tools needed for developing and running Docker containers
    http://www.computerworld.com.au/article/569690/red-hat-strips-down-docker/

    Removing unneeded components saves on storage space, and reduces the time needed for updating and booting up. It also provides fewer potential entry points for attackers.

    Containers are valuable for organizations in that they cleanly separate the application from the underlying infrastructure, explained Lars Herrmann, Red Hat senior director of product strategy.

    Developers can focus just on the code itself, and not worry about fitting the programs to the supporting operating system and middleware. Organizations benefit from containers because their workloads can be moved around easily, from one cloud provider to another, or from an in-house deployment to a cloud deployment.

    “The operations team can now optimize the infrastructure for reliability, performance, and cost,” Herrmann said.

  4. Tomi Engdahl says:

    CoreOS goes native on vSphere and vCloud Air
    Say it again until you remember it: the best container is a container in a virtual machine
    http://www.theregister.co.uk/2015/03/10/coreos_goes_native_on_vsphere_and_vcloud_air/

    Stick with us here, because CoreOS last year announced Rocket, a containerisation play that very deliberately offers an alternative to Docker. That means VMware didn’t just support a minor Linux distro; it also more or less gave CoreOS and Rocket a ringing endorsement.

    CoreOS is, as the name suggests, a rather stripped back affair intended for use in “modern infrastructure stacks” and unashamedly modelled on the needs and practices of hyperscale operators like Google and Facebook. VMware’s very keen on hyperscale operations, so giving CoreOS the Good Virtualisation Seal of Approval isn’t a startling thing for it to do. Nor is supporting CoreOS in vCloud Air: Virtzilla’s trying to make sure its cloud is fit for all purposes.

    What, then, to make of the fact that VMware also embraced Docker last year? Is the company juggling two suitors?

    Yes, and VMware’s proud of it.

  5. Tomi Engdahl says:

    Jordan Novet / VentureBeat:
    Docker’s latest acquisition, KiteMatic, built a Mac app that runs Docker containers
    http://venturebeat.com/2015/03/12/dockers-latest-acquisition-kitematic-built-a-mac-app-that-runs-docker-containers/

    Docker, a startup whose open-source container technology is becoming a popular component in software development, is announcing today that it’s acquired KiteMatic, a three-man startup that built a Mac app that lets people quickly start working with Docker’s containers for storing application code.

    Terms of the deal weren’t disclosed.

    Docker until this point has promoted an open-source tool called boot2docker for launching Docker on personal computers. But the KiteMatic team goes considerably further.

    “This is kind of a one-stop shop — simple workflow, GUI [graphical user interface]-based — that kind of drives people through the whole process,”

    It might sound like a dinky invention when you consider that Docker clearly intends to play a critical role in the software stack at big companies. But really, KiteMatic could serve a critical function for Docker — getting more people comfortable with Docker’s Linux containers as a supplement or even an alternative to virtual machines.

    Docker has been rapidly making acquisitions to expand its team and its capabilities.

  6. Tomi Engdahl says:

    Microsoft Swarms all over Docker Machines
    Embrace, extend, hmm, what’s that last one?
    http://www.theregister.co.uk/2015/03/02/microsoft_swarms_towards_docker_machines/

    Microsoft has expanded its cloudy support for Docker, adding Docker Machine to Azure and Hyper-V, and supporting Docker Swarm.

    With the release of Docker Machine 1.0 Beta, Redmond has blogged that users can create a host under Windows using the lightweight Linux boot2docker.

    Docker Machine is designed for an easy install. As the Docker blog explains, it’s designed to create Docker Engines on whatever target iron you have in mind (your own metal or in the cloud), and configure the client to talk to them. As well as Azure, it supports Amazon EC2, DigitalOcean, Google Compute Engine, OpenStack, Rackspace, SoftLayer, VirtualBox, VMware Fusion, vCloud Air and vSphere.

  7. Tomi Engdahl says:

    Docker: to keep those old apps ticking
    http://www.karan.org/blog/2015/01/16/docker-to-keep-those-old-apps-ticking/

    Got an old, long-running app on CentOS-4 that you want to retain? Running a full-blown VM not worthwhile? Well, Docker can help.

    Of course, always consider migrating the app to a newer, supported platform like CentOS-6 or 7 before trying this sort of workaround.

    Docker is available out of the box, by default, on all CentOS-7/x86_64 installs.

  8. Tomi Engdahl says:

    Joyent Launches Triton, Its New Container Infrastructure For Easier Docker Deployments
    http://techcrunch.com/2015/03/24/joyent-launches-triton-a-new-container-infrastructure-for-easier-docker-deployments/

    Joyent today announced the launch of Triton, its new container infrastructure for making Docker deployments in on-premise clouds and on its own cloud architecture easier. This announcement comes about half a year after the company raised $15 million with the stated goal of releasing exactly this kind of service (after previously raising $120 million).

    Joyent argues that while Docker brings lots of new efficiencies to the development and deployment process, it also introduces new complexities, because you have to manage multiple containers and hosts, and wrangle with networking implementations and security issues.

  9. Tomi Engdahl says:

    CoreOS Raises $12M Funding Round Led By Google Ventures To Bring Kubernetes To The Enterprise
    http://techcrunch.com/2015/04/06/coreos-raises-12m-funding-round-led-by-google-ventures-to-bring-kubernetes-to-the-enterprise/

    CoreOS, a Docker-centric Linux distribution for large-scale server deployments, today announced that it has raised a $12 million funding round led by Google Ventures with participation by Kleiner Perkins Caufield & Byers, Fuel Capital and Accel Partners. This new round brings the company’s total funding to $20 million.

    In addition, CoreOS is also launching Tectonic today. This new commercial distribution combines CoreOS with Google’s open source Kubernetes container management and orchestration tools. This makes CoreOS the first company to launch a fully supported enterprise version of Kubernetes. Overall, the new distribution, which for now is only available to a select group of beta users, aims to make it easier for enterprises to move to a distributed and container-based infrastructure.

    “When we started CoreOS, we set out to build and deliver Google’s infrastructure to everyone else,” CoreOS CEO Alex Polvi said in a canned statement. “Today, this goal is becoming a reality with Tectonic, which allows enterprises across the world to securely run containers in a distributed environment, similar to how Google runs their infrastructure internally.”

  10. Tomi Engdahl says:

    Amazon EC2 Container Service (Preview)
    http://aws.amazon.com/ecs/

    Amazon EC2 Container Service is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run distributed applications on a managed cluster of Amazon EC2 instances. Amazon EC2 Container Service lets you launch and stop container-enabled applications with simple API calls, allows you to query the state of your cluster from a centralized service, and gives you access to many familiar Amazon EC2 features like security groups, EBS volumes and IAM roles. You can use EC2 Container Service to schedule the placement of containers across your cluster based on your resource needs, isolation policies, and availability requirements. Amazon EC2 Container Service eliminates the need for you to operate your own cluster management and configuration management systems or worry about scaling your management infrastructure.

    There is no additional charge for Amazon EC2 Container Service. You pay for AWS resources (e.g. EC2 instances or EBS volumes) you create to store and run your application.

  11. Tomi Engdahl says:

    Microsoft Creates a Docker-Like Container For Windows
    http://tech.slashdot.org/story/15/04/09/014258/microsoft-creates-a-docker-like-container-for-windows

    Hoping to build on the success of Docker-based Linux containers, Microsoft has developed a container technology to run on its Windows Server operating system.

    Microsoft creates a container for Windows
    http://www.computerworld.com.au/article/572213/microsoft-creates-container-windows/

    Windows Server Containers will be able to run applications specifically built for Windows Server and .Net

    “We’re finding that interest in containers is very high,” said Mike Schutz, who runs cloud platform product marketing for Microsoft. Twenty percent of Azure users deploy Linux and a significant number of those users run Docker containers, he said.

    The Windows Server Container can be used to package an application so it can be easily moved across different servers. It uses a similar approach to Docker’s, in that all the containers running on a single server share the same operating system kernel, making them smaller and more responsive than standard virtual machines.

    Unlike Docker, which uses Linux as its core operating system, Windows Server Container will rely on the Windows Server operating system. This will allow organizations to package into containers their applications specifically built to run on Windows Server, and Microsoft’s .Net framework.

  12. Tomi Engdahl says:

    Docker preps on-prem container store, hungry investors give it the D
    We don’t even need the money, laughs CEO as Valley sugar daddies shovel more cash
    http://www.theregister.co.uk/2015/04/14/docker_series_d_funding/

    Linux container-wrangling outfit Docker, flush with cash from a fresh round of funding, says it will release its first commercial product to general availability this quarter.

    The software, an on-premises version of the firm’s hosted container repository called Docker Hub Enterprise (DHE), was launched as a private beta program around the DockerCon EU conference in Amsterdam in December.

    On Tuesday, Docker CEO Ben Golub said in a blog post that 15 companies are participating in that beta, half of which are in the Fortune 50, and that 100 companies expressed interest in DHE when it was announced.

    “These organizations – and an increasing number of mainstream enterprises – are providing validation that commercial management solutions and commercial support will form a sustainable first revenue stream for Docker,” Golub said.

    Docker had not previously committed to a time frame for when it would have a product in general availability.

    Golub said more than 10,000 organizations are now using Docker Hub, the hosted version of its software, and that the repository is now home to more than 100,000 containerized applications. Over 300 million container images have now been downloaded from the service, he said, up from 100 million at the beginning of January.

  13. Tomi Engdahl says:

    Disrupting the Disruptor: Security of Docker Containers
    http://www.securityweek.com/disrupting-disruptor-security-docker-containers

    Docker Security: How Secure are Containers and Will Security be a Hurdle to Container Adoption?

    In the digital age, we have brought forward similar primitives into our computing clouds: virtual versions of desktop operating systems from the 90s: Windows, BSD and Linux. It’s bizarre because these bulky, inefficient virtual guest operating systems are just supporting apparatus for an application.

    But now a form of virtualization called containers may obsolete virtual operating systems. Containers are host processes that have advanced support for multi-tenancy and privilege isolation. Applications can run inside a container more efficiently than inside a whole virtual operating system.

    And just as VMware rode the wave of operating system virtualization to fame and fortune, there’s a new company named Docker riding the popularity of containers. Docker is fast becoming synonymous with container technology and as a result is the new open-source debutante that everyone wants to date.

    So will containers replace traditional operating system virtualization in the same way that virtualization has replaced much of the physical, bare-metal world? And how secure are containers, anyway? Will that be a stumbling block to container adoption?

    A recent Gartner analysis of Docker security largely gives Docker security a thumbs up (while noting shortcomings in management and maturity).

    The Gartner analysis for Docker security reiterates some of the main points from Docker’s own security page.

    • Virtualization security has migrated into the host operating system. Linux and Microsoft kernels have been providing more support for virtualization in every release. The LXC (Linux container) and userspace file systems secure the containers at the host operating system level. This helps traditional virtualization as well and enables containers to focus on efficiency.

    • A container system has a smaller threat surface than the traditional virtualization system. Because containers consolidate redundant shared resources, there will be fewer versions of Apache (and its entire mod ecosystem) to attack, and fewer processes to manage. A smaller attack surface is always a good thing.

    • Process security controls will be applied to containers. Process security is an ancient black art: easy to misconfigure, often disabled, and it often doesn’t do what you think it should. But the underlying technology should only get better.

    On a fundamental level, container security is equivalent to hypervisor security.

    Sure, Docker is not as mature as VMware, but that’s just one parameter in your equation—as container security matures, the reduced threat surface may lead to fewer vulnerabilities than full virtual machines.

    Docker is already supported by the major cloud infrastructures: Google, Amazon Web Services, IBM, and now Microsoft. The promise of container efficiency is leading some to predict that containers will eventually replace traditional virtualization systems. The ability to spin up containers in a second or less means they will proliferate to deliver their value and then disappear, allowing the underlying operating system to boost the efficiency of the application’s circulatory system.

  14. Tomi Engdahl says:

    Red Hat seeks cloud critical mass with Atomic Host
    RHEL’s container-friendly cousin hits general availability
    http://www.theregister.co.uk/2015/03/05/rhel_7_atomic_host/

    Red Hat says its stripped-down Linux variant for containerized cloud deployments is ready to roll, giving Red Hat customers a simplified, easy-to-manage platform for hosting Docker containers.

    Red Hat Enterprise Linux 7 Atomic Host was first announced at the Red Hat Summit in San Francisco last April and it has been in beta testing since November.

    “Specifically designed to run Linux containers, Red Hat Enterprise Linux Atomic Host delivers only the operating system components required to run a containerized application, reducing overhead and simplifying maintenance,” the company said in a press release.

    Shadowman isn’t the first Linux vendor to latch onto this idea. That credit should probably go to CoreOS, which declared its micro-sized, container-centric Linux distro a production-ready product in June. More recently, Canonical also released a small-footprint variant of its own distro in the form of “Snappy” Ubuntu Core.

    What these and similar distros have in common is that they do away with the “everything and the kitchen sink” approach of old, opting instead to provide the bare minimum of components necessary to get the system up and running. From there, admins add only the applications and services they want, which come packaged as containers.

  15. Tomi Engdahl says:

    Engine Yard Pivots Toward Container Management
    http://www.eetimes.com/document.asp?doc_id=1326386&

    New Engine Yard CEO Beau Vrolyk is moving the company toward managing Linux containers for developers, with the acquisition of OpDemand.

    Engine Yard, one of the first platform-as-a-service firms, is pivoting toward Linux container management as it adjusts to the impact of containers on enterprise IT development and operations.

    Engine Yard has acquired OpDemand, the firm founded by Gabriel Monroy and Joshua Schnell, who were also founders of the OpenDeis Project. The young, open-source project offers a deployment and management system for Docker containers in a horizontally-scalable manner. That allows a customer-facing application to scale up or down, as needed, on a cluster of x86 servers.

    Engine Yard Pivots Toward Container Management
    http://www.informationweek.com/cloud/platform-as-a-service/engine-yard-pivots-toward-container-management/d/d-id/1319981?

    Engine Yard already has experience in orchestrating and scheduling complex workloads, and in building their database connections in a cloud setting — skills that are a close match for what’s needed in the emerging world of container orchestration.

    “Deis will work with a cluster scheduling tool like (the Apache Software Foundation’s) Mesos or Google Kubernetes,” noted Vrolyk. By working with a scheduler, the Deis platform will be able to see when demand is catching up with a running application and scale it out by initializing additional containers. As demand falls, it can scale them back.

  16. Tomi Engdahl says:

    Docker huddles under Linux patent-troll protection umbrella
    OIN prepares to repel tedious fools’ legal droplets
    http://www.theregister.co.uk/2015/04/21/oin_docker_linux_protection/

    Docker has joined an open-source and Linux umbrella that provides shelter against possible patent trolls.

    The Linux container, finding favour in the cloud as a foundation of microservices, joins 115 packages protected by the Open Invention Network (OIN).

    Joining Docker in the OIN shelter are Puppet, Ceph, the full LibreOffice collaboration suite and the Debian APT packaging tool.

    Docker and co haven’t been on the wrong end of any patent-troll calls as yet, but OIN chief executive Keith Bergelt reckoned threats could come with greater use.

    For “greater use” read “as more features are added and the code is increasingly deployed.”

    “Attacks go hand in hand with success,” Bergelt said. “The more attention you get there’s more potential for infringement. These companies [patent trolls] are like flies around the flypaper – they are attracted to opportunity.”

    Docker is, of course, a Linux container: OIN was founded in 2005 by IBM, Novell, Red Hat and Sony as a patent cross-licensing and non-prosecution project.

  17. Tomi Engdahl says:

    Yevgeniy Sverdlik / Data Center Knowledge:
    Docker-competitor CoreOS spins off its App Container project into a separate foundation with Google, VMware, Red Hat, and Apcera support

    CoreOS Gives Up Control of Non-Docker Linux Container Standard
    http://www.datacenterknowledge.com/archives/2015/05/04/coreos-gives-up-control-of-non-docker-linux-container-standard/

    Taking a major step forward in its quest to drive a Linux container standard that’s not created and controlled by Docker or any other company, CoreOS spun off management of its App Container project into a stand-alone foundation. Google, VMware, Red Hat, and Apcera have announced support for the standard.

    Becoming a more formalized open source project, the App Container (appc) community now has a governance policy and has added a trio of top software engineers that work on infrastructure at Google, Twitter, and Red Hat as “community maintainers.”

  18. Tomi Engdahl says:

    Kali Linux launches for Docker
    Hacker whacker comes to server fervor
    http://www.theregister.co.uk/2015/05/27/kali_linux_launches_for_docker/

    Penetration testing gurus Offensive Security have made their popular Kali operating system available for Docker-addicted system administrators.

    Developer Mati Aharoni acted on a request from a user who asked for a Dockerised image of the Kali penetration testing system platform.

    “Last week we received an email from a fellow penetration tester, requesting official Kali Linux Docker images that he could use for his work,” Aharoni says.

    “The beauty [of Docker] is that Kali is placed in a nice, neat container without polluting your guest filesystem.

    The hackers bootstrapped a minimal Kali Linux 1.1.0a base under its Docker account providing security bods with access to the platform’s top 10 tools.

    Official Kali Linux Docker Images
    https://www.kali.org/news/official-kali-linux-docker-images/

  19. Tomi Engdahl says:

    Docker’s ascendancy ignites a flak-in-the-box cloud arms race
    Web lessons in bullet-proofing the container class
    http://www.theregister.co.uk/2015/05/01/docker_rocket_everybody_wins/

    Containerisation has taken the data centre by storm. Led by Docker, a start-up that’s on a mission to make development and deployment as simple as it should be, Linux containers are fast changing the way developers work and devops teams deploy.

    Containerisation is such a powerful idea that it’s only slightly hyperbolic to suggest that the future of servers will not include operating systems as we think of them today.

    To be sure it’s still a ways off, but containerisation is likely to completely replace traditional operating systems – whether Linux, Windows, Solaris or FreeBSD – on servers. Instead, servers will consist of simple, single-user installs of hypervisors optimised for the specific hardware. Atop that bare-metal layer will be the containers full of applications.

    Like many things to come out of Linux, containerisation is not new – in fact, the tools have been part of the kernel since 2008.

    Docker is not the only containerisation tool out there, but it is currently leading the pack in both mind share and actual use. Google, Amazon and even Microsoft have been tripping over themselves to make sure their clouds offer full Docker integration. Google has even open-sourced its own Docker management tool.

  20. Tomi Engdahl says:

    The server can just stack Docker images up without ever worrying about what’s inside them.

    Another way to think of a container is that it’s a virtual machine without the operating system. It’s a container that holds applications and all their prerequisites in a self-contained unit, hence the name. That container can be moved from one machine to another, or from virtual to dedicated hardware, or from a Fedora installation to Ubuntu and it all just works.

    Or at least that’s the latest wave of the “write-once-run-anywhere” dream that Docker has been riding to fame for the past two years. The reality, of course, is a little different.

    Imagine if you could fire up a new virtual environment on your Linux laptop, write an application in Python 3 and then send it to your co-worker without needing to worry about that fact that she’s running Windows and only has Python 2 installed. If you send her your work as part of a container, then Python 3 and all the elements necessary to recreate the environment you were working in come with your app. All she has to do is download it and run it using Docker’s API interface.

    Then, after your co-worker finishes up the app you can pull in her changes and send the whole thing up to your company’s AWS EC2 server, again not worrying about the OS or environment particulars other than you know Docker is installed.

    But there’s the rub – your app is now tied to Docker, which in turn means the future of your app is tied to the future of Docker.

    What’s perhaps most interesting about Docker and its competitors is that in every case, from Canonical to Google, there’s a very clear message: the future of deployment is in containers. The future of development and deployment, and especially the so-called cloud hosting market, will be containers.

    Source: http://www.theregister.co.uk/2015/05/01/docker_rocket_everybody_wins/?page=2
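
    A hedged sketch of how the Python 3 hand-off described above might look with the Docker SDK for Python; the project directory, Dockerfile and tag are invented for illustration, and the Dockerfile would start from a python:3 base image so the interpreter travels with the app:

        import docker

        client = docker.from_env()

        # Build an image from the project directory; the (hypothetical)
        # Dockerfile pulls in Python 3 and the app's dependencies.
        client.images.build(path="./myapp", tag="example/myapp:dev")

        # Anyone with Docker installed can now run it, regardless of which
        # Python version (if any) their own machine has.
        output = client.containers.run("example/myapp:dev", remove=True)
        print(output.decode())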

  21. Tomi Engdahl says:

    Docker death blow to PaaS? The fat lady isn’t singing just yet folks
    Could they work together? Yeah, why not
    http://www.theregister.co.uk/2015/06/01/did_docker_kill_paas/

    Logically nestled just above Infrastructure-as-a-Service and just beneath the Software-as-a-Service applications it seeks to support, we find Platform-as-a-Service (PaaS).

    As you would hope from any notion of a platform, PaaS comes with all the operating system, middleware, storage and networking intelligence we would want — but all done for the cloud.

    However, as good as it sounds, critics say PaaS has failed to deliver in practical terms. PaaS offers a route to higher level programming with built-in functions spanning everything from application instrumentation to a degree of versioning awareness. So what’s not to like?

    Does PaaS offer to reduce complexity via abstraction so much that it fails through lack of fine-grain controls? Is PaaS so inherently focused on trying to emulate virtualised hardware it comes off too heavy on resource usage? Is PaaS just too passé in the face of Docker?

    Proponents of Docker say this highly popularised (let’s not deny it) containerisation technology is not just a passing fad and that its lighter-weight approach to handling still-emerging microservices will ensure its longer-term dominance over PaaS.

    Dockerites (we’ll call them that) advocate Docker’s additional level of abstraction that allows it to share cloud-based operating systems, or more accurately, system singular.

    This light resource requirement means the Docker engine can sit on top of a single instance of Linux, rather than a whole guest operating system for each virtual machine, as seen in PaaS.

    There’s great efficiency here if we do things “right” – in other words, a well-tuned Docker container shipload can, in theory, run more application instances on the same amount of base cloud data centre hardware.

    Ah, but is it all good news? Docker has management tool issues according to naysayers. Plus, Docker is capable of practically breaking monitoring systems, so say the IT monitoring tools companies. But then they would say that wouldn’t they?

    The big question is: does Docker isolation granularity and resource consolidation utilisation come at the expense of management tool-ability? “Yes it might do, in some deployment scenarios,” is probably the most sensible answer here.

    “In the case of PaaS you don’t have much control over many of the operational aspects associated with managing your application, for example the way it handles scaling, high availability, performance, monitoring, logging, updates. There is also a much stronger dependency on the platform provider in the choice of language and stack,” said Nati Shalom, CTO and founder of cloud middleware company GigaSpace.

    So does Docker effectively replace PaaS or does Docker just drive the development of a new kind of PaaS with more container empathy and greater application agnosticism?

    PaaS has been criticised for forcing an “opinionated architecture” down on the way cloud applications are packaged, deployed and managed. Surely we should just use Docker, but with an appropriate level of orchestration control too right? It’s not going to be that simple is it?

    “Yes, it can be that simple,” argues Brent Smithurst, vice president of product at cross-platform development tools company ActiveState.

    “Containers, including Docker, are an essential building block of PaaS. Also, PaaS offers additional benefits beyond application packaging and deployment, including service provisioning and binding, application monitoring and logging, automatic scaling, versioning and rollbacks, and provisioning across cloud availability zones,” he added.

  22. Tomi Engdahl says:

    Not in front of the CIO: grassroots drive Linux container adoption
    It’s indifferent at the top
    http://www.theregister.co.uk/2015/06/22/linux_containers_enterprise_adoption/

    Here’s a thing. CIOs don’t care about vast swathes of technology in their organisations. They have people to do that.

    While they make speeches at fancy conferences about being agile / compliant / regulated / on top of the suppliers / skilling up the worker bees, those worker bees are handling the next Windows refresh.

    Sometimes, technology can be adopted at a grassroots level without ever troubling the upper echelons. Linux containerisation may be a good example.

    A survey of 381 IT decision makers and professionals commissioned by Red Hat, published on June 22, 2015, shows that nearly all are planning container development on the Linux operating system.

    However, upper management and CIO directives play limited roles in containerised application adoption in the enterprise, respondents say. Internal champions are the grassroots IT implementers (39 per cent) and middle managers (36 per cent).

  23. Tomi Engdahl says:

    Docker-ed vessel Portworx takes three Ocarina folk aboard
    Container-aware storage startup unveils boxy software product
    http://www.theregister.co.uk/2015/06/22/three_ocarina_folk_get_containerised_in_portworx/

  24. Tomi Engdahl says:

    Jordan Novet / VentureBeat:
    Google launches Container Engine in beta, makes Container Registry generally available
    http://venturebeat.com/2015/06/22/google-launches-container-engine-in-beta-makes-container-registry-generally-available/

    Google today broadened the availability of a couple of its cloud services for working with applications packaged up in containers. The Google Container Engine for deploying and managing containers on Google’s cloud infrastructure, until now available in alpha, is now in beta. And the Google Container Registry for privately storing Docker container images, previously in beta, is now generally available.

    Google has made a few tweaks to Container Engine, which relies on the Google-led Kubernetes open-source container management software, which can deploy containers onto multiple public clouds. For one thing, now Google will only update the version of Kubernetes running inside of Container Engine when you run a command. And you can turn on Google Cloud Logging to track the activity of a cluster “with a single checkbox,” Google product manager Eric Han wrote in a blog post on the news.

    Google has repeatedly pointed out that for years it has run internal applications inside containers, rather than more traditional virtual machines. And while Kubernetes runs just fine on any infrastructure, Google cloud executive Craig McLuckie last year told VentureBeat that “it works extremely well on the Google Cloud Platform.”

    The big picture here is that Google aspires to become even more of a player in the public cloud market than it is now. Solid tools for storing images and deploying apps in containers can help Google in this regard.

    Meanwhile, other leading cloud providers, such as Microsoft, IBM, and Amazon Web Services, have been executing on their container strategies too.

  25. Tomi Engdahl says:

    Sean Michael Kerner / eWeek:
    Docker, CoreOS unite with Google, Amazon, Microsoft, VMware, others to define container standards under new Open Container Project, backed by Linux Foundation — Docker Rivals Join Together in Open Container Effort — The open-source container community is uniting today …

    Docker Rivals Join Together in Open Container Effort
    http://www.eweek.com/enterprise-apps/docker-rivals-join-together-in-open-container-effort.html

    The Linux Foundation is now home to the new Open Container Project, bringing Docker, CoreOS and others together to advance open-source containers for all.
    The open-source container community is uniting today with the new Open Container Project (OCP), which is backed by the Linux Foundation. The OCP ends months of speculation and debate in the Docker community about container specifications and unites the biggest backers of containers behind a common purpose.

    In December 2014, the nascent community around Docker fractured, with CoreOS launching its own container technology and the appc (App Container Image) effort to define a standard for containers. Now with the Open Container Project, the goal is to mend fences and find common ground to define a base specification for containers that will work across Docker, CoreOS and any other OCP-based container technology.

    To be clear though, the point of the OCP is not to standardize Docker, but rather to standardize the baseline for containers

    Today, Rocket is a command line tool for running app containers, but in the future it will become a tool for running OCP containers, according to Polvi.

    “Users will have the choice to choose the best technology for their jobs,” Polvi said. “As vendors, we need to differentiate and demonstrate unique value to justify the tools that we’re providing.”

    From an interoperability perspective, there is already an installed base of Docker container and Rocket container users today. Golub noted that the initial milestone for the OCP is to build a common format that both Docker and Rocket can convert into.

  26. Tomi Engdahl says:

    5 must-watch Docker videos
    http://opensource.com/business/15/5/must-watch-docker-videos

    When you’re interested in learning a new technology, sometimes the best way is to watch it in action—or at the very least, to have someone explain it one-on-one. Unfortunately, we don’t all have a personal technology coach for every new thing out there, so we turn to the next best thing: a great video.

    With that in mind, if you’re interested in Docker (and these days, who isn’t?), here are five great videos to get you started with the basics you should know.

  27. Tomi Engdahl says:

    Red Hat: PaaS or IaaS, everything’s about CONTAINERS now
    New private cloud offerings go all-in for Docker and Kubernetes
    http://www.theregister.co.uk/2015/06/25/red_hat_goes_container_crazy/

    Red Hat Summit: Docker wasn’t the only firm blabbing away about containers this week. On Wednesday, top Linux vendor Red Hat unveiled two new offerings at its Red Hat Summit conference in Boston, and both had containers at their cores.

    The first of these was OpenShift Enterprise 3, the latest version of Shadowman’s locally deployable platform-as-a-service (PaaS) offering for building private clouds.

    “As a leading contributor to both the Docker and Kubernetes open source projects, Red Hat is not just adopting these technologies but actively building them upstream in the community,” the Linux maker said in a canned release.

    It had also better hope that OpenShift’s having found container religion makes it more attractive to customers.

    As with previous versions of Red Hat’s PaaS, this third edition piles layers of tools on top of this foundation, including its source-to-container build technology that can pull source code from a Git repository and deliver a Docker containerized final product.

    Also included are various middleware services from Red Hat’s JBoss line, including its version of the Tomcat application server and the JBoss A-MQ message queue.

  28. Tomi Engdahl says:

    Docker shocker: It’s got a commercial product, and is ready to SELL IT
    Look, ma, we’re a real business now
    http://www.theregister.co.uk/2015/06/24/docker_commercial_offering/

    DockerCon 2015: The theme of Docker’s past conferences has been increasing adoption of the container tech, but the theme of this year’s DockerCon was moving beyond experimentation and into production deployment.

    By extension, it was also about how Docker plans to make money.

    The startup is well-heeled, having received enough venture cash in successive funding rounds to reach an estimated valuation of $1bn. CEO Ben Golub has said the firm has more money than it even needs right now. But those backers are going to want to see return on their investments somehow.

    Customers already pay Docker to host private image repositories on Docker Hub. But the subscriptions are cheap and the initial plans lacked features that enterprise customers demand, like granular access controls and the ability to integrate with their existing authentication systems.

    To that end, Scott Johnson, Docker’s senior veep of product management, took the DockerCon stage on Tuesday to announce Docker’s first commercial offering for businesses.

    The first piece of its push is Docker Trusted Registry, a beefed-up, on-premise version of the open source Docker Registry that was first announced with an alpha release at DockerCon Europe in December 2014 (only back then it was being called Docker Hub Enterprise).

    “In large organizations, compliance becomes a very important requirement, a very important feature. And so, logging all events and all accesses by all the users on the platform became very, very important,”

    Docker doesn’t plan to sell Trusted Registry on its own, though. Instead, it will form a part of subscription solutions offerings in a variety of tiers, ranging from an inexpensive starter package for workgroups to large enterprise contracts.

    Each subscription includes Docker Engines that have been certified against production operating systems, which currently include Red Hat Enterprise Linux 7.0 and 7.1 and Ubuntu 14.04, with SELinux and AppArmor enabled, respectively. Docker will offer full support for the engines for 12 months after release, including patches and hotfixes.

    Subscribers then have the option of managing their containers either by installing Docker Trusted Registry in their own data centers or by choosing to have Docker host their images in a commercial SaaS scenario.

    The last piece of the puzzle is commercial support for the whole stack, which is priced based on both the number of Docker Engines supported and the level of support required. Naturally, 24/7 phone support costs more than “when we can get to it” support via email.

    You’ll need to talk to a sales rep to find out just what your configuration might cost, but Johnson did name one figure. The Starter level subscription – which includes a single Docker Trusted Registry, 10 certified Docker Engines, and email support – can be had for just $150 per month.

  29. Tomi Engdahl says:

    Docker and Microsoft unite Windows and Linux in the cloud
    Redmond demos cross-platform containerized apps
    http://www.theregister.co.uk/2015/06/24/microsoft_dockercon_demos/

    DockerCon 2015: Microsoft has doubled down on its support for Docker, further integrating the software container tech with Azure and Visual Studio Online and demoing the first-ever containerized application spanning both Windows and Linux systems.

    The software giant first showed off its support for Docker on its Azure cloud at the DockerCon conference in June 2014. Then in October it said it would introduce Docker-compatible containers for Windows in the next version of Windows Server.

    Then, during Tuesday morning’s DockerCon keynote, Microsoft Azure CTO Mark Russinovich went one step further by giving a demo of a containerized application where some of the code ran on Linux and some on Windows Server.

    Cleverly, he pushed a container with the ASP.Net portion of the code to the Linux server, while the Windows host ran a container with the Node.js portion. Ordinarily you might expect it to work the other way around.

    Docker, Docker everywhere

    Behind the scenes, Docker’s own orchestration tools – including Docker Compose and Docker Swarm – handled the grunt work for both operating systems. But Russinovich didn’t need to muck about with the command line, thanks to the newly implemented integration of Docker’s tools with Visual Studio.

    Russinovich first demonstrated how the IntelliSense feature of Redmond’s free, cross-platform Visual Studio Code editor worked with Docker container configuration files. For example, it detected when he was typing the name of an image file and IntelliSense’s code completion feature automatically pulled in a list of possible matches direct from Docker Hub.

    Support for publishing projects to Docker hosts is coming to the full-fat Visual Studio IDE, Russinovich said. But for his demo, he instead used Visual Studio Code to upload his project to Visual Studio Online, which also now includes Docker integration.

    The upload automatically triggered a series of continuous integration (CI) steps, Russinovich said, such as building Docker images, running containerized unit tests, pushing the images to Docker Hub, creating a Docker Swarm cluster on a collection of Azure VMs, and finally pushing the composed, multi-container application to the cluster.

    It’s still early days yet for Microsoft and containers, but Redmond is clearly all-in on the concept. The proof? In addition to everything he demoed onstage at DockerCon, Russinovich had another bombshell to drop. Since the beginning of May, he said, the Number One contributor to the open source Docker code base has been Microsoft.

  30. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    As Kubernetes Hits 1.0, Google Donates Technology To Newly Formed Cloud Native Computing Foundation
    http://techcrunch.com/2015/07/21/as-kubernetes-hits-1-0-google-donates-technology-to-newly-formed-cloud-native-computing-foundation-with-ibm-intel-twitter-and-others/

    Kubernetes, the open-source container management tool Google launched last February, hit version 1.0 today. With this update, Google now considers Kubernetes ready for production. What’s more important, though, Google is also ceding control over Kubernetes and is donating it to a newly formed foundation — the Cloud Native Computing Foundation (CNCF) that will be run by the Linux Foundation. Other partners in the new foundation include AT&T, Box, Cisco, Cloud Foundry Foundation, CoreOS, Cycle Computing, Docker, eBay, Goldman Sachs, Huawei, IBM, Intel, Joyent, Kismatic, Mesosphere, Red Hat, Switch SUPERNAP, Twitter, Univa, VMware and Weaveworks.

  31. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Open Container Initiative Gains Momentum As AT&T, Oracle, Twitter And Others Join — A month ago, Docker and the Linux Foundation announced the Open Container Project at Docker’s developer conference. Now called the Open Container Initiative, the project is seeing rapid growth

    Open Container Initiative Gains Momentum As AT&T, Oracle, Twitter And Others Join
    http://techcrunch.com/2015/07/22/open-container-initiative-gains-momentum-as-att-oracle-twitter-and-others-join/

    A month ago, Docker and the Linux Foundation announced the Open Container Project at Docker’s developer conference. Now called the Open Container Initiative, the project is seeing rapid growth, as the Linux Foundation’s executive director Jim Zemlin announced at OSCON this morning. Not only have 14 new companies signed up for the project, but as Zemlin announced today, the OCI now also sports a draft charter.

    The new partners backing the initiative, which will shepherd the future of the Docker container spec going forward in order to establish a common standard around containers, include AT&T, ClusterHQ, Datera, Kismatic, Kyup, Midokura, Nutanix, Oracle, Polyverse, Resin.io, Sysdig, SUSE, Twitter and Verizon (TechCrunch’s corporate overlords). They are joining founding members like Amazon, Microsoft, CoreOS, Docker, Intel, Mesosphere, Red Hat and others.

  32. Tomi Engdahl says:

    New Docker crypto locker is a blocker for Docker image mockers
    Version 1.8 adds container signing to prevent man-in-the-middle attacks
    http://www.theregister.co.uk/2015/08/13/docker_content_trust/

    Docker has tackled the problem of secure application container distribution with a new system that supports signing container images using public key cryptography.

    The new feature, known as Docker Content Trust, is the main attraction of Docker 1.8, the latest version of the tool suite that was announced on Wednesday.

    “Before a publisher pushes an image to a remote registry, Docker Engine signs the image locally with the publisher’s private key,” Docker security boss Diogo Mónica said in a blog post outlining the process. “When you later pull this image, Docker Engine uses the publisher’s public key to verify that the image you are about to run is exactly what the publisher created, has not been tampered with and is up to date.”

    Docker is basing its code-signing capabilities on Notary, a standalone piece of software that it first unveiled at the DockerCon 2015 conference in June. Notary, in turn, is based on The Update Framework (TUF), a project that offers both a specification and a code library for generic software update systems.

    At DockerCon, Docker CTO Solomon Hykes explained that he likes the TUF design because it not only offers protection against content forgery and various forms of man-in-the-middle attacks, but it also offers what the TUF project calls “survivable key compromise.”

    “Basically it means if one of the keys in the system gets lost or stolen, you’re in trouble, but you’re not completely, impossibly screwed,” Hykes said. “It means you can apply regular policies to deal with the issue, depending on the magnitude, instead of going out of business.”
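
    In practice, turning this on from a script boils down to setting the standard DOCKER_CONTENT_TRUST switch before invoking the Docker CLI. A minimal sketch, with a placeholder image name:

        import os
        import subprocess

        # With content trust enabled, docker pull verifies the publisher's
        # signature via Notary and refuses unsigned or tampered images.
        env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
        result = subprocess.run(
            ["docker", "pull", "example/signed-image:latest"],
            env=env, capture_output=True, text=True,
        )
        print(result.returncode)
        print(result.stdout or result.stderr)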

  33. Tomi Engdahl says:

    Intel and CoreOS add hardware virty support to rkt containers
    Is it containers? Is it virtualization? It’s both
    http://www.theregister.co.uk/2015/08/19/rkt_gets_vtx_virtualization_support/

    Intel and CoreOS have teamed up to produce an application container runtime that supports hardware enhanced virtualization.

    Version 0.8.0 of CoreOS’s rkt (pronounced “rocket”) container runtime was announced at the LinuxCon/CloudOpen/ContainerCon conference taking place this week in Seattle.

    Among the main features of the new release is support for Intel’s VT-x in-silicon virtualization technology. Intel first demonstrated the unorthodox container tech in May as part of its Clear Linux Project, dubbing it Clear Containers.

    Unlike the default rkt runtime engine, which fires up containers using Linux kernel–based sandboxing technologies including cgroups and namespaces, Intel’s contribution launches container images as full KVM virtual machines.

    It’s an approach that uses more system resources than typical Linux containers but offers the enhanced security of a hypervisor. Plus, Intel says its on-chip virty extensions minimize the performance overhead.

    “By optimizing the heck out of the Linux boot process, we have shown that Linux can boot with the security normally associated with virtual machines, almost as quickly as a traditional container,”

  34. Tomi Engdahl says:

    Build a “Virtual SuperComputer” with Process Virtualization
    http://www.linuxjournal.com/content/build-virtual-supercomputer-process-virtualization?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    Build and release is a complicated process. I really don’t need to tell anyone that…but I did anyway. But a rapid, precise feedback cycle that notifies whether the latest build of software passed successfully or not, and one that tells you WHAT failed specifically instead of just “operation failed,” can mean the difference between being able to quickly fix defects, or not discovering those defects until later in the project’s lifecycle, thereby increasing exponentially the cost and difficulty of fixing them. IncrediBuild has been addressing precisely this with Microsoft TFS for years, and is now reinventing their tool for Linux developers.

    The secret, according to Eyal Maor, IncrediBuild’s CEO, is what they call Process Virtualization. In a nutshell, Process Virtualization helps to speed up the build process by allowing the build machine to utilize all the cores in ALL the machines across the network. Maor says that in most cases, “cycles can be reduced from 20 minutes to under 2 minutes, and there are many more cycles available.”

    For Linux Journal readers, perhaps the most interesting part of this is how it works.

    So obviously, operations need to be done in parallel, but the devil is in the details for parallel processing. Remember that, if improving speed by more efficient usage of multi-core CPUs is the goal, attention should be focused on parallelizing CPU-bound operations, not I/O-bound operations. Beyond that, there are a couple of possible parallel processing solutions:

    Clustering – Essentially parallelizing the build process at the workflow level, running one complete workflow on each agent, so that multiple build requests can be handled in parallel.
    HPC – Computers (physical or virtual) that can aggregate computing power in a way that delivers much higher performance than that of an individual desktop computer or workstation. HPCs make use of many CPU cores in order to accomplish this. Ubuntu-based machines, for example, can support up to 256 cores.

    While either of these solutions provides opportunity to speed up processes, both have limitations and drawbacks.

    IncrediBuild, on the other hand, transforms any build machine into a virtual supercomputer by allowing it to harness idle CPU cycles from remote machines across the network even while they’re in use. No changes are needed to source code, no additional hardware is required, no dedicated cluster is necessary and it works on both local and WAN so one can scale limitlessly into the cloud. It does not disturb the normal performance of existing operations on the remote machine, instead making use of idle CPU cycles on any cores on the network.

    Process virtualization consists of two actors – the initiator machine and the helper machines. The initiator distributes computational processes over the network as if the remote machines’ cores were its own, while the processes themselves are executed on the helpers.
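
    IncrediBuild’s own mechanism is proprietary, but the initiator/helper split can be illustrated with ordinary open-source tools. A hedged sketch using GNU parallel to fan independent, CPU-bound compile jobs out to idle machines over SSH (hosts.txt and the source layout are made up for illustration):

        # hosts.txt lists the "helper" machines, one user@host per line.
        # --trc transfers each source file to a helper, returns the matching .o
        # file and cleans up; the "initiator" just collects the results.
        find src -name '*.c' | parallel --sshloginfile hosts.txt --trc {.}.o \
            gcc -c -O2 {} -o {.}.o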

    IncrediBuild is a product of Xoreax Software which launched in 2002 and has since become the de facto standard solution for code-build acceleration.

    Reply
  35. Tomi Engdahl says:

    Oracle: Docker container tech will be in the Zone on Solaris
    Larry’s Unix to support cross-platform containers with unique Solaris features
    http://www.theregister.co.uk/2015/07/30/docker_container_support_on_solaris/

    Oracle is the latest company to get on the Docker bandwagon, having announced support for the application container technology to come in a future version of Solaris Unix.

    Docker arose out of the Linux world, and its original implementation takes advantage of a number of Linux kernel features, including LXC, cgroups, and namespaces.

    Solaris, meanwhile, has had native support for containers since 2005, in the form of Solaris Zones. Rather than aping how Docker handles containers on Linux, Oracle plans to stick with this arguably superior technology.

    What native Docker support will bring to Solaris, however, is the ability to use Docker APIs and Docker-compatible tools to package, instantiate, manage, and orchestrate containers while retaining the heightened security and other technical advantages of Zones.

    “Today’s announcement really gives developers the best of both worlds – access to Oracle Solaris’ enterprise class security, resource isolation and superior analytics with the ability to easily create containers in dev/test, production and cloud environments,”

    Reply
  36. Tomi Engdahl says:

    Coho Data containering in the dock – now with added Google, Splunk
    These are not the Hyper-Converged Infrastructure Appliances you’re looking for
    http://www.theregister.co.uk/2015/08/24/coho_data_containering_in_the_dock/

    Coho Data storage arrays will be able to run Docker containers directly on the storage nodes and use Google’s Kubernetes interface for configuring and deploying microservices.

    Startup Coho Data says its customers can now run new data-centric services and apps directly adjacent to stored data or, putting it another way, “allow third-party applications to run directly within its customers’ enterprise storage systems”.

    It’s trying very hard to say that its scale-out, hybrid flash/disk and all-flash MicroArrays are not HCIAs (Hyper-Converged Infrastructure Appliances) like those of Nutanix, SimpliVity and their colleagues. We’re told Coho’s arrays only do closely-coupled storage/compute work, such as video stream transcoding. Cynics might suspect that’s because they only have poky little CPUs.

    “Containers provide an opportunity to incorporate third-party logic for enhanced data protection, including back-up agents, malware scanners and e-discovery and audit tools directly within the platform.”

    Reply
  37. Tomi Engdahl says:

    The container-cloud myth: We’re not in Legoland anymore
    Why interconnectivity in the cloud is tougher than just stacking bricks
    http://www.theregister.co.uk/2015/08/25/container_cloud_legoland/

    Everything is being decoupled, disaggregated and deconstructed. Cloud computing is breaking apart our notions of desktop and server, mobile is decoupling the accepted concept of wired technology, and virtualisation is deconstructing our understanding of what a network was supposed to be.

    Inside this maelstrom of disconnection, we find this thing we are supposed to call cloud migration. Methods, tools and protocols that guarantee they will take us into the new world of virtualisation, which promises “seamless” migration and robust results.

    It turns out that taking traditional on-premises application structures into hosted virtualised worlds is way more complex than was first imagined.

    Questions of application memory and storage allocation are fundamentally different in cloud environments. Attention must be paid to application Input/Output (I/O) and transactional throughput. The location of your compute engine matters a lot more if data can be on-premises private, hybrid or public cloud located – or, heaven forbid, some combination of the three.

    Essentially, the parameters that govern at every level and layer of IT can take on a different shape. The primordial spirit of IT has changed: decoupling creates a new beast altogether.

    In Legoland (the concept, not the theme park), objects can be built, disassembled and then rebuilt into other things or even combined into other objects. The concept of meshed interlocking connectivity in Lego is near perfect. Or at least it is in the basic bricks and blocks model until the accoutrements come along.

    Reply
  38. Tomi Engdahl says:

    Concerning Containers’ Connections: on Docker Networking
    http://www.linuxjournal.com/content/concerning-containers-connections-docker-networking

    Containers can be considered the third wave in service provision after physical boxes (the first wave) and virtual machines (the second wave). Instead of working with complete servers (hardware or virtual), you have virtual operating systems, which are far more lightweight. Instead of carrying around complete environments, you just move applications, with their configuration, from one server to another, where it will consume its resources, without any virtual layers. Shipping over projects from development to operations also is simplified—another boon. Of course, you’ll face new and different challenges, as with any technology, but the possible risks and problems don’t seem to be insurmountable, and the final rewards appear to be great.

    Docker is an open-source project based on Linux containers that is showing high rates of adoption. Docker’s first release was only a couple years ago, so the technology isn’t yet considered mature, but it shows much promise. The combination of lower costs, simpler deployment and faster start times certainly helps.

    In this article, I go over some details of setting up a system based on several independent containers, each providing a distinct, separate role, and I explain some aspects of the underlying network configuration. You can’t think about production deployment without being aware of how connections are made, how ports are used and how bridges and routing are set up, so I examine those points as well, while putting a simple Web database query application in place.

    Basic Container Networking

    Let’s start by considering how Docker configures network aspects. When the Docker service dæmon starts, it configures a virtual bridge, docker0, on the host system. Docker picks a subnet not in use on the host and assigns a free IP address to the bridge. The first try is 172.17.42.1/16, but that could be different if there are conflicts. This virtual bridge handles all host-to-container communications.

    When Docker starts a container, by default, it creates a virtual interface on the host with a unique name, such as veth220960a, and an address within the same subnet. This new interface will be connected to the eth0 interface on the container itself. In order to allow connections, iptables rules are added, using a DOCKER-named chain. Network address translation (NAT) is used to forward traffic to external hosts, and the host machine must be set up to forward IP packets.

    Docker uses a bridge to connect all containers on the same host to the local network.

    The standard way to connect a container is in “bridged” mode, as described previously. However, for special cases, there are more ways to do this, which depend on the --net option for the docker run command.
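
    The moving parts described in the article are easy to inspect on any Docker host; a minimal sketch (interface names, addresses and the nginx image are just examples):

        # The docker0 bridge the daemon created, with its private subnet.
        ip addr show docker0

        # Start a container with a published port; Docker adds a vethXXXX
        # interface on the host and attaches it to docker0.
        docker run -d -p 8080:80 --name web nginx

        # The NAT rules Docker inserted to forward host port 8080 to the container.
        sudo iptables -t nat -L DOCKER -n

        # One of the non-default modes selected via --net: share the host's
        # network stack instead of getting a veth pair on the bridge.
        docker run -d --net=host nginx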

    Reply
  39. Tomi Engdahl says:

    Hands on with Windows Server 2016 Containers
    Containers, Docker support are big new features, but the current preview is rough
    http://www.theregister.co.uk/2015/08/31/hands_on_with_windows_server_2016_containers/

    First Look Microsoft has released Technical Preview 3 of Windows Server 2016, including the first public release of Windows Server Containers, perhaps the most interesting new feature.

    A container is a type of virtual machine (VM) that shares more resources than a traditional VM.

    “For efficiency, many of the OS files, directories and running services are shared between containers and projected into each container’s namespace,”

    Containers are therefore lightweight, so you can run more containers than VMs on a host server. They are also less flexible. Whereas you can run Linux in a VM running on Windows, that idea makes no sense for a container, which shares operating system files with its host.

    Containers have existed for a long time on Unix-like operating systems, but their usage for application deployment increased following the release of Docker as an open source project in early 2013.

    Docker provides a high-level API and tools for managing and deploying Linux container images, and Docker Hub is a public repository of container images.

    Windows developers have missed out on the container fun, but Microsoft is putting that right in Server 2016 and on its Azure cloud platform. Container support is now built into Windows, with two different types on offer:

    Windows Server Containers: Container VMs use shared OS files and memory
    Hyper-V Containers: VMs have their own OS kernel files and memory

    The current technical preview does not support Hyper-V containers.

    In addition, Microsoft has ported Docker to Windows. This means you can use the Docker API and tools with Windows containers. It does not mean that existing Linux-based Docker images will run on Windows, other than via Linux VMs as before.
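
    In practice that means the familiar Docker CLI drives Windows containers too, just with Windows base images; a hedged sketch against Technical Preview 3 (the windowsservercore image name is taken from the preview documentation and may change):

        # From PowerShell on the container host: list the Windows base images.
        docker images

        # Run a Windows Server Core container and open cmd inside it.
        docker run -it windowsservercore cmd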

    Reply
  40. Tomi Engdahl says:

    The container-cloud myth: We’re not in Legoland anymore
    Why interconnectivity in the cloud is tougher than just stacking bricks
    http://www.channelregister.co.uk/2015/08/25/container_cloud_legoland/

    Clicking & sticking slickness

    The danger comes about when people start talking about interconnectivity in the cloud (and the Big Data that passes through it) and likening new “solutions” to the clicking-and-sticking ease we enjoy with Lego. It’s just not that simple.

    For all the advantages of microservices, they bring with them a greater level of operational complexity.

    Samir Ghosh, chief executive of Docker-friendly platform-as-a-service provider WaveMaker, reckons that compared with a “monolithic” (meaning non-cloud) application, a microservices-based application may have dozens, hundreds, or even thousands of services, all of which must be managed through to production – and each of those services requires its own APIs.

    Reply
  41. Tomi Engdahl says:

    Security is an Important Coding Consideration Even When You Use Containers (Video)
    http://developers.slashdot.org/story/15/09/23/184233/security-is-an-important-coding-consideration-even-when-you-use-containers-video?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    Last month Tom Henderson wrote an article titled Container wars: Rocket vs. Odin vs. Docker. In that article he said, “All three are potentially very useful and also potentially very dangerous compared to traditional hypervisor and VM combinations.”

    Reply
  42. Tomi Engdahl says:

    Canonical rolls out Ubuntu container management for suits
    Time to get serious with LXD
    http://www.theregister.co.uk/2015/10/22/ubuntu_15_10_server/

    Canonical has kicked out its container management architecture for the suits with Ubuntu 15.10.

    The Linux spinner is today expected to drop its latest distro, with the server edition including final code for its Linux Container Hypervisor (LXD).

    LXD is Canonical’s container management environment which it claims is similar to a hypervisor but of course isn’t a hypervisor.

    The payoff, according to Canonical, is you get the security and performance of a hypervisor, but without the fat overhead.

    Security is the watchword with LXD – a system-level daemon that manages how containers are raised and provisioned, treating them like little Linux instances with the requisite level of security.

    Containers – especially Docker – have proved popular with devs – but not with the boys and girls down in security and compliance, because they aren’t isolated like a virtual machine.

    Mark Baker, Canonical’s Ubuntu server and cloud product manager, said: “The security and audit guys want to inspect containers, need to be able to do back up and monitoring and so need something that looks and behaves like a full Linux system.

    “Many organisations have geared their processes to that environment but Docker requires a mindset change.

    Other features in Ubuntu 15.10 include OpenStack Autopilot, for deploying and managing clouds built on the open-source OpenStack platform.
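
    For the curious, the day-to-day interface is the lxc client talking to the LXD daemon; a minimal sketch (the container name and image alias are just examples):

        # Launch a full Ubuntu system container from the public image remote.
        lxc launch ubuntu:15.10 web01

        # List containers, then get a shell inside one, which is what lets the
        # security and audit teams treat it like a normal Linux box.
        lxc list
        lxc exec web01 -- /bin/bash
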

    Reply
  43. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Tectonic, CoreOS’s Kubernetes-Based Container Platform, Hits General Availability
    http://techcrunch.com/2015/11/03/tectonic-coreoss-hosted-container-platform-hits-general-availability/

    A few months ago, CoreOS launched the preview of Tectonic, a container platform based on the Google-incubated Kubernetes container management and orchestration tool. Starting today, CoreOS considers Tectonic to be out of beta.

    As CoreOS CEO and co-founder Alex Polvi told me, the team worked with hundreds of customers during the beta and found (and fixed) all kinds of issues. Now, however, the team believes the product is ready for commercial use.

    While CoreOS isn’t disclosing its pricing at this point, Polvi tells me that the company will charge according to the aggregate memory used by the cluster. As the amount of memory you need changes, so does the price. “One of the biggest value propositions for containers is that you can work between the cloud and your own data center, so it wasn’t quite fair to charge per node,” Polvi said.

    Reply
  44. Tomi Engdahl says:

    How much do containers thrash VMs in power usage? Thiiiis much
    When process sandboxes talk, electricity meters scarcely need to listen
    http://www.theregister.co.uk/2015/11/06/containers_thrash_vms_in_the_power_consumption_stakes/

    Ericsson researcher Roberto Morabito has compared the power consumption requirements of virtual machines and containers, and found the latter more economical.

    Power Consumption of Virtualization Technologies: an Empirical Investigation put Xen, KVM, Docker and LXC through their paces on a series of tasks and measured how many watts they sucked along the way.

    In tests on Linux machines, Morabito found that the four sets of code produced similar results when asked to perform computational tasks. Things got interesting when measuring VMs and containers while they were sending and receiving TCP traffic, as containers did rather better under that load.

    “This is mainly due to the fact that the network packets have to be processed by extra layers in a hypervisor environment in comparison to a container environment,” Morabito suggests.

    The findings are likely to interest hyperscale IT operations, as power costs are one of such outfits’ biggest expenditures. They may also raise a few eyebrows among those considering running containers inside virtual machines, an arrangement suggested as a way to bring manageability to containers but also one this study suggests may load some extra power costs onto users.

    Power Consumption of Virtualization Technologies: an Empirical Investigation
    http://arxiv.org/abs/1511.01232

    This paper presents the results of a performance comparison, in terms of power consumption, of four different virtualization technologies: KVM and Xen, which are based on hypervisor virtualization, and Docker and LXC, which are based on container virtualization.

    Our initial results show how, regardless of the number of virtual entities running, both kinds of virtualization alternatives behave similarly in the idle state and under CPU/memory stress tests. By contrast, the results on network performance show differences between the two technologies.

    Reply
  45. Tomi Engdahl says:

    HPE comes over all Docker, throws containers at Helion tools lineup
    Move of its tools kit illustrates interest in the tech
    http://www.theregister.co.uk/2015/11/16/hpe_puts_docker_at_heart_of_dev_products/

    HP Enterprise jumped into the Docker ecosystem with both feet today, running the container technology right through its Helion cloud portfolio.

    The newly-minted, veteran enterprise tech vendor used Dockercon Europe to take the wraps off what it described as a “comprehensive set of enterprise class tools and services to help customers develop, test and run Docker environments at enterprise scale”.

    This Docker-ised lineup will span cloud, software, storage and services, it added.

    Top of the list is HPE Helion Development Platform 2.0 with support for Docker, which HPE promised would allow developers and IT operators to deploy microservices as Docker containers.

    HPE added that Helion Development Platform 2.0 includes the Helion Code Engine, “a continuous integration/continuous deployment (CI/CD) service for automating the build, test and deploy workflow for code”.

    “This service is merged into a Git repository through a Docker Trusted Registry and the Helion Development Platform”, the company said. More DevOps/Continuous Delivery boxes ticked.

    Reply
  46. Tomi Engdahl says:

    Docker launches Universal Control Plane at enterprises
    Runs anything on anything, apparently
    http://www.theregister.co.uk/2015/11/17/docker_launches_universal_control_plane/

    Docker flew from the cloud into on-premise computing with the unwrapping of its Universal Control Plane 1.0, which it promised would allow real companies to deploy real containerized apps in real data centres.

    The vendor said the product/service/whatever – debuted at the DockerCon EU event in Barcelona – would give IT ops folk the ability to “retain centralized control of infrastructure provisioning, user management and compliance” across public, private and hybrid clouds. Without compromising developers’ prized agility.

    The docs accompanying the release used the word “any” a lot. By infrastructure, it means compute, network and storage, as well as integration with security and monitoring.

    It promises support for “any” application and “any programming language”, “any” platform as long as it is Windows, Linux or Solaris, and any instance, whether bare metal, VM or cloud. And, because we’re talking development here, you slot it into “any” stage of the development lifecycle from dev to test to QA to production.

    Docker is describing the tech as “enterprise-grade management”

    Reply
  47. Tomi Engdahl says:

    New Hack Shrinks Docker Containers
    http://developers.slashdot.org/story/16/02/03/221231/new-hack-shrinks-docker-containers

    Promising “uber tiny Docker images for all the things,” Iron.io has released a new library of base images for every major language optimized to be as small as possible by using only the required OS libraries and language dependencies. “By streamlining the cruft that is attached to the node images and installing only the essentials, they reduced the image from 644 MB to 29 MB,” explains one technology reporter, noting that this makes it quicker to download and distribute the image, and also more secure. “Less code/less programs in the container means less attack surface…”

    http://thenewstack.io/microcontainers-iron-ios-new-hack-shrink-docker-containers/
    https://github.com/iron-io/dockers
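
    The trick is visible in the Dockerfile: start from a stripped-down, language-specific base image rather than a full distribution image. A hedged sketch using the iron/node image published in the linked repository (the application files are placeholders, and dependencies are assumed to be vendored in already, e.g. built with the image’s :dev variant):

        # Minimal Node.js image: the base carries only the runtime and the shared
        # libraries it needs, not a full Debian userland.
        FROM iron/node

        WORKDIR /app
        COPY . /app

        CMD ["node", "server.js"]

    Building this with docker build -t myapp . and comparing docker images output against the stock node-based image shows the kind of size difference the article describes.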

    Reply
  48. Tomi Engdahl says:

    CoreOS Launches Rkt 1.0
    http://linux.slashdot.org/story/16/02/04/2125244/coreos-launches-rkt-10?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    Docker is about to get some real competition in the container runtime space, thanks to the official launch of rkt 1.0. CoreOS started building rkt in 2014 and, after more than a year of security, performance and feature improvements, is now ready to declare it ‘production-ready.’ While rkt is a Docker runtime rival, Docker apps will run in rkt, giving users a new runtime choice:

    “rkt will remain compatible with the Docker-specific image format, as well as its own native App Container Image (ACI).”

    CoreOS Launches Docker Rival Rkt 1.0
    http://www.eweek.com/virtualization/coreos-launches-docker-rival-rkt-1.0.html

    While rkt is a competitor to the Docker runtime, users will still be able to run application containers that have been built with Docker tools. The promise of rkt is that of improved performance and security controls, as well as integration with CoreOS’ larger platform effort Tectonic, which provides orchestration.

    “By marking rkt as 1.0, we are committed to maintain backward compatibility with proper deprecation cycles of all features, user experiences and APIs,” Polvi said.

    As a 1.0 release, rkt will also be integrated into CoreOS’ commercial Tectonic platform, which aims to provide a Google-like platform for application deployment and orchestration. For users concerned about migrating from the Docker runtime to the rkt container runtime, Polvi emphasized that rkt will remain compatible with the Docker-specific image format, as well as its own native App Container Image (ACI). That means developers can build containers with Docker and run those containers with rkt. In addition, CoreOS will support the growing ecosystem of tools based around the ACI format.
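
    In practice, that compatibility means an image built with Docker tools can be fetched and started directly by rkt; a minimal sketch (nginx is just an example, and the insecure option is needed because Docker registries do not carry ACI signatures):

        # Fetch and run an image straight from Docker Hub under the rkt runtime.
        sudo rkt run --insecure-options=image docker://nginx

        # Or convert and store it locally first, then run it later.
        sudo rkt fetch --insecure-options=image docker://nginx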

    Reply
  49. Tomi Engdahl says:

    Unikernels, Docker, and Why You Should Care
    http://www.linuxjournal.com/content/unikernels-docker-and-why-you-should-care?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    Docker’s recent acquisition of Unikernel Systems has sent pulses racing in the microservice world. At the same time, many people have no clue what to make of it, so here’s a quick explanation of why this move is a good thing.

    Although you may not be involved in building or maintaining microservice-based software, you certainly use it. Many popular Web sites and services are powered by microservices, such as Netflix, eBay and PayPal. Microservice architectures lend themselves to cloud computing and “scale on demand”, so you’re sure to see more of it in the future.

    Better tooling for microservices is good news for developers, but it benefits users too. When developers are better supported, they make better software.

    Docker is a tool that allows developers to wrap their software in a container that provides a completely predictable runtime environment.

    VMs have become essential in the high-volume world of enterprise computing. Before VMs became popular, physical servers often would run a single application or service, which was a really inefficient way of using physical resources. Most of the time, only a small percentage of the box’s memory, CPU and bandwidth were used. Scaling up meant buying a new box–and that’s expensive.

    VMs meant that multiple servers could run on the same box at the same time. This ensured that the expensive physical resources were put to use.

    VMs are also a solution to a problem that has plagued developers for years: the so-called “it works on my machine” problem that occurs when the development environment is different from the production environment. This happens very often. It shouldn’t, but it does.

    Although VMs solve a lot of problems, they aren’t without some shortcomings of their own. For one thing, there’s a lot of duplication.

    Containers, such as Docker, offer a more lightweight alternative to full-blown VMs. In many ways, they are very similar to virtual machines. They provide a mostly self-contained environment for running code. The big difference is that they reduce duplication by sharing. To start with, they share the host environment’s Linux kernel. They also can share the rest of the operating system.

    In fact, they can share everything except for the application code and data. For instance, I could run two WordPress blogs on the same physical machine using containers. Both containers could be set up to share everything except for the template files, media uploads and database.

    With some sophisticated filesystem tricks, it’s possible for each container to “think” that it has a dedicated filesystem.

    Containers are much lighter and have lower overhead compared to complete VMs. Docker makes it relatively easy to work with these containers, so developers and operations can work with identical code. And, containers lend themselves to cloud computing too.
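
    The two-blog example above maps directly onto a couple of docker run commands; both containers reuse the same read-only wordpress image layers and differ only in their named volumes and database settings (names, ports and the db host are placeholders, and a separate MySQL container is assumed):

        # Two blogs on one host, sharing the wordpress image layers.
        docker run -d --name blog1 -p 8081:80 \
            -v blog1-uploads:/var/www/html/wp-content/uploads \
            -e WORDPRESS_DB_HOST=db -e WORDPRESS_DB_NAME=blog1 \
            -e WORDPRESS_DB_PASSWORD=secret wordpress

        docker run -d --name blog2 -p 8082:80 \
            -v blog2-uploads:/var/www/html/wp-content/uploads \
            -e WORDPRESS_DB_HOST=db -e WORDPRESS_DB_NAME=blog2 \
            -e WORDPRESS_DB_PASSWORD=secret wordpress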

    So what about microservices and unikernels?

    Microservices are a new idea–or an old idea, depending on your perspective.

    The concept is that instead of building a big “monolithic” application, you decompose your app into multiple services that talk to each other through a messaging system–a well-defined interface. Each microservice is designed with a single responsibility. It’s focused on doing a single simple task well.

    If that sounds familiar to you as an experienced Linux user, it should. It’s an extension of some of the main tenets of the UNIX Philosophy. Programs should focus on doing one thing and doing it well, and software should be composed of simple parts that are connected by well-defined interfaces.

    Microservices typically run in their own container. They usually communicate through TCP and the host environment (or possibly across a network).

    The advantage of building software using microservices is that the code is very loosely coupled. If you need to fix a bug or add a feature, you only need to make changes in a few places. With monolithic apps, you would probably need to change several pieces of code.

    What’s more, with a microservice architecture, you can scale up specific microservices that are feeling strain. You don’t have to replicate the entire application.
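
    In container terms, scaling a single strained microservice just means running more copies of that one container and leaving the rest of the application alone; a hedged sketch (the image and container names are made up):

        # Three extra instances of only the overloaded service; the other
        # microservices keep their single containers.
        docker run -d --name thumbs-2 myorg/thumbnail-service
        docker run -d --name thumbs-3 myorg/thumbnail-service
        docker run -d --name thumbs-4 myorg/thumbnail-service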

    Linux is a “kitchen sink” system–it includes everything needed for most multi-user environments. It has drivers for the most esoteric hardware combinations known to man.

    Unikernels are a lighter alternative that is well suited to microservices. A unikernel is a self-contained environment that contains only the low-level features that a microservice needs to function. And, that includes kernel features.

    Reply
