Docker and other Linux containers

Virtual machines are mainstream in cloud computing. The newest development in this arena is fast and lightweight process virtualization. Linux-based container infrastructure is an emerging cloud technology that provides its users with an environment as close as possible to a standard Linux distribution.

The article Linux Containers and the Future Cloud explains that, as opposed to para-virtualization solutions (Xen) and hardware virtualization solutions (KVM), which provide virtual machines (VMs), containers do not create other instances of the operating system kernel. One advantage of containers over VMs is that starting and shutting down a container is much faster than starting and shutting down a VM. The idea of process-level virtualization is not in itself new (remember Solaris Zones and BSD jails).

All containers on a host run under the same kernel. Basically, a container is a Linux process (or several processes) that has special features and that runs in an isolated environment, configured on the host. Containerization is a way of packaging up applications so that they share the same underlying OS but are otherwise fully isolated from one another, with their own CPU, memory, disk and network allocations to work within – going a few steps further than the usual process separation in Unix-y OSes, but not completely down the per-app virtual machine route. The underlying infrastructure of modern Linux-based containers consists mainly of two kernel features: namespaces and cgroups. Well-known Linux container technologies are Docker, OpenVZ, Google containers, Linux-VServer and LXC (LinuX Containers).
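You can see these two kernel features from inside any Linux process: /proc/self/ns lists the namespaces the process belongs to, and /proc/self/cgroup shows its control-group membership. The sketch below parses a cgroup line; the sample line is a made-up fallback so the parser also runs on non-Linux systems.

```python
# Minimal sketch: inspecting the kernel features containers are built on.
# On a Linux host, each line of /proc/self/cgroup has the form
# "hierarchy-ID:controllers:path". The sample line below is hypothetical
# and used only as a fallback when /proc is not available.
import os

def parse_cgroup_line(line):
    """Split one /proc/self/cgroup line into its three fields."""
    hierarchy, controllers, path = line.strip().split(":", 2)
    return {"hierarchy": hierarchy,
            "controllers": controllers.split(",") if controllers else [],
            "path": path}

if os.path.exists("/proc/self/cgroup"):
    with open("/proc/self/cgroup") as f:
        entries = [parse_cgroup_line(line) for line in f]
else:
    entries = [parse_cgroup_line("4:memory:/docker/0123abcd")]

for e in entries:
    print(e["controllers"], e["path"])
```

A container runtime essentially creates new namespaces for its process and places it in a cgroup hierarchy with resource limits; the path field above is where a containerized process would see something like /docker/&lt;container-id&gt;.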

Docker is an open-source project that automates the creation and deployment of containers: an open platform for developers and sysadmins to build, ship, and run distributed applications. It consists of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows.
Docker started as an internal project at a Platform-as-a-Service (PaaS) company then called dotCloud and now called Docker Inc. Docker is currently available only for Linux (kernel 3.8 or above) and utilizes the LXC toolkit. It runs on distributions such as Ubuntu 12.04 and 13.04, Fedora 19 and 20, and RHEL 6.5 and above, and on cloud platforms like Amazon EC2, Google Compute Engine and Rackspace.

Linux containers are turning into a way of packaging up applications and related software for movement over the network or Internet. You can create images by running commands manually and committing the resulting container, but you can also describe them with a Dockerfile. Docker images can be stored in a public repository, and Docker can snapshot a running container and commit it as a new image. Docker, the company that sponsors the open-source project, is gaining allies in making its commercially supported Linux container format a de facto standard. Red Hat has woken up to the growth of Linux containers and has begun certifying applications running in the sandboxing tech.
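A Dockerfile is just a list of build steps that Docker executes from top to bottom, committing a new image layer after each step. A minimal hypothetical example (the base image, package and file names are illustrative):

```dockerfile
# Hypothetical Dockerfile: image and file names are illustrative.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
COPY index.html /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Running `docker build -t my-nginx .` in the directory containing this file produces an image that can then be started with `docker run`, or pushed to a repository for others to pull.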

Docker featured heavily in last week's IT news because Docker 1.0 was released. Here are links to several articles on Docker:

Docker opens online port for packaging and shipping Linux containers

Docker, Open Source Application Container Platform, Has 1.0 Coming Out Party At Dockercon14

Google Embraces Docker, the Next Big Thing in Cloud Computing

Docker blasts into 1.0, throwing dust onto traditional hypervisors

Automated Testing of Hardware Appliances with Docker

Continuous Integration Using Docker, Maven and Jenkins

Getting Started with Docker

The best way to understand Docker is to try it!

This Docker thing looks interesting. Maybe I should spend some time testing it.



  1. Tomi Engdahl says:

    Creating small containers with Buildah

    A technical problem drives home the power of open source collaboration.

  2. Tomi Engdahl says:

    10 layers of Linux container security

    Employ these strategies to secure different layers of the container solution stack and different stages of the container lifecycle.

  3. Tomi Engdahl says:

    Just say no to root (in containers)
    Even smart admins can make bad decisions.

User namespaces are all about running privileged processes in containers, such that if they break out, they are no longer privileged outside the container.

OpenShift is Red Hat’s container platform, built on Kubernetes, Red Hat Enterprise Linux, and OCI containers, and it has a great security feature: By default, no containers are allowed to run as root. An admin can override this; otherwise all user containers run without ever being root. This is particularly important in multi-tenant OpenShift Kubernetes clusters.
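On plain Kubernetes a similar rule can be expressed per pod with a security context; OpenShift enforces it cluster-wide through its security context constraints. A hypothetical pod spec (names and UID are illustrative):

```yaml
# Hypothetical pod spec: runAsNonRoot makes the kubelet refuse to start
# a container whose image would run as UID 0.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
  containers:
  - name: web
    image: example/web:latest
    securityContext:
      allowPrivilegeEscalation: false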

Sadly, one of the biggest complaints about OpenShift is that users cannot easily run all of the community container images available. This is because the vast majority of container images in the world today require root.

    Why do all these images require root?
If you actually examine the reasons to be root on a system, they are quite limited.

Container runtimes can start applications as non-root to begin with. Truth be told, so can systemd, but most software built over the past 20 years assumes it starts as root and drops privileges.

The main reason Apache or Nginx starts out running as root is so that it can bind to port 80 and then become non-root.
Container runtimes, using port forwarding, can solve this problem: you can set up a container to listen on any network port and have the container runtime map that port to port 80 on the host.
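The underlying rule is that on Linux only root (or a process with CAP_NET_BIND_SERVICE) may bind ports below 1024, while any port from 1024 up is unprivileged. The sketch below demonstrates that an ordinary process can always obtain such a port; a runtime then maps host port 80 to it (with Docker, something like `docker run -p 80:8080 …`).

```python
# Minimal sketch: a non-root process can bind any unprivileged (>=1024) port.
# Binding port 80 directly would normally require root; the container
# runtime's port mapping forwards host port 80 to the high port instead.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))      # port 0: let the OS pick an ephemeral port
port = sock.getsockname()[1]
print("listening candidate port:", port)
assert port >= 1024              # ephemeral ports are always unprivileged
sock.close()
```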

    Bottom line
Almost all software you are running in your containers does not require root. Your web applications, databases, load balancers, number crunchers, etc. do not need to be run as root ever. When we get people to start building container images that do not require root at all, and others to base their images off of non-privileged container images, we will see a giant leap in container security.

There is still a lot of education to be done around running root inside of containers. Without it, even smart administrators can make bad decisions.

  4. Tomi Engdahl says:

    10 tasks for running containers on Atomic Host

    Learn what you can do with Atomic Host, a container operating system that provides a secure foundation for running containers.

  5. Tomi Engdahl says:

    Tom Krazit / GeekWire:
    AWS now fully supports Kubernetes with the general release of Amazon EKS, following other cloud vendors, after announcing the service last year at AWS re:Invent — Amazon Web Services was the last of the major cloud vendors to go all-in with the Kubernetes container-orchestration project …

    Amazon Web Services now fully supports Kubernetes with the general release of Amazon EKS

  6. Tomi Engdahl says:

    Getting started with Buildah

    Buildah offers a flexible, scriptable way to create lean, efficient container images using your favorite tools.

  7. Tomi Engdahl says:

    Introducing conu – Scripting Containers Made Easier

    There has been a need for a simple, easy-to-use handler for writing tests and other code around containers that would implement helpful methods and utilities. For this we introduce conu, a low-level Python library.

    This project has been driven from the start by the requirements of container maintainers and testers. In addition to basic image and container management methods, it provides other often used functions, such as container mount, shortcut methods for getting an IP address, exposed ports, logs, name, image extending using source-to-image, and many others.

  8. Tomi Engdahl says:

    Running integration tests in Kubernetes

    Verify how your application behaves with your full solution stack by using multiple containers to provide a whole test environment.

    Linux containers have changed the way we run, build, and manage applications. As more and more platforms become cloud-native, containers are playing a more important role in every enterprise’s infrastructure. Kubernetes (K8s) is currently the most well-known solution for managing containers, whether they run in a private, public, or hybrid cloud.

    With a container application platform, we can dynamically create a whole environment to run a task and discard it afterward. In an earlier post, we covered how to use Jenkins to run builds and unit tests in containers.
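One way to build such a throwaway test environment is a single pod holding both the system under test and its dependencies: containers in a pod share one network namespace, so the test container reaches the database on localhost. A hypothetical example (all image names are illustrative):

```yaml
# Hypothetical throwaway test pod: discarded after the run.
apiVersion: v1
kind: Pod
metadata:
  name: integration-test
spec:
  restartPolicy: Never
  containers:
  - name: db
    image: postgres:latest
    env:
    - name: POSTGRES_PASSWORD
      value: test
  - name: tests
    image: example/app-tests:latest
    command: ["./run-integration-tests.sh"]
```

After the test container exits, the whole pod can be deleted, leaving nothing behind.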

  9. Tomi Engdahl says:

    How to create a cron job with Kubernetes on a Raspberry Pi

    Find a better way to run your scheduled tasks efficiently and reliably.
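Kubernetes has a built-in CronJob resource for exactly this: the cluster creates a fresh Job from the template on the cron schedule. A hypothetical manifest (image name is illustrative; current clusters use `batch/v1`, while older ones used `batch/v1beta1`):

```yaml
# Hypothetical CronJob: runs the given container every five minutes.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: periodic-task
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: example/task:latest
```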

  10. Tomi Engdahl says:

    Why Kubernetes is The New Application Server

    Have you ever wondered why you are deploying your multi-platform applications using containers? Is it just a matter of “following the hype”? In this article, I’m going to ask some provocative questions to make my case for Why Kubernetes is the new application server.

  11. Tomi Engdahl says:

    Container storage for dummies

    Although many organizations still use traditional storage appliances, they don’t offer the agility needed by containerized environments. Containers are highly flexible and bring incredible scale to how apps and storage are delivered; traditional storage can be the bottleneck that stops this progress. The underlying storage should be highly elastic, easily provisioned by developers and admins, and, ideally, managed using the same orchestration framework (like Kubernetes).

  12. Tomi Engdahl says:

    Building tiny container images

    Here are 5 ways to optimize Linux container size and build small images.
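One of the most effective of those techniques is a multi-stage build: compile in a full toolchain image, then copy only the resulting binary into a minimal final image. A hypothetical sketch (module and binary names are illustrative):

```dockerfile
# Hypothetical multi-stage build: the toolchain stays in the build stage,
# and only the static binary ends up in the final (near-empty) image.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image contains a single file, so it can be orders of magnitude smaller than the image used to build it.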

