Docker and other Linux containers

Virtual machines are mainstream in cloud computing. The newest development in this arena is fast and lightweight process virtualization. Linux-based container infrastructure is an emerging cloud technology that provides its users with an environment as close as possible to a standard Linux distribution.

The article Linux Containers and the Future Cloud explains that, as opposed to para-virtualization solutions (Xen) and hardware virtualization solutions (KVM), which provide virtual machines (VMs), containers do not create other instances of the operating system kernel. One advantage containers have over VMs is that starting and shutting down a container is much faster than starting and shutting down a VM. The idea of process-level virtualization in itself is not new (remember Solaris Zones and BSD jails).

All containers under a host are running under the same kernel. Basically, a container is a Linux process (or several processes) that has special features and that runs in an isolated environment, configured on the host.  Containerization is a way of packaging up applications so that they share the same underlying OS but are otherwise fully isolated from one another with their own CPU, memory, disk and network allocations to work within – going a few steps further than the usual process separation in Unix-y OSes, but not completely down the per-app virtual machine route. The underlying infrastructure of modern Linux-based containers consists mainly of two kernel features: namespaces and cgroups. Well known Linux container technologies are Docker, OpenVZ, Google containers, Linux-VServer and LXC (LinuX Containers).
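The namespace feature can be seen in action with the `unshare` utility from util-linux. As a rough sketch (it requires root on a Linux host), the following starts a shell in new PID and mount namespaces, giving a first taste of the isolation containers are built on:

```shell
# Start a shell in fresh PID and mount namespaces;
# --mount-proc remounts /proc so process listings reflect the new namespace.
sudo unshare --pid --fork --mount-proc /bin/bash

# Inside the new namespace, only the shell and ps itself are visible:
ps aux
```

Container runtimes like Docker combine several such namespaces (PID, mount, network, UTS, IPC, user) with cgroup resource limits to build the full isolated environment.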

Docker is an open-source project that automates the creation and deployment of containers. It is an open platform for developers and sysadmins to build, ship, and run distributed applications. It consists of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows.
Docker started as an internal project at dotCloud, a Platform-as-a-Service (PaaS) company since renamed Docker Inc. Docker is currently available only for Linux (kernel 3.8 or above) and utilizes the LXC toolkit. It runs on distributions such as Ubuntu 12.04 and 13.04; Fedora 19 and 20; RHEL 6.5 and above; and on cloud platforms like Amazon EC2, Google Compute Engine and Rackspace.

Linux containers are turning into a way of packaging up applications and related software for movement over the network or Internet. You can create images by running commands manually and committing the resulting container, but you can also describe them with a Dockerfile. Docker images can be stored in a public repository, and Docker can snapshot a container's state and commit it as a new image. Docker, the company that sponsors the open-source project, is gaining allies in making its commercially supported Linux container format a de facto standard. Red Hat has woken up to the growth of Linux containers and has begun certifying applications running in the sandboxing tech.
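As a minimal sketch (the base image, package and file names here are illustrative, not from any particular project), a Dockerfile describing such an image might look like this:

```dockerfile
# Start from a base image pulled from a public repository
FROM ubuntu:14.04

# Install the application's dependencies in the image
RUN apt-get update && apt-get install -y nginx

# Copy the application's files into the image
COPY index.html /usr/share/nginx/html/

# Command to run when a container is started from this image
CMD ["nginx", "-g", "daemon off;"]
```

Building it with `docker build -t myimage .` produces an image that can then be shared through a public repository with `docker push`.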

Docker was in the IT news a lot last week because Docker 1.0 was released. Here are links to several articles on Docker:

Docker opens online port for packaging and shipping Linux containers

Docker, Open Source Application Container Platform, Has 1.0 Coming Out Party At Dockercon14

Google Embraces Docker, the Next Big Thing in Cloud Computing

Docker blasts into 1.0, throwing dust onto traditional hypervisors

Automated Testing of Hardware Appliances with Docker

Continuous Integration Using Docker, Maven and Jenkins

Getting Started with Docker

The best way to understand Docker is to try it!

This Docker thing looks interesting. Maybe I should spend some time testing it.



  1. Tomi Engdahl says:

    Containers and Kubernetes: What’s next?

    What’s ahead for container orchestration and Kubernetes? Here’s an expert peek

  2. Tomi Engdahl says:

    AWS’s container service gets support for Kubernetes

    AWS today announced its long-awaited support for the Kubernetes container orchestration system on top of its Elastic Container Service (ECS).

    Kubernetes has, of course, become something of a de facto standard for container orchestration. It already had the backing of Google (which incubated it), as well as Microsoft and virtually every other major cloud player.

  3. Tomi Engdahl says:

    Put Your IDE in a Container with Guacamole

    Put Your IDE in a Container
    Apache Guacamole is an incubating Apache project that enables X window applications to be exposed via HTML5 and accessed via a browser. This article shows how Guacamole can be run inside containers in an OpenShift Container Platform (OCP) cluster to enable Red Hat JBoss Developer Studio, the Eclipse-based IDE for the JBoss middleware portfolio, to be accessed via a web browser. You’re probably thinking “Wait a minute… X window applications in a container?” Yes, this is entirely possible and this post will show you how.

  4. Tomi Engdahl says:

    Getting started with Kubernetes

    Learn the basics of using the open source container management system with this easy tutorial.

    One of today’s most promising emerging technologies is paring containers with cluster management software such as Docker Swarm, Apache Mesos, and the popular Kubernetes. Kubernetes allows you to create a portable and scalable application deployment that can be scheduled, managed, and maintained easily. As an open source project, Kubernetes is continually being updated and improved, and it leads the way among container cluster management software.

  5. Tomi Engdahl says:

    What are Linux containers?

    Linux containers, in short, contain applications in a way that keep them isolated from the host system that they run on. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. And they are designed to make it easier to provide a consistent experience as developers and system administrators move code from development environments into production in a fast and replicable way.

    In a way, containers behave like a virtual machine. To the outside world, they can look like their own complete system.

  6. Tomi Engdahl says:

    Why traditional storage doesn’t cut it in the new world of containers

    One approach is to use traditional storage appliances that support legacy applications. This is a natural inclination and assumption, but… the wrong one.

    Traditional storage appliances are based on decades-old architectures at this point and were not made for a container-based application world. These approaches also fail to offer the portability you need for your apps in today’s hybrid cloud world. Some of these traditional storage vendors offer additional software for your containers, which can be used as a go-between for these storage appliances and your container orchestration, but this approach still falls short as it is undermined by those same storage appliance limitations. This approach would also mean that storage for the container is provisioned separately from your container orchestration layer.

    There’s a better way! Storage containers containing storage software co-­reside with compute containers and serve storage to the compute containers from hosts that have local or direct-attached storage. Storage containers are deployed and provisioned using the same orchestration layer you’ve adopted in house (like Red Hat OpenShift Container Platform, which is Kubernetes based), just like compute containers.

  7. Tomi Engdahl says:

    Containerized Docker Application Lifecycle with Microsoft Platform and Tools

    Building containerized applications in an enterprise environment means that you need to have an end-to-end lifecycle, so you are capable of delivering applications through Continuous Integration, testing, Continuous Deployment to containers, and release management supporting multiple environments, while having solid production management and monitoring systems.

  8. Tomi Engdahl says:

    Architecture of Red Hat OpenShift
    Container PaaS on Microsoft Azure

    An effective Platform-as-a-Service (PaaS) solution, in concert with containers and cloud management platforms, can help your business deploy and operate applications more quickly, more flexibly and with higher quality. Red Hat OpenShift Container Platform on Microsoft Azure offers this agility and operational efficiency, without having to do your own integration.

  9. Tomi Engdahl says:

    Kubernetes 1.9 version bump is near – with APIs to extend the system
    Latest container wrangling bits should drop on Friday

    Assuming a handful of lingering issues can be resolved, the open-source Kubernetes project will introduce version 1.9 on Friday.

    In a phone interview with The Register, Aparna Sinha, special interest group (SIG) product management lead for Kubernetes and product manager at Google, singled out the general availability designation of the Apps/V1 Workloads API as the most notable aspect of the release.

    Workloads are computing resources used to manage and run containers in a cluster. The Apps Workloads API includes DaemonSet, Deployment, ReplicaSet, and StatefulSet; it’s distinct from the Batch Workloads API, which includes Job and CronJob and has yet to reach general availability.
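As a rough sketch of what the GA designation looks like in practice (the names and image are illustrative, not from the release notes), a Deployment using the stable group is simply versioned `apps/v1`:

```yaml
apiVersion: apps/v1          # the now-GA Apps Workloads API
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web               # apps/v1 requires an explicit selector
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.13    # illustrative container image
        ports:
        - containerPort: 80
```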

    The general availability designation (V1) signals the API is robust enough for production usage and implies long-term backwards compatibility.

  10. Tomi Engdahl says:

    As Kubernetes surged in popularity in 2017, it created a vibrant ecosystem

    For a technology that the average person has probably never heard of, Kubernetes surged in popularity in 2017 with a particular group of IT pros who are working with container technology. Kubernetes is the orchestration engine that underlies how operations staff deploy and manage containers at scale.

  11. Tomi Engdahl says:

    5 reasons Kubernetes is the real deal

    Kubernetes will be at the heart of a large and growing percentage of infrastructure—on premises and in the cloud

  12. Tomi Engdahl says:

    Why mobile and containers are better together

    Containers are the next stop on enterprise IT’s mobile journey

    Industry analyst firm Gartner predicts that “by 2022, 70 percent of software interactions in enterprises will occur on mobile devices.” Even today, we see how many organizations have matured in their approach to mobile, from siloed one-off projects towards a more integrated and strategic approach that underpins all aspects of their digital journey – including culture, processes, technology, and business models. As mobile becomes table stakes, however, there are many considerations under the surface that need to be addressed by business and IT.

    Mobile alone is not sufficient in driving today’s digital business.

    When containers meet mobile
    The cloud emerged as a perfect pairing in the early stages of mobile adoption, supporting the agility, performance, and scalability required by enterprise-grade mobile apps. Now, container technologies take this a step further by supporting mobile workloads, which can run and be managed alongside other enterprise application workloads.

    Rather than treating mobile as a separate or special project with a dedicated technology stack, containers enable mobile to become part of modern enterprise application development. This enables mobile to run in its own environment in a container alongside other containerized workloads, such as integration, Internet of Things, web, business automation, and other workloads.

    But why are containers so important? Containers are technologies that allow applications to be packaged and isolated with their entire runtime environment — all dependencies, libraries, and configuration files needed to run, bundled into one convenient package, providing abstraction from the underlying infrastructure. They provide a neat solution to the problem of how to get software to run reliably when moved from one computing environment to another, e.g. from a developer’s laptop to a test environment, from staging to production, or from a physical machine in a data center to a virtual machine in a public or private cloud.

    The organizations that outdo their competitors in the next year and beyond will be able to marry cloud, container technologies, and modern application practices, such as DevOps and microservices architecture,

  13. Tomi Engdahl says:

    How to install and setup Docker on RHEL 7/CentOS 7

    How do I install and setup Docker container on an RHEL 7 (Red Hat Enterprise Linux) server? How can I setup Docker on a CentOS 7? How to install and use Docker CE on a CentOS Linux 7 server?
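The gist of the approach (package and repository names follow Docker's upstream CentOS instructions; verify against the linked tutorial before running) is roughly:

```shell
# Add Docker's CE repository and install the engine on CentOS 7
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce

# Start the daemon and enable it at boot
sudo systemctl enable --now docker

# Verify the installation with a throwaway container
sudo docker run hello-world
```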

  14. Tomi Engdahl says:

    Google’s Kelsey Hightower talks Kubernetes and community

    Google’s popular developer advocate shares his thoughts on Kubernetes and why its community is key to its strength.

  15. Tomi Engdahl says:

    Container Images and Hosts: Selecting the Right Components

    We’ve published a new guide to help you select the right container hosts and images for your container workloads – whether it’s a single container running on a single host, or thousands of workloads running in a Kubernetes/OpenShift environment. Why? Because people don’t know what they don’t know and we are here to help.

  16. Tomi Engdahl says:

    First steps in integration of Windows and Linux Containers in OpenShift

    An interesting exploration of the integration of Microsoft Windows Containers and Linux Containers in an OCP environment. This allows a true bi-modal IT technical implementation by combining the strengths of both platforms into one cluster.

  17. Tomi Engdahl says:

    Containers and the question of trust

    The security risks associated with containerised software delivery have become a hot topic in the DevOps community, where operations teams are under pressure to identify security vulnerabilities in their production environments.
    As the use of containers becomes standard practice, existing software development and security methodologies may need to be modified.

    Patches to container images are made by rebuilding the Docker image with the appropriate patches, and then replacing the existing running containers with the updated image. This change in paradigm often requires enterprises to reassess their patching processes.

    Given the level of adoption of open source technologies in container infrastructure, a key to protecting your applications in production is maintaining visibility into your open source components and proactively patching vulnerabilities as they are disclosed.

    Identification of risk is a crucial component of security, and risk is a function of the composition of a container image. Some key questions operations teams need to answer in order to minimise risk include:

    What security risks might be present in the base images used for your applications, and how often are they updated?
    If a patch is issued for a base image, what is the risk associated with consuming the patch?

    How many versions behind tip can a project or component be before it becomes too risky to consume?
    Given my tooling, how quickly will I be informed of component updates for dependencies which directly impact my containers?

    Given the structure of a component or project, do malicious actors have an easy way to gain an advantage when it comes to issues raised against the component?

    Defining a container security strategy

    You can’t rely on traditional security tools that aren’t designed to manage the security risks associated with hundreds—or thousands—of containers. Traditional tools are often unable to detect vulnerabilities within containers, leading to a false sense of safety.

    One critical attribute of any container security solution is its ability to identify new containers within the cluster and automatically attest to the security state of the container. The desired security state will of course vary by application

    Most enterprises operate under governance regulations requiring continuous monitoring of infrastructure. This requirement exists for containerised applications as well

    finding and remediating every newly discovered vulnerability in each container can be a challenge

    The bottom line is you need to be proactive about container security to prevent breaches before they happen.

  18. Tomi Engdahl says:

    Cisco throws everything it has at containers, hybrid cloud
    Container Platform hooks Kubernetes to all the Borg’s bits

    Cisco has decided to throw everything it has at containers by releasing its very own “Container Platform”.

    At first blush the Platform isn’t much more than Kubernetes and Cisco doesn’t claim that it can do much more than anyone else’s packaging of the Google-derived container-manager.

    The important bit is the integration with Cisco management products, because Cisco has reached the conclusion that while containers and Kubernetes are very useful, they need network management, persistent storage, load balancing and all the other things that other modes of application deployment rely on when they go into production at scale.

    Cisco is therefore providing hooks into things like its Cloud Centre management service to provide such services.

  19. Tomi Engdahl says:

    Red Hat:
    Red Hat to acquire Kubernetes and containers startup, CoreOS, for $250M — With CoreOS, Red Hat doubles down on technology to help customers build, run and manage containerized applications in hybrid and multicloud environments — Red Hat, Inc. (NYSE: RHT), the world’s leading provider …

    Red Hat to Acquire CoreOS, Expanding its Kubernetes and Containers Leadership

    With CoreOS, Red Hat doubles down on technology to help customers build, run and manage containerized applications in hybrid and multicloud environments

  20. Tomi Engdahl says:

    Running a Python application on Kubernetes

    This step-by-step tutorial takes you through the process of deploying a simple Python application on Kubernetes.

    Kubernetes is an open source platform that offers deployment, maintenance, and scaling features. It simplifies management of containerized Python applications while providing portability, extensibility, and self-healing capabilities.

    You will need Docker, kubectl, and this source code.

    Containerization involves enclosing an application in a container with its own operating environment. This lightweight alternative to full machine virtualization has the advantage of being able to run an application on any machine without concerns about dependencies.
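The tutorial's own source code is linked in the article; as a self-contained stand-in (this file and its port number are my own illustration, not the tutorial's), a containerizable Python application can be as small as a single dependency-free WSGI script:

```python
# app.py: a minimal web app with no external dependencies,
# suitable for packaging into a container image.
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Return a plain-text greeting for any request path.
    body = b"Hello from Kubernetes!\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    # Bind to all interfaces so the container port can be published.
    with make_server("", 8080, application) as httpd:
        httpd.serve_forever()
```

Wrapped in a Dockerfile and pushed to a registry, an app like this is what the Kubernetes deployment steps in the tutorial operate on.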

    Kubernetes supports many persistent storage providers, including AWS EBS, CephFS, GlusterFS, Azure Disk, NFS, etc. I will cover Kubernetes persistence storage with CephFS.
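In manifest terms (the storage class name below is a placeholder; it depends on how CephFS is provisioned in the cluster), persistent storage is requested through a PersistentVolumeClaim that pods then mount:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteMany          # CephFS supports shared read-write access
  resources:
    requests:
      storage: 1Gi
  storageClassName: cephfs # placeholder; cluster-specific
```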

  21. Tomi Engdahl says:

    How technology changes the rules for doing agile

    Containers and Kubernetes were not here when we started doing agile. But they change what used to be the hardest part: Applying agile beyond a small group, to the whole organization

  22. Tomi Engdahl says:

    The CoreOS bet

    More than four years ago Red Hat made a bet. We bet big on containers as the future for how applications would be built, deployed and managed across the hybrid cloud. We bet on the emergence of new industry standards for container runtime, format, orchestration and distribution. We bet on key projects like Kubernetes to be the core of Red Hat OpenShift Container Platform. And ultimately, we bet on Linux as the foundation for this innovation, as it has been for so many other innovations over the past 20 plus years. Specifically, we bet on Red Hat Enterprise Linux 7 as the new foundation for OpenShift 3, launched at Red Hat Summit 2015.

    Together with Google, CoreOS and so many other contributors, we’ve brought Kubernetes into the mainstream, and with Linux it is becoming the foundation for even greater innovation.

    One of those companies that also jumped in with both feet was CoreOS. Their commitment to Linux, to Kubernetes and to containers technology mirrored our own.

    Today, we are proud to welcome CoreOS to the Red Hat family.

  23. Tomi Engdahl says:

    The full-time job of keeping up with Kubernetes

    There is no such thing as Kubernetes LTS (and that’s fantastic)

  24. Tomi Engdahl says:

    Sylabs launches Singularity Pro, a container platform for high-performance computing

    Sylabs was launched in 2015 to create a container platform specifically designed for scientific and high performance computing use cases

    Docker emerged as the container engine of choice for developers, but Kurtzer says the container solutions developed early on focused on microservices. He says there’s nothing inherently wrong with that, but it left out some types of computing that relied on processing jobs instead of services, specifically high performance computing.

    He saw Singularity as a Docker for HPC environments, and would run his company in a similar fashion to Docker, leading with the open source project, then building a commercial business on top of it — just as Docker had done.

    Kurtzer now wants to bring Singularity to the enterprise with a focus not just on the HPC commercial market, but other high performance computing workloads such as artificial intelligence, machine learning, deep learning and advanced analytics

  25. Tomi Engdahl says:

    Understanding SELinux labels for container runtimes

    What happens to a container’s MCS label when the container is rebuilt or upgraded?

  26. Tomi Engdahl says:

    How Kubernetes became the solution for migrating legacy applications

    You don’t have to tear down your monolith to modernize it. You can evolve it into a beautiful microservice using cloud-native technologies.

  27. Tomi Engdahl says:

    Kubernetes Services By Example

    In a nutshell, Kubernetes services are an abstraction for pods, providing a stable, virtual IP (VIP) address. As pods may come and go, for example in the process of a rolling upgrade, services allow clients to reliably connect to the containers running in the pods, using the VIP. The virtual in VIP means it’s not an actual IP address connected to a network interface but its purpose is purely to forward traffic to one or more pods. Keeping the mapping between the VIP and the pods up-to-date is the job of kube-proxy, a process that runs on every node, which queries the API server to learn about new services in the cluster.
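To make the description concrete, a minimal Service manifest (the names and ports are illustrative) that gives a set of pods a stable VIP looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webserver
spec:
  selector:
    app: webserver     # traffic is forwarded to pods carrying this label
  ports:
  - port: 80           # the port exposed on the service's virtual IP
    targetPort: 8080   # the port the pods' containers actually listen on
```

As pods with the `app: webserver` label come and go, kube-proxy keeps the VIP-to-pod mapping current, so clients keep using the same address.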

  28. Tomi Engdahl says:

    4 Reasons Why Kubernetes Is Hot

    Interop expert Brian Gracely explains why the container orchestration platform is so popular.

    Gracely offered four reasons why infrastructure pros should learn more about Kubernetes.

    1. It’s become the industry standard for deploying containers in production
    2. It’s the next big trend for managing virtualized infrastructure
    3. Developers love it
    4. It can run any containerized application

    “Kubernetes is a really interesting technology in that it’s proven with customers in production that you can not only use it to build new, cloud-native microservice applications, but people have also been able to migrate existing applications into containers and run them in Kubernetes,” Gracely said. In addition, it supports cutting-edge application development technology, like serverless architecture. That gives enterprises a lot of flexibility for today and into the future.

  29. Tomi Engdahl says:

    Container security fundamentals: 5 things to know

    Can you articulate the core facts about container security – even to skeptics inside your organization? Here are 5 key points

    1. Container security is multi-level
    2. Limit dependencies to limit risk
    3. Reassess existing security practices and tools
    4. Automation plays a security role
    5. Containers help you react to emerging issues

  30. Tomi Engdahl says:

    Automated provisioning in Kubernetes

    Learn how Automation Broker can help simplify management of Kubernetes applications and services

    When deploying applications in a Kubernetes cluster, certain types of services are commonly required. Many applications require a database, a storage service, a message broker, identity management, and so on. You have enough work on your hands containerizing your own application. Wouldn’t it be handy if those other services were ready and available for use inside the cluster?

  31. Tomi Engdahl says:

    This is somewhat old, but I saw it for the first time today. Such a hilarious clip for us nerds :)

    Hitler uses Docker

  32. Tomi Engdahl says:

    You got your VM in my container

    Explore KubeVirt and Kata Containers, two fairly new projects that aim to combine Kubernetes with virtualization

  33. Tomi Engdahl says:

    Netflix could pwn 2020s IT security – they need only reach out and take
    Workload isolation is niche, but they’re rather good at it

    The container is doomed, killed by serverless. Containers are killing Virtual Machines (VM). Nobody uses bare metal servers. Oh, and tape is dead. These, and other clichés, are available for a limited time, printed on a coffee mug of your choice alongside a complimentary moon-on-a-stick for $24.99.

    Snark aside, what does the future of containers really look like?

    Recently, Red Hat’s CEO casually mentioned that containers still don’t power most of the workloads run by enterprises. Some people have seized on this data point to proclaim the death of the container. Some champion the “death” of containers because they believe serverless is the future. Some believe in the immutable glory of virtual machines and wish the end of this upstart workload encapsulation mechanism.

    Containerize this

    Containers are both dead and not dead. Containers are the future of workload packaging, and they’ll be with us for decades. This does not, however, mean that Docker will grow to be a tech titan to rival VMware, or that Red Hat’s borging of CoreOS means it’s now a superpower in waiting.

    Containers exist for two reasons: the first is that application developers are lazy, and they let their applications sprawl all over the place in their host Operating System Environment (OSE). The second reason is that the modern OSE is largely designed more for backwards compatibility than security, and we need containers to keep these apps from infringing on one another.

    Everyone who runs an application should be running that application in a container. The only possible reasons not to do so are that you don’t understand how, or you haven’t quite gotten to that application yet, given the number ahead of it in the queue to be containerized.

    It doesn’t matter if the application lives on an OSE that lives inside a VM, or if it has a box all to itself. Despite the initial hype about using containers for workload consolidation, containers aren’t about packing more workloads on a given system.

    A container is about security, ease of use, and ease of administration. Virtual machines are the interior walls of a building that let multiple groups of applications do their own thing separate from other groups of applications. They serve different purposes.

    The future is containers and virtualization, not containers or virtualization.

  34. Tomi Engdahl says:

    How to create a cron job with Kubernetes on a Raspberry Pi

    Find a better way to run your scheduled tasks efficiently and reliably.

    Kubernetes provides high availability by design. The possibilities that this capability offers are pretty awesome. Need a web server to run constantly? Build a container and throw it in the Kubernetes cluster. Need a service available all the time? Package it and ship it to the Kubernetes cluster.

    Kubernetes has the concept of jobs. To quote the official jobs documentation, “A job creates one or more pods and ensures that a specified number of them successfully terminate.” If you have a pod that needs to run until completion, no matter what, a Kubernetes job is for you. Think of a job as a batch processor.

    Kubernetes cron jobs are a relatively new thing. But I am ecstatic that this is a standard feature in modern Kubernetes clusters. It means that I can tell the cluster one time that I want a job to run at certain times. Since cron jobs build on top of the existing job functionality, I know that the job will be run to completion. The job will run on one of the six nodes I have in my Kubernetes cluster. Even if a pod is destroyed mid-job, it will spin up on another node and run there. Highly available cron jobs have been a beast I’ve tried to slay many times. This problem is now solved, and all I have to do is implement it.
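As a sketch of what such a job looks like (`batch/v1beta1` was the current CronJob API version at the time; the schedule, name and image are illustrative):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-task
spec:
  schedule: "0 2 * * *"    # standard cron syntax: 02:00 every night
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: task
            image: busybox
            args: ["sh", "-c", "echo running scheduled task"]
          restartPolicy: OnFailure   # reschedule the pod if it fails mid-job
```

At each scheduled time the cluster creates a Job, which in turn runs the pod to completion on whichever node is available.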

  35. Tomi Engdahl says:

    Introduction to Istio; It Makes A Mesh Of Things

    One of the key metrics or performance indicators of a microservices software architecture and environment is lead time (the amount of time it takes to get from idea to production). Many things have an impact on lead time, such as decision-making time, how quickly the code can be implemented, testing, continuous integration, etc.

    The combination of code complexity and code heft (i.e. number of lines of code) can put a drag on an implementation. There’s got to be a better way. And there is!

    Istio is a sidecar container implementation of the features and functions needed when creating and managing microservices. Monitoring, tracing, circuit breakers, routing, load balancing, fault injection, retries, timeouts, mirroring, access control, rate limiting, and more, are all a part of this.

    It also (and this is important) moves operational aspects away from code development and into the domain of operations. Why should a developer be burdened with circuit breakers and fault injection? Should their code respond to them? Yes. But handle or create them? No: take that out of your code and let your code focus on the underlying business domain.

    Istio’s functionality running outside of your source code introduces the concept of a Service Mesh. That’s a coordinated group of one or more binaries that make up a mesh of networking functions. If you haven’t already, you’re going to hear about Service Mesh a lot in the coming months.
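As one concrete illustration (the resource shape follows Istio's v1alpha3 traffic-management API; the service name is hypothetical), retries and timeouts move out of application code and into a VirtualService resource:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews              # the service this routing rule applies to
  http:
  - route:
    - destination:
        host: reviews
    timeout: 2s          # fail the request if no reply within 2 seconds
    retries:
      attempts: 3        # retry up to 3 times ...
      perTryTimeout: 1s  # ... giving each attempt 1 second
```

The application containers stay unchanged; the sidecar proxies enforce these policies on their behalf.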

  36. Tomi Engdahl says:

    Container orchestration top trumps: Let’s just pretend you don’t use Kubernetes already
    Open source or Hotel California, there’s something for everyone

    Container orchestration comes in different flavours, but actual effort must be put into identifying the system most palatable.

    Yes, features matter, but so too does the long-term viability of the platform. There’s been plenty of great technologies in the history of the industry, but what’s mattered has been their viability, as defined by factors such as who owns them, whether they are open source (and therefore sustained by a community), or outright M&A.

    CoreOS, recently bought by Red Hat, offered Fleet. Fleet, alas for Fleet users, was discontinued because Kubernetes “won”.

    First, however, the basics: what is container orchestration? Orchestration platforms are to containers as VMware’s vSphere and vRealize Automation are to Virtual Machines: they are the management, automation and orchestration layers upon which a decade or more of an organization’s IT will ultimately be built.

    Just as few organizations with any meaningful automation oscillate between Microsoft’s Hyper-V and VMware’s ESXi, the container orchestration solutions will have staying power. Over the years an entire ecosystem of products, services, scripts and more will attach to our container orchestration solutions, and walking away from them would mean burning down a significant portion of our IT and starting over.


    Skipping right to the end, Kubernetes’s flavour is that of victory. Kubernetes is now the open-system container orchestration system. Mainframe people – who like to refer to anything that’s not a mainframe as an open system – will cringe at my using the term open system here. I make no apologies.

    The major public clouds pretend to be open systems, but everywhere you turn there’s lock-in. They’re mainframes reborn; and when talking about containers, remember that most of them probably run on the major public clouds.

    Developed by Google, Kubernetes was designed specifically to be an open, accessible container management platform. Google handed the technology to the Cloud Native Computing Foundation (CNCF) – another in a long line of open-source foundations run by a group of technology vendors.

    Kubernetes is part of an emerging stack of technologies that form the backbone of open source IT automation.

    Kubernetes is fantastic for the sorts of workloads that most people place in containers: stateless, composable workloads. They’re the cattle in the cattle versus pets discussion. Some organizations, however, have reason to keep a few pets around. That’s where Mesosphere Marathon comes in.
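
    For the stateless “cattle” case described above, Kubernetes works declaratively: a Deployment states the desired replica count and the orchestrator keeps it true, replacing any instance that dies. A minimal sketch (image and names are illustrative, not from the article):

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3            # desired number of identical, disposable instances
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.15
            ports:
            - containerPort: 80
    ```

    Any one of the three replicas can be killed and rescheduled without ceremony, which is exactly why stateful “pets” are a poor fit for this model.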

    Marathon is a container orchestration framework for Apache Mesos that is designed to launch long-running applications. It offers key features for running applications in a clustered environment.
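
    A Marathon application is described as a JSON document posted to Marathon’s REST API; a minimal long-running app definition looks roughly like this (field values are illustrative):

    ```json
    {
      "id": "/long-running-app",
      "cmd": "while true; do echo alive; sleep 60; done",
      "cpus": 0.5,
      "mem": 256,
      "instances": 1
    }
    ```

    Marathon supervises the process and restarts it if it exits, which is the “long-running application” behaviour referred to above.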

    Hotel California-class

    Amazon’s EC2 Container Service (ECS) stands up a series of EC2 instances and installs Docker on them. It then lashes them together into a cluster and lets you manage them. Basically, it’s Kubernetes, but with a distinctly Hotel California aftertaste.

    Azure Container Service (ACS): ditto what was said about ECS. But with the Amazon replaced with Microsoft in the recipe.

    Google Container Engine (GKE) is Google’s version of the above.

    Cloud Foundry

    Cloud Foundry should be thought of as OpenStack for containers. It is corporate open source at its finest. It was written by VMware and then transferred to Pivotal when Pivotal was spun out.

    CoreOS versus Docker versus the world

    Docker Swarm is Docker’s container orchestration offering.

    In a container

    So where does this trot through the landscape leave us? No surprises: in the container orchestration world, Kubernetes is the container-farming king – but it isn’t ruler of all we survey. Mesosphere occupies a decent niche as the kennel for your pets. Just beware Amazon, Azure and Google – these are Hotel California: you can check in your code, but it most likely won’t ever leave.

  37. Tomi Engdahl says:

    Introducing conu – Scripting Containers Made Easier

    There has been a need for a simple, easy-to-use handler for writing tests and other code around containers that would implement helpful methods and utilities. For this we introduce conu, a low-level Python library.

    In addition to basic image and container management methods, it provides other often used functions, such as container mount, shortcut methods for getting an IP address, exposed ports, logs, name, image extending using source-to-image, and many others.
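
    To illustrate the kind of convenience layer conu provides, here is a minimal sketch of a container-scripting helper built directly on the docker CLI. The function and parameter names are hypothetical, for illustration only; they are not conu’s actual API:

    ```python
    import subprocess

    def build_run_command(image, name=None, ports=None):
        """Build a `docker run` command line for a detached container.

        `name` maps to --name; `ports` is a {host: container} dict
        mapped to -p flags. Purely builds the argument list.
        """
        cmd = ["docker", "run", "-d"]
        if name:
            cmd += ["--name", name]
        for host_port, container_port in (ports or {}).items():
            cmd += ["-p", f"{host_port}:{container_port}"]
        cmd.append(image)
        return cmd

    def run_container(image, **kwargs):
        """Start the container and return its ID (requires a Docker daemon)."""
        out = subprocess.check_output(build_run_command(image, **kwargs))
        return out.decode().strip()
    ```

    A library like conu wraps this kind of plumbing in higher-level image and container objects so test code can stay readable.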

  38. Tomi Engdahl says:

    How Linux containers have evolved

    Containers have come a long way in the past few years. We walk through the timeline

    In the past few years, containers have become a hot topic among not just developers, but also enterprises. This growing interest has caused an increased need for security improvements and hardening, and preparing for scalability and interoperability. This has necessitated a lot of engineering, and here’s the story of how much of that engineering has happened at an enterprise level at Red Hat.

  39. Tomi Engdahl says:

    Tips for building a Kubernetes proof of concept

    Kubernetes’ powerful automation features can streamline operations, saving time and costs. Here’s how to make a business case for it.

  40. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Nvidia announces support for Kubernetes container orchestration on Nvidia GPUs and will contribute its GPU enhancements to the Kubernetes open source community

    Nvidia brings joy by bringing GPU acceleration to Kubernetes

    This has been a long time coming, but during his GTC keynote, Nvidia CEO Jensen Huang today announced support for the Google-incubated Kubernetes container orchestration system on Nvidia GPUs.

    The idea here is to optimize the use of GPUs in hyperscale data centers — the kind of environments where you may use hundreds or thousands of GPUs to speed up machine learning processes — and to allow developers to take these containers to multiple clouds without having to make any changes.

    “Now that we have all these accelerated frameworks and all this accelerated code, how do we deploy it into the world of data centers?,” Jensen asked. “Well, it turns out there is this thing called Kubernetes. […] This is going to bring so much joy. So much joy.”

    Nvidia is contributing its GPU enhancements to the Kubernetes open-source community. Machine learning workloads tend to be massive, both in terms of the computation that’s needed and the data that drives it. Kubernetes helps orchestrate these workloads and with this update, the orchestrator is now GPU-aware.

    “Kubernetes is now GPU-aware. The Docker container is now GPU-accelerated.”
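
    In practice, GPU-awareness surfaces through Kubernetes’ extended resources: a pod requests GPUs from the scheduler via the `nvidia.com/gpu` resource name, assuming the NVIDIA device plugin is installed on the cluster. A minimal sketch (pod and image names are illustrative):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: cuda-job
    spec:
      containers:
      - name: trainer
        image: nvidia/cuda:9.0-base
        resources:
          limits:
            nvidia.com/gpu: 1   # schedule onto a node with a free GPU
    ```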

  41. Tomi Engdahl says:

    Stephanie Condon / ZDNet:
    Docker Enterprise Edition 2.0 launches with new features that make it easier to securely embrace Kubernetes and other container orchestration tools

    Docker Enterprise Edition 2.0 makes it easier to use Kubernetes

    The second edition of Docker’s enterprise product adds more security and management features for seamless and safe Kubernetes adoption.

