Docker and other Linux containers

Virtual machines are mainstream in cloud computing. The newest development in this arena is fast and lightweight process virtualization. Linux-based container infrastructure is an emerging cloud technology that provides its users an environment as close as possible to a standard Linux distribution.

The article Linux Containers and the Future Cloud explains that, as opposed to para-virtualization solutions (Xen) and hardware virtualization solutions (KVM), which provide virtual machines (VMs), containers do not create other instances of the operating system kernel. One advantage of containers over VMs is that starting and shutting down a container is much faster than starting and shutting down a VM. The idea of process-level virtualization is not new in itself (remember Solaris Zones and BSD jails).

All containers on a host run under the same kernel. Basically, a container is a Linux process (or several processes) that has special features and runs in an isolated environment configured on the host. Containerization is a way of packaging up applications so that they share the same underlying OS but are otherwise fully isolated from one another, with their own CPU, memory, disk and network allocations to work within – going a few steps further than the usual process separation in Unix-y OSes, but not all the way down the per-app virtual machine route. The underlying infrastructure of modern Linux-based containers consists mainly of two kernel features: namespaces and cgroups. Well-known Linux container technologies are Docker, OpenVZ, Google containers, Linux-VServer and LXC (LinuX Containers).
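
To get a feel for these two building blocks without any container tooling, you can poke at them directly from a shell. A rough sketch, assuming util-linux's unshare tool and a cgroup v1 memory controller mounted at /sys/fs/cgroup:

$ sudo unshare --fork --pid --mount-proc bash   # shell in a new PID namespace; ps now sees only this shell
$ sudo mkdir /sys/fs/cgroup/memory/demo         # create a memory cgroup
$ echo 100M | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes   # cap member processes at 100 MB

Container runtimes automate exactly this kind of plumbing, adding network, mount and user namespaces, image handling and more on top.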

Docker is an open-source project that automates the creation and deployment of containers: an open platform for developers and sysadmins to build, ship, and run distributed applications. It consists of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows.
Docker started as an internal project at a Platform-as-a-Service (PaaS) company called dotCloud at the time, now Docker Inc. Docker is currently available only for Linux (Linux kernel 3.8 or above). It utilizes the LXC toolkit. It runs on distributions like Ubuntu 12.04 and 13.04, Fedora 19 and 20, and RHEL 6.5 and above, and on cloud platforms like Amazon EC2, Google Compute Engine and Rackspace.

Linux containers are turning into a way of packaging up applications and related software for movement over the network or Internet. You can create images by running commands manually and committing the resulting container, but you can also describe them with a Dockerfile. Docker images can be stored on a public repository, and Docker is able to create snapshots of container state. Docker, the company that sponsors the open source project, is gaining allies in making its commercially supported Linux container format a de facto standard. Red Hat has woken up to the growth of Linux containers and has begun certifying applications running in the sandboxing tech.
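
As a small illustration (base image and package chosen arbitrarily), a complete Dockerfile can be as short as this:

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

$ sudo docker build -t myuser/my-nginx .   # build an image from the Dockerfile
$ sudo docker push myuser/my-nginx         # share it via a public repository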

Docker was all over IT news last week because Docker 1.0 was released. Here are links to several articles on Docker:

Docker opens online port for packaging and shipping Linux containers

Docker, Open Source Application Container Platform, Has 1.0 Coming Out Party At Dockercon14

Google Embraces Docker, the Next Big Thing in Cloud Computing

Docker blasts into 1.0, throwing dust onto traditional hypervisors

Automated Testing of Hardware Appliances with Docker

Continuous Integration Using Docker, Maven and Jenkins

Getting Started with Docker

The best way to understand Docker is to try it!

This Docker thing looks interesting. Maybe I should spend some time testing it.
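
The canonical first test on a machine with Docker installed is a one-liner:

$ sudo docker run hello-world   # pulls a tiny test image and runs it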


340 Comments

  1. Tomi Engdahl says:

    Containerisation of applications and operations

    Ubuntu offers the most complete portfolio of container support in a Linux distribution, with snaps for singleton apps, LXD by Canonical for lightweight VM-style machine containers, and auto-updating Docker snap packages available in stable, candidate, beta and edge channels. The Canonical Distribution of Kubernetes 1.6, representing the very latest upstream container infrastructure, has 100% compatibility with Google’s hosted Kubernetes service GKE.

    Snaps are now supported across 10 Linux distributions. More than 1,000 snaps have been published.

    Snaps are used on Linux devices and in sectors such as IoT (e.g. Blue Horizon, openHAB), networking (e.g. Quagga, Free Range Routing, SONIC, Keepalived), desktop (KDE, web, LibreOffice and GNOME apps), and cloud/server (for example Kubernetes, etcd, Rocket.Chat). GNOME Software supports snap installation directly from a link on a web page.

    Source: https://insights.ubuntu.com/2017/04/13/ubuntu-17-04-supports-widest-range-of-container-capabilities/
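
    For reference, day-to-day snap usage is a couple of commands (the package name here is just an example):

    $ sudo snap install vlc                   # install from the stable channel
    $ sudo snap refresh vlc --channel=beta    # move it to the beta channel
    $ snap list                               # show installed snaps and versions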

  2. Tomi Engdahl says:

    Docker brings containerization to legacy apps
    https://techcrunch.com/2017/04/19/docker-announces-new-containerization-service-for-legacy-apps/?sr_share=facebook

    At the DockerCon conference today in Austin, Docker announced a new service called the Modernize Traditional Applications (MTA) Program that enables customers to move certain legacy apps into Docker containers, put them under management of Docker Enterprise Edition and prepare them for use on more modern infrastructure.

    Traditionally, applications have been delivered as a single monolithic entity. With microservices, the holy grail of containerization, you break down your application into discrete pieces, making it much easier to deploy and manage. In this kind of environment, developers can concentrate on programming tasks and the operations team can worry about deploying the applications. This is typically referred to as a DevOps model.

    The company found that if a legacy app meets certain criteria, it can actually guarantee a successful move to a container for a fixed price within a defined time period.

    He said that companies are spending up to 80 percent of their IT budgets supporting these legacy applications, and believes that if Docker can offer a way to reduce that spending by moving them to more modern architecture without a lot of heavy lifting, the exercise would seem to be a no-brainer, especially with the guarantee.

  3. Tomi Engdahl says:

    Flatpak and Snaps aren’t destined for graveyard of failed Linux tech yet
    Independence from distros
    https://www.theregister.co.uk/2017/04/28/snap_flatpacks_the_future_of_desktop_linux/

    There’s a change coming to the world of Linux that’s potentially big enough to make us rethink what a distro is and how it works. That change is Ubuntu’s Snap packages and the parallel effort dubbed Flatpaks.

    If you’re even remotely up to speed with trends in server computing you’ll have heard of containers. Snaps and Flatpaks are more or less the desktop versions.

    While there’s still some polishing needed in most distros’ current implementations of Snap packages that I’ve used, for the most part the experience from the user’s point of view is pretty much the same as any other software.

    I don’t think that distros will disappear as a result of Flatpaks/Snaps, but I do think that the division between rolling release distros like Arch and conservative distros like Debian will be less important.

  4. Tomi Engdahl says:

    Breaking up the Container Monolith
    https://developers.redhat.com/blog/2017/05/04/breaking-up-the-container-monolith/?sc_cid=7016000000127ECAAY

    If you look at containers, containers are just linux in a standard image format. So, what does OpenShift/Kubernetes need to run a container? It needs a standard container format. It needs a way to pull and push containers to/from registries. It needs a way to explode the container to disk. It needs a way to execute the container image. Finally and optionally, it needs a container management API. Red Hat has been working on these problems under the ‘Project Atomic’ flag. This is a collection of tools, all under one CLI command called atomic.
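
    As a hedged sketch of that CLI (the image name is hypothetical; atomic reads run/install instructions from LABELs in the image):

    $ atomic scan registry.example.com/myapp    # check the image for known vulnerabilities
    $ atomic run registry.example.com/myapp     # run it using the command recorded in its RUN label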

  5. Tomi Engdahl says:

    Make Your Containers Transparent with Puppet’s Lumogon
    http://www.linuxjournal.com/content/make-your-containers-transparent-puppets-lumogon

    As development and IT shops look for ways to more quickly test and deploy software or scale out their environments, containers have become a go-to solution. With Docker and similar tools, you can spin up dev and production containerized platforms that are fast, lightweight and consistent.

    The benefit and Achilles heel of containers is that they’re often just black boxes. If you run an Ubuntu or CentOS container, it just works. If you run them with a :latest flag, you get the latest version—whatever that might be. That might be okay for development or a quick test, but not for production.

    With Puppet’s new Lumogon, you can get past containers’ shortcomings. It quickly gathers detailed data about what’s inside all the containers running on a host, and it presents the results in JSON or on the Lumogon website. Instead of trying to gather information that’s scattered in Dockerfiles, CI/CD jobs, UATs or source control documents, Lumogon gathers it in single, centralized reports.

    It works by harvesting metadata from the Docker API and each running container’s namespace using Lumogon’s open-source inspector tool.

    Lumogon outputs reports in JSON, which can be parsed and piped into a range of analysis tools.

    You might imagine other ways you could use the Lumogon report data, such as writing automated CI tests that ensure an image doesn’t contain a vulnerability.

    If you regularly use Dockerfiles to build your container images, you’re probably in the habit of adding labels. You can include as many as you like, and they serve as markers that provide intelligence that can be standardized and shared. Lumogon can report on these labels and give you the information alongside other host and container data.
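
    Putting those two pieces together, a sketch of the workflow (the label keys are hypothetical; the scan command follows Lumogon's published usage and needs access to the Docker socket):

    LABEL com.example.version="1.4.2" com.example.team="payments"   # in your Dockerfile

    $ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock puppet/lumogon scan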

  6. Tomi Engdahl says:

    3 benefits you didn’t expect from Linux containers
    http://www.networkworld.com/article/3197121/linux/3-benefits-you-didnt-expect-from-linux-containers.html

    Containers started with technical people, so many of the benefits are known instinctively by users. But if you are new to containers, these benefits might not be so obvious.

    1. Containers take the fear out of downgrades

    Say you come into an operations scenario and have a six-hour change window. You have to upgrade 1,000 servers, and 200 fail. With containers, you can roll back in seconds instead of many hours.

    2. Containers help developers and systems administration staff collaborate by defining a contract

    3. Containers will help companies retain talent

    I don’t think I am overstating things by saying containers are cool (at least in the technology world). If your company can say it is using Linux containers to innovate and meet dynamic business demand, you’re likely to attract and retain the most talented and forward-thinking tech employees—everybody wants to work on the technology.

  7. Tomi Engdahl says:

    Live stream to YouTube with your Raspberry Pi and Docker
    http://blog.alexellis.io/live-stream-with-docker/

    In this guide we’ll set up our Raspberry Pi so that we can stream live video to YouTube to either a public, unlisted or private stream.

    Using a pre-built Docker image means we know exactly what we’re getting and instead of having to go through lots of manual steps we can type in a handful of commands and get started immediately.

  8. Tomi Engdahl says:

    The Linux cloud swap that spells trouble for Microsoft and VMware
    Containers just wanna be hypervisors
    https://www.theregister.co.uk/2017/06/01/linux_open_source_container_threat_to_vmware_microsoft/

    Just occasionally, you get it right. Six years ago, I called containers “every sysadmin’s dream,” and look at them now. Even the Linux Foundation’s annual bash has been renamed from “LinuxCon + CloudOpen + Embedded Linux Conference” to “LinuxCon + ContainerCon”.

    Why? Because virtualization has been enterprise IT’s favourite toy for more than a decade, and the rise of “cloud computing” has promoted it even more. When something gets that big, everyone jumps on board and starts looking for an edge – and containers are much more efficient than whole-system virtualization, so there are savings to be made and performance gains to win. The price is that admins have to learn new security and management skills and tools.

    But an important recent trend is one I didn’t expect: these two very different technologies beginning to merge.

    Traditional virtualization is a special kind of emulation: you emulate a system on itself. Mainframes have had it for about 40 years, but everyone thought it was impossible on x86. All the "type 1" and "type 2 hypervisor" stuff is marketing guff – VMware came up with a near-native-speed PC emulator for the PC. It's how everything from KVM to Hyper-V works. Software emulates a whole PC, from the BIOS to the disks and NICs, so you can run one OS under another.

    It's conceptually simple. The hard part was making it fast. VMware's big innovation was running most of the guest's code natively, and finding a way to trap just the "ring 0" kernel-mode code and run only that through its software x86 CPU emulation. Later, others worked out how and did the same, then Intel and AMD extended their chips to hardware-accelerate running ring-0 code under another OS – by inserting a "ring -1" underneath.

    But it’s still very inefficient.

    Yes, it’s improved, there are good management tools and so on, but all PC OSes were designed around the assumption that they run on their own dedicated hardware. Virtualization is still a kludge – but just one so very handy that everyone uses it.

    That’s why containers are much more efficient: they provide isolation without emulation. Normal PC OSes are divided into two parts: the kernel and drivers in ring 0, and all the ordinary unprivileged code – the “GNU” part of GNU/Linux – and your apps, in ring 3.

    With containers, a single kernel runs multiple separate, walled-off userlands (the ring 3 stuff). Each thinks it’s the only thing on the machine. But the kernel keeps total control of all the processes in all the containers.

    There’s no emulation, no separate memory spaces or virtual disks. A single kernel juggles multiple processes in one memory space, as it was designed to do. It doesn’t matter if a container holds one process or a thousand. To the kernel, they’re just ordinary programs – they load and can be paused, duplicated, killed or restarted in milliseconds.

    The hypervisor that isn’t a hypervisor

    Canonical has come up with something like a combination – although it admittedly has limitations. Its LXD “containervisor” runs system containers – ones holding a complete Linux distro from the init system upwards. The “container machines” share nothing but the kernel, so they can contain different versions of Ubuntu to the host – or even completely different distros.

    LXD uses btrfs or zfs to provide snapshotting and copy-on-write, permitting rapid live-migration between hosts. Block devices on the host – disk drives, network connections, almost anything – can be dedicated to particular containers, and limits set, and dynamically changed, on RAM, disk, processor and IO usage. You can change how many CPU cores a container has on the fly, or pin containers to particular cores.
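
    With the lxc client that looks roughly like this (container name and limits arbitrary):

    $ lxc launch ubuntu:16.04 c1              # create and start a container machine
    $ lxc config set c1 limits.cpu 2          # change CPU allocation on the fly
    $ lxc config set c1 limits.memory 512MB   # likewise for RAM
    $ lxc snapshot c1 clean                   # btrfs/zfs-backed snapshot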

    … and containers that aren’t really containers

    What’s the flipside of trying to make containers look like VMs? A hypervisor trying very hard to make VMs look like containers, complete with endorsement from an unexpected source.

    When IBM invented hypervisors back in the 1960s, it created two different flavours of mainframe OS – ones designed to host others in VMs, and other radically different ones designed solely to run inside VMs.

    Some time ago, Intel modified Linux into something akin to a mainframe-style system: a dedicated guest OS, plus a special hypervisor designed to run only that OS. The pairing of a hypervisor that will only run one specific Linux kernel, plus a kernel that can only run under that hypervisor, allowed Intel to dispense with a lot of baggage on both sides.

    The result is a tiny, simple hypervisor and tiny VMs, which start in a fraction of a second and require a fraction of the storage of conventional ones, with almost no emulation involved. In other words, much like containers.

    Intel announced this under the slightly misleading banner of “Clear Containers” some years ago. It didn’t take the world by storm, but slowly, support is growing. First, CoreOS added support for Clear Containers into container-based OSes. Later, Microsoft added it to Azure. Now, though, Docker supports it, which might speed adoption.

    Summary? Now both Docker and CoreOS rkt containers can be started in actual VMs, for additional isolation and security – whereas a Linux distro vendor is offering a container system that aims to look and work like a hypervisor. These are strange times.

  9. Tomi Engdahl says:

    Qualys Launches Container Security Product
    http://www.securityweek.com/qualys-launches-container-security-product

    Cloud-based security and compliance solutions provider Qualys on Monday announced a new product designed for securing containers across cloud and on-premises deployments.

    Qualys Container Security, which the company expects to become available in beta starting in July 2017, aims to help organizations proactively integrate security into container deployments and DevOps processes by extending visibility, vulnerability detection and policy compliance checks.

    One of the main features of the initial release will allow users to discover containers and track changes in real time. Organizations can visualize assets and relationships, enabling them to identify and isolate exposed elements.

    The product also provides vulnerability analysis capabilities for images, registries and containers. These capabilities can be integrated via the Qualys API into an organization’s Continuous Integration (CI) and Continuous Development (CD) tool chains, allowing DevOps and security teams to scan container images for known flaws before they are widely distributed.

    “Containers are core to the IT fabric powering digital transformation,” said Philippe Courtot, chairman and CEO of Qualys. “Our new solution for containers enables customers on that journey to incorporate 2-second visibility and continuous security as a critical part of their agile development.”

  10. Tomi Engdahl says:

    SQL Server on Linux
    Jun 15, 2017 By John S. Tonello
    http://www.linuxjournal.com/content/sql-server-linux

    When Wim Coekaerts, Microsoft’s vice president for open source, took the stage at LinuxCon 2016 in Toronto last summer, he came not as an adversary, but as a longtime Linux enthusiast promising to bring the power of Linux to Microsoft and vice versa. With the recent launch of SQL Server for Linux, Coekaerts is clearly having an impact.

    PowerShell for Linux and bash for Windows heralded the beginning, but the arrival of SQL Server, one of the most popular relational databases out there, offers Linux shops some real opportunities—and a few conundrums.

    Clearly, the opportunity to deploy SQL Server on something other than a Windows Server means you can take advantage of the database’s capabilities without having to manage Windows hosts to do it.

    You can install SQL Server on Red Hat Enterprise Linux 7.3, Ubuntu 16.04, SUSE Linux Enterprise Server v12 SP2 or pretty much anywhere as a Docker container.
    Installing and Running

    To get a taste of SQL Server for Linux, I decided to run it from a Docker image running on a separate Ubuntu 16.04 box with well more than the 4GB of RAM and 4GB of storage required. I set it up on a remote Linux host so I could test remote connections.

    Pulling SQL Server from Docker is trivial:

    $ sudo docker pull microsoft/mssql-server-linux

    Depending on your network speed, this will set up the image in just a couple minutes. When the pull is complete, you can start the SQL Server container from the command line with a few straightforward parameters:

    $ sudo docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=(!)Superpassword' -p 1433:1433 -d microsoft/mssql-server-linux
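
    Once the container is up, a quick sanity check is to connect with sqlcmd; this assumes the image bundles the client tools under /opt/mssql-tools, as Microsoft's Linux images of that era did:

    $ sudo docker exec -it <container-id> /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '(!)Superpassword'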

  11. Tomi Engdahl says:

    What is Docker and why is it so darn popular?
    http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/

    Docker is hotter than hot because it makes it possible to get far more apps running on the same old servers and it also makes it very easy to package and ship programs. Here’s what you need to know about it.

    If you’re in data center or cloud IT circles, you’ve been hearing about containers in general and Docker in particular non-stop for a few years now. With the release of Docker 1.0 in June 2014, the buzz became a roar.

    All the noise is happening because companies are adopting Docker at a remarkable rate. At OSCon in July 2014, I ran into numerous businesses that were already moving their server applications from virtual machines (VM) to containers. Indeed, James Turnbull, Docker’s VP of services and support, told me at the conference that three of the largest banks that had been using Docker in beta were moving it into production. That’s a heck of a confident move for any 1.0 technology, but it’s almost unheard of in the safety-first financial world.

    Three years later, Docker is bigger than ever. Forrester analyst Dave Bartoletti thinks only 10 percent of enterprises currently use containers in production now, but up to a third are testing them. 451 Research agrees. By 451's count, container technologies, most of it Docker, generated $762 million in revenue in 2016. In 2020, 451 forecasts revenue will reach $2.7 billion, for a 40 percent compound annual growth rate (CAGR).

    Docker, an open-source technology, isn’t just the darling of Linux powers such as Red Hat and Canonical. Proprietary software companies such as Microsoft have also embraced Docker.

    VM hypervisors, such as Hyper-V, KVM, and Xen, all are “based on emulating virtual hardware. That means they’re fat in terms of system requirements.”

    Containers, however, use shared operating systems. That means they are much more efficient than hypervisors in system resource terms. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This in turn means you can “leave behind the useless 99.9 percent VM junk, leaving you with a small, neat capsule containing your application,” said Bottomley.

    Therefore, according to Bottomley, with a perfectly tuned container system, you can have as many as four-to-six times the number of server application instances as you can using Xen or KVM VMs on the same hardware.

    Containers are an old idea.

    Containers date back to at least the year 2000 and FreeBSD Jails. Oracle Solaris also has a similar concept called Zones while companies such as Parallels, Google, and Docker have been working in such open-source projects as OpenVZ and LXC (Linux Containers) to make containers work well and securely.

    Indeed, few of you know it, but most of you have been using containers for years. Google has its own open-source container technology, lmctfy (Let Me Contain That For You). Anytime you use some of Google's functionality — Search, Gmail, Google Docs, whatever — you're issued a new container.

    Docker, however, is built on top of LXC.

    This, in turn, means that one thing hypervisors can do that containers can’t is to use different operating systems or kernels.

    Docker brings several new things to the table that the earlier technologies didn’t. The first is that it’s made containers easier and safer to deploy and use than previous approaches. In addition, because Docker’s partnering with the other container powers, including Canonical, Google, Red Hat, and Parallels, on its key open-source component libcontainer, it’s brought much-needed standardization to containers.

    In a nutshell, here’s what Docker can do for you: It can get more applications running on the same hardware than other technologies; it makes it easy for developers to quickly create ready-to-run containered applications; and it makes managing and deploying applications much easier.

  12. Tomi Engdahl says:

    Jetico’s BestCrypt Container Encryption for Linux
    http://www.linuxjournal.com/content/jeticos-bestcrypt-container-encryption-linux-0?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    Cyber-attacks are now constant, threats to privacy are increasing, and more rigid regulations are looming worldwide. To help IT folks relax in the face of these challenges, Jetico updated its BestCrypt Container Encryption solution to include Container Guard.

    This unique feature of Jetico’s Linux file encryption protects container files from unauthorized or accidental commands—like copying, modification, moving, deletion and re-encryption—resulting in bolstered security and more peace of mind. Only users with the admin password can disable Container Guard, increasing the security of sensitive files.

    http://www.jetico.com/

  13. Tomi Engdahl says:

    Containing container chaos with Kubernetes
    https://opensource.com/life/16/9/containing-container-chaos-kubernetes?sc_cid=7016000000127ECAAY

    You’ve made the switch to Linux containers. Now you’re trying to figure out how to run containers in production, and you’re facing a few issues that were not present during development. You need something more than a few well-prepared Dockerfiles to move to production. What you need is something to manage all of your containers: a container orchestration system.

  14. Tomi Engdahl says:

    AWS Quickstart for Kubernetes
    http://www.linuxjournal.com/content/aws-quickstart-kubernetes?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    Kubernetes is an open-source cluster manager that makes it easy to run Docker and other containers in production environments of all types (on-premises or in the public cloud). What is now an open community project came from development and operations patterns pioneered at Google to manage complex systems at internet scale.

    AWS Quick Starts are a simple and convenient way to deploy popular open-source software solutions on Amazon’s infrastructure. While the current Quick Start is appropriate for development workflows and small team use, we are committed to continuing our work with the Amazon solutions architects to ensure that it captures operations and architectural best practices. It should be easy to get started now, and achieve long term operational sustainability as the Quick Start grows.

  15. Tomi Engdahl says:

    Introducing ReShifter for Kubernetes: Backup, Restore, Migrate, Upgrade
    https://blog.openshift.com/introducing-reshifter-kubernetes-backup-restore-migrate-upgrade/?sc_cid=7016000000127ECAAY

    Are you using Kubernetes beyond development and testing, in a production setup? Would you like to be able to back up and restore your entire Kubernetes cluster (automatically) and restore it with a single action? What about upgrading to a new version of Kubernetes? Can you do this without downtimes? If you found yourself nodding along and/or answering “Yes” to any of the above questions, do read on.

  16. Tomi Engdahl says:

    runC, a lightweight universal container runtime, is a command-line tool for spawning and running containers according to the Open Container Initiative (OCI) specification. That’s the short version.
    https://opensource.com/life/16/8/runc-little-container-engine-could?sc_cid=7016000000127ECAAY

    https://runc.io/
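
    The basic flow from runc's documentation is a nice illustration of how thin the runtime is (busybox used as a convenient rootfs source):

    $ mkdir -p mycontainer/rootfs
    $ docker export $(docker create busybox) | tar -C mycontainer/rootfs -xf -   # borrow a root filesystem
    $ cd mycontainer && runc spec     # generate a default OCI config.json
    $ sudo runc run mycontainerid     # start the container described by config.json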

  17. Tomi Engdahl says:

    How to prepare and use Docker for web pentest by Júnior Carreiro
    https://pentestmag.com/prepare-use-docker-web-pentest-junior-carreiro/

    Docker is the world’s leading software containerization platform. Using Docker we can create different environments for each pentest type. With the use of containers, you can save each environment on a USB stick or leave it in the cloud. For example, you can use the environment in the cloud or copy it to any computer or laptop, regardless of distribution.

  18. Tomi Engdahl says:

    How Linux containers have evolved
    https://opensource.com/article/17/7/how-linux-containers-evolved?sc_cid=7016000000127ECAAY

    Containers have come a long way in the past few years. We walk through the timeline.

  19. Tomi Engdahl says:

    Flexible Container Images with OpenShift
    https://blog.openshift.com/flexible-container-images/?sc_cid=7016000000127ECAAY

    This post will describe proposed behaviors of a class of container I’ll call “flexible containers.” I’ll describe a few aspects of the container image, but also the behavior of the running container. The flexible container concept focuses on building container images in such a way that customization and configuration of software components is enabled and well documented.

  20. Tomi Engdahl says:

    Tom Krazit / GeekWire:
    Ex-Twitter engineers raise $10.5M Series A for microservices management startup Buoyant — Buoyant, a 13-person startup led by former Twitter engineers and now backed by a former member of Twitter’s board of directors, has raised a $10.5 million Series A round to apply lessons learned …

    Former Twitter engineers land $10.5M for startup Buoyant, leveraging lessons from the ‘fail whale’
    https://www.geekwire.com/2017/two-engineers-helped-kill-twitters-fail-whale-land-10-5m-buoyant-thinks-missing-link-microservices/

    Buoyant, a 13-person startup led by former Twitter engineers and now backed by a former member of Twitter’s board of directors, has raised a $10.5 million Series A round to apply lessons learned from revamping Twitter’s infrastructure to simplify the emerging world of microservices.

    Microservices are an evolution of software development strategies that has gained converts over the last several years. Developers used to build “monolithic” applications with one huge code base and three main components: the user-facing experience, a server-side application server that does all the heavy lifting, and a database. This is a fairly simple approach, but there are a few big problems with monolithic applications: they scale poorly and are difficult to maintain over time because every time you change one thing, you have to update everything.

    So microservices evolved inside of webscale companies like Google, Facebook, and Twitter as an alternative. When you break down a monolithic application into many smaller parts called services, which are wrapped up in containers like Docker, you only have to throw extra resources at the services that need help and you can make changes to part of the application without having to monkey with the entire code base.

    The price for this flexibility, however, is complexity.

    “That’s the biggest lesson we learned at Twitter,” said Morgan, the startup’s CEO. “It’s not enough to deploy stuff and package it up and run it in an orchestrator (like Kubernetes) … you’ve introduced something new, which is this significant amount of service-to-service communication” that needs to be tracked and understood to make sure the app works as designed, he said.

    Buoyant’s solution is what the company calls a “service mesh,” or a networked way for developers to monitor and control the traffic flowing between services as a program executes.

    Linkerd is the manifestation of its approach,

    “we’re only going to be successful as a company if we get Linkerd adoption,” Morgan said.

    This approach might sound familiar. In May, Google, IBM, and Lyft released Istio, a different open-source project aimed at accomplishing many of these same goals by improving the visibility and control of service-to-service communications.

    In a blog post scheduled to go live Tuesday, Buoyant plans to announce that it supports Istio with the latest release of Linkerd, and while the projects appear to be somewhat competitive, the company bent over backwards to emphasize that it sees Istio as a complementary part of a microservices architecture.

    https://linkerd.io/
    Resilient service mesh for cloud native apps
    linker∙d is a transparent proxy that adds service discovery, routing, failure handling, and visibility to modern software applications

  21. Tomi Engdahl says:

    How to install and setup LXC (Linux Container) on Fedora Linux 26
    https://www.cyberciti.biz/faq/how-to-install-and-setup-lxc-linux-container-on-fedora-linux-26/

    LXC is an acronym for Linux Containers. It is nothing but an operating system-level virtualization technology for running multiple isolated Linux distros (systems containers) on a single Linux host. This tutorial shows you how to install and manage LXC containers on Fedora Linux server.

    LXC is often described as a lightweight virtualization technology. You can think of LXC as a chroot jail on steroids.

    You can run CentOS, Fedora, Ubuntu, Debian, Gentoo or any other Linux distro using LXC.

    How can I create an Ubuntu Linux container?

    Type the following command to create an Ubuntu 16.04 LTS container:
    $ sudo lxc-create -t download -n ubuntu-c1 -- -d ubuntu -r xenial -a amd64
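
    After creation, the matching lxc-* tools drive the container's lifecycle:

    $ sudo lxc-start -n ubuntu-c1 -d    # start it in the background
    $ sudo lxc-attach -n ubuntu-c1      # get a root shell inside
    $ sudo lxc-ls --fancy               # list containers with state and IP addresses
    $ sudo lxc-stop -n ubuntu-c1        # shut it down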

  22. Tomi Engdahl says:

    Java inside docker: What you must know to not FAIL
    https://developers.redhat.com/blog/2017/03/14/java-inside-docker/

    Many developers are (or should be) aware that Java processes running inside Linux containers (docker, rkt, runC, lxcfs, etc.) don’t behave as expected when we let the JVM ergonomics set the default values for the garbage collector, heap size, and runtime compiler. When we execute a Java application without any tuning parameter, like “java -jar myapplication-fat.jar”, the JVM will adjust several parameters by itself to have the best performance in the execution environment.

    This blog post takes a straightforward approach to show developers what they should know when packaging their Java applications inside Linux containers.

    What is the solution?

    A slight change in the Dockerfile allows the user to specify an environment variable that defines extra parameters for the JVM.
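
    A hedged sketch of that pattern (the variable name and limits are examples; the shell form of CMD is needed so the variable expands):

    FROM openjdk:8-jre
    ENV JAVA_OPTIONS="-Xmx256m -Xms256m"
    COPY myapplication-fat.jar /app.jar
    CMD java $JAVA_OPTIONS -jar /app.jar

    $ docker run -d -e JAVA_OPTIONS="-Xmx512m" myimage   # override the defaults at run time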

    Conclusion

    The Java JVM until now doesn’t provide support for understanding that it’s running inside a container and that some of its resources, like memory and CPU, are restricted. Because of that, you can’t let the JVM ergonomics make the decision by itself regarding the maximum heap size.

    One way to solve this problem is using the Fabric8 Base image that is capable of understanding that it is running inside a restricted container and it will automatically adjust the maximum heap size if you haven’t done it yourself.

    There is experimental support in the JVM, included in JDK9, for cgroup memory limits in container (i.e. Docker) environments. Check it out: http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/5f1d1df0ea49

  23. Tomi Engdahl says:

    Configuring Containerized Services
    https://developers.redhat.com/blog/2017/07/10/configuring-containerized-services/?sc_cid=7016000000127ECAAY

    Let’s say we have an application composed of the following:

    Web service
    Database
    Key-value store
    Worker

    Let’s focus on the database now. Container images for databases usually come with an easy way to configure them via environment variables. The great thing about this approach is how easy it is to use.
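
    The official postgres image is a typical example: a usable database comes up from nothing but the environment variables it documents:

    $ sudo docker run -d --name db -e POSTGRES_PASSWORD=secret -e POSTGRES_DB=myapp postgres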

  24. Tomi Engdahl says:

    How to deploy Kubernetes on the Raspberry Pi
    https://opensource.com/article/17/3/kubernetes-raspberry-pi?sc_cid=7016000000127ECAAY

    In a few steps, set up your Raspberry Pi with Kubernetes using Weave Net.

    With the Weave Net 1.9 release, Weave Net now has ARM support. Kubeadm (and Kubernetes in general) works on multiple platforms. You can deploy Kubernetes to ARM with Weave just as you would on any AMD64 device by installing Docker, kubeadm, kubectl, and kubelet as usual on all machines. Then, initialize the master machine

    …..
    And that’s it! Kubernetes is deployed to your Raspberry Pi device. You didn’t have to do anything special compared to running on Intel/AMD64; Weave Net on ARM just works.
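
    For reference, the skeleton of those steps, sketched from the kubeadm docs of the time (token and address are placeholders; the Weave URL is the one its docs then used):

    $ sudo kubeadm init                              # on the master; prints a join token
    $ sudo kubeadm join --token <token> <master-ip>  # on each Raspberry Pi node
    $ kubectl apply -f https://git.io/weave-kube     # install Weave Net as the pod network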

  25. Tomi Engdahl says:

    Tom Krazit / GeekWire:
    Microsoft debuts Azure Container Instances, which make it easier to deploy and bill containers on Azure, and joins foundation that oversees Kubernetes

    Microsoft unveils Azure Container Instances, joins Cloud Native group, isolating AWS on Kubernetes
    https://www.geekwire.com/2017/microsoft-launches-new-container-service-joins-cloud-native-group-isolating-aws-kubernetes/

    Microsoft’s cloud business is making two notable moves involving containers Wednesday — unveiling a new service that aims to make it much easier to get up and running with containers, and joining a key industry foundation that oversees the open-source Kubernetes container orchestration project.

    The moves, embracing an orchestration technology that originated inside Google, bring Microsoft’s container strategy into sharper focus and present some interesting decisions for public cloud juggernaut Amazon Web Services.

    Microsoft’s new Azure Container Instances service, available as a public preview for Linux containers, allows developers to start containers on Azure and have them billed by the second. Containers are already attractive to developers because they spin up much faster than virtual machines, but Microsoft said ACI is “the fastest and easiest way to run a container in the cloud,” in a post Wednesday morning.

  26. Tomi Engdahl says:

    Malware? In my Docker container? It’s more common than you think
    Researchers say software prisons can hide nasty attack payloads
    https://www.theregister.co.uk/2017/07/28/malware_docker_containers/

    Black Hat Docker containers are the perfect disguise for malware infections, warn researchers.

    Speaking at the 2017 Black Hat USA conference in Las Vegas, Aqua Security researchers Michael Cherny and Sagie Dulce said [PDF] the Docker API can be abused for remote code execution and security bypass.

    Popular with developers as a way to test code, Docker allows for an entire IT stack (OS, firmware, and applications) to be run within an enclosed environment called a container. While the structure has great appeal for trying out code, it could also be abused by attackers to get malware infections running within a company.

    By targeting the developers for invasion, the researchers explain, attackers could not only get their malware code running in the company network, but could do so with heightened privileges.

    The attack involves duping the victim into opening a webpage controlled by the attacker, then using a REST API call to execute the Docker Build command to create a container that will execute arbitrary code. Through a technique called Host Rebinding, the attacker can bypass Same-Origin Policy protections and gain root access to the underlying Moby Linux VM.

    The Aqua Security duo says they have already reported one of the attack vectors – the vulnerable TCP component – to Docker, which has issued an update to remedy the flaw.

    Still, Cherny and Dulce say that other flaws in Docker could be exploited to not only infect the container, but the host machines and other VMs running on the system as well.

    “It is important to scan images to remove malicious code or vulnerabilities that may be exploited. Additionally, runtime protection ensures that your containers ‘behave’ and don’t perform any malicious actions.”

    https://www.blackhat.com/docs/us-17/thursday/us-17-Cherny-Well-That-Escalated-Quickly-How-Abusing-The-Docker-API-Led-To-Remote-Code-Execution-Same-Origin-Bypass-And-Persistence_wp.pdf

  27. Tomi Engdahl says:

    Why containers are the best way to test software performance
    https://opensource.com/article/17/8/containers-software-performance-and-scale?sc_cid=7016000000127ECAAY

    Containers can simulate real-life workloads for enterprise applications without the high cost of other solutions.

    Software performance and scalability are frequent topics when we talk about application development. A big reason for that is an application’s performance and scalability directly affect its success in the market.

    There are also tools to help identify the causes of performance and scalability issues, and other benchmark tools can stress-test systems to provide a relative measure of a system’s stability under a high load; however, we run into problems with performance and scale engineering when we try to use these tools to understand the performance of enterprise products. Generally, these products are not single applications

    We may not get any meaningful data about a product’s performance and scalability issues if we test only its individual components.

    The answer is containers.

    To understand an application’s performance and scalability, we need to stress the Puppet masters with high load from the agents running on various systems.

    A genuine question is: Why use containers and not virtual machines (VMs) or just bare-metal machines?

    The logic behind running containers comes down to how many container instances of a system we can launch, as well as their cost versus the alternatives.

  28. Tomi Engdahl says:

    Container adoption still low barks Cloud Foundation
    Lots of bennies, but can be time-consuming
    https://www.theregister.co.uk/2017/09/11/container_adoption_still_low_says_cloud_foundation/

    It’s no secret that switching to containers is difficult. According to some IT pros contacted by containerization tech firm Cloud Foundry [PDF], it’s so difficult that their adoption is still dragging in the enterprise sector.

    The benefits of packing software and services inside of containers are clear: your code becomes much easier to install and run on multiple platforms, such as Amazon Web Services, Microsoft Azure and Google Cloud. The teeny tiny problem is that configuring and managing the services, instead of dropping it inside a resource-hogging but easy-to-use virtual machine, can be very time-consuming.

    Cloud Foundry Foundation
    Global Perception Study
    Hope Versus Reality, One Year Later
    An Update on Containers
    https://www.cloudfoundry.org/wp-content/uploads/2012/02/Container-Report-2017-1.pdf

  29. Tomi Engdahl says:

    How to deploy Kubernetes on the Raspberry Pi
    https://opensource.com/article/17/3/kubernetes-raspberry-pi?sc_cid=7016000000127ECAAY

    In a few steps, set up your Raspberry Pi with Kubernetes using Weave Net.

    I learned a valuable lesson about developer workflow—track all of your changes. I made myself a small git repo locally and recorded all of the commands that I typed into the command line.

    Discovering Kubernetes

    In May 2015, I discovered Linux containers and Kubernetes. With Kubernetes, I thought it was fascinating that I could take part in a concept still technically in development—and I actually had access to it.

    At that time, Docker (v1.6, if I remember correctly) on ARM had a bug, which meant running Kubernetes on a Raspberry Pi device was virtually impossible. During those early 0.x releases, Kubernetes changed very quickly.

    I hacked my way to creating a Kubernetes node on Raspberry Pi anyway, and by the v1.0.1 Kubernetes release, I had it working, using Docker v1.7.1. This was the first fully functional way to deploy Kubernetes to ARM.

    The advantage of running Kubernetes on Raspberry Pi is that because ARM devices are so small they don’t draw a lot of power. If programs are built the right way, it’s possible to use the same commands for the same programs on AMD64. Having small IoT boards creates a great opportunity for education. It’s also beneficial for setting up demonstrations you need to travel for, like a conference. Bringing your Raspberry Pi is a lot easier than lugging your (often) large Intel machines.

    Distributed networking on the Raspberry Pi

    I discovered Weave Net through kubeadm. Weave Mesh is an interesting solution for distributed networking, so I began to read more about it.

    I’m excited for the possibility of industrial use cases for running Weave Net on Raspberry Pi, such as factories that need devices to be more mobile. Currently, deploying Weave Scope or Weave Cloud to Raspberry Pi might not be possible (though it is conceivable with other ARM devices) because I guess the software needs more available memory to run well.

    You can deploy Kubernetes to ARM with Weave just as you would on any AMD64 device by installing Docker, kubeadm, kubectl, and kubelet as usual on all machines.

  30. Tomi Engdahl says:

    Containers Won’t Kill the Server OS
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1332378&

    Despite declarations it is dying or dead, the OS cockroach will continue to evolve, driven by a rising tide of adoption for containers.

    Virtualization. Cloud computing. Linux containers. All are groundbreaking enterprise IT innovations, and all were thought to signal the extinction of the enterprise operating system.

    Virtualization called for the end of the fully-featured, heavy operating system. For some time, it was unclear if server virtualization and operating system vendors would be able to work together to develop a process for slimming down operating systems to run with applications inside virtual machines.

    The cloud was said to be abstracting away the operating system. It let developers focus higher up the stack on applications, rather than worrying about infrastructure.

    More recently, it was thought that containers–lightweight standalone chunks of executable code–would take over the responsibility of the operating system. When applications moved inside containers, the operating system was no longer responsible for divvying up resources.

    However, each of these technologies has instead highlighted the continued importance of the operating system as a foundation for business IT. Rather than being a legacy albatross, the operating system is proving to be more of an un-kill-able cockroach.

    Containers point to a future in which applications support multi-tasking and multi-tenancy in a distributed OS spanning clusters of hosts. Container operating systems will deliver only the components required to run a containerized application, weighing in at about a twentieth of a typical Linux distribution. This reduces overhead, simplifies maintenance and speeds the time it takes to develop and deploy containers.

    For perspective, Google uses containers (billions of them, in fact) and container operating system technology for Google Search. Few if any companies will ever reach Google's levels of spinning containers up and down, but most, if not all, companies will need the kind of efficiency and resilience that containers and container operating systems provide.

  31. Tomi Engdahl says:

    10 layers of Linux container security
    https://opensource.com/article/17/10/10-layers-container-security?sc_cid=7016000000127ECAAY

    Employ these strategies to secure different layers of the container solution stack and different stages of the container lifecycle.

    Enterprises require strong security, and anyone running essential services in containers will ask, “Are containers secure?” and “Can we trust containers with our applications?”

    Securing containers is a lot like securing any running process. You need to think about security throughout the layers of the solution stack before you deploy and run your container. You also need to think about security throughout the application and container lifecycle.

  32. Tomi Engdahl says:

    Docker gives into inevitable and offers native Kubernetes support
    https://techcrunch.com/2017/10/17/docker-gives-into-invevitable-and-offers-native-kubernetes-support/?utm_source=tcfbpage&sr_share=facebook

    When it comes to container orchestration, it seems clear that Kubernetes, the open source tool developed by Google, has won the battle for operations’ hearts and minds. It therefore shouldn’t come as a surprise to anyone who’s been paying attention that Docker announced native support for Kubernetes today at DockerCon Europe in Copenhagen.

    The company hasn’t given up completely on its own orchestration tool, Docker Swarm, but by offering native Kubernetes support for the first time

  33. Tomi Engdahl says:

    Why containers are the best way to test software performance
    https://opensource.com/article/17/8/containers-software-performance-and-scale?sc_cid=7016000000127ECAAY

    Containers can simulate real-life workloads for enterprise applications without the high cost of other solutions.

  34. Tomi Engdahl says:

    Containers aren’t just for applications
    https://www.redhat.com/en/blog/containers-aren’t-just-applications?sc_cid=7016000000127ECAAY

    Containers have grabbed so much attention because they demonstrated a way to solve the software packaging problem that the IT industry had been poking and prodding at for a very long time. Linux package management, application virtualization (in all its myriad forms), and virtual machines had all taken cuts at making it easier to bundle and install software along with its dependencies. But it was the container image format and runtime that is now standardized under the Open Container Initiative (OCI) that made real headway toward making applications portable across different systems and environments.

    Containers have also both benefited from and helped reinforce the shift toward cloud-native application patterns such as microservices.

  35. Tomi Engdahl says:

    Google and Cisco announce hybrid cloud partnership
    https://techcrunch.com/2017/10/25/google-and-cisco-announce-hybrid-cloud-partnership/?ncid=rss&utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&utm_content=FaceBook&sr_share=facebook

    Google and Cisco today announced a new partnership around helping their customers build more efficient hybrid cloud solutions. Unsurprisingly, given Google’s recent focus, this partnership centers around the Google-incubated Kubernetes container orchestration tool, as well as the Istio service mesh for connecting and securing microservices across clouds.

  36. Tomi Engdahl says:

    Cisco, Google, sitting in a tree, C-L-O-U-D-I-N-G
    HyperFlex learns to talk Kubernetes for consistent hybrid cloud merriment
    https://www.theregister.co.uk/2017/10/26/cisco_google_cloud_partnership/

    Cisco and Google have struck a partnership to stretch Kubernetes from on-prem to the cloud and back again.

    The partnership will see the Chocolate Factory’s Kubernetes container orchestrator and Istio microservices manager integrated with Switchzilla’s hyperconverged HyperFlex platform.

    Kubernetes has all but become the de facto standard for container-wrangling and cloud-native application dev, as even Docker now supports it. Google’s made sure its own cloud runs it very well.

    Like other hyperconverged products, HyperFlex makes a virtue of behaving a lot like a public cloud, albeit with a smaller pool of physical resources. Extending to the G-cloud adds scalability and the chance to stretch applications from one’s own premises into the cloud without having to re-tool anything.

  37. Tomi Engdahl says:

    First steps in integration of Windows and Linux Containers in OpenShift
    https://developers.redhat.com/blog/2017/10/23/first-steps-integration-windows-linux-containers-openshift/?sc_cid=7016000000127ECAAY

    This allows a true bi-modal IT technical implementation by combining the strength of both platforms into one cluster.

  38. Tomi Engdahl says:

    How to make the case for Kubernetes
    https://enterprisersproject.com/article/2017/10/how-make-case-kubernetes?sc_cid=7016000000127ECAAY

    Need to convince people in your organization that orchestration tools like Kubernetes make sense for managing containers and microservices? We break it down

  39. Tomi Engdahl says:

    Introducing CRI-O 1.0
    https://www.redhat.com/en/blog/introducing-cri-o-10?sc_cid=7016000000127ECAAY

    Last year, the Kubernetes project introduced its Container Runtime Interface (CRI) — a plugin interface that gives kubelet (a cluster node agent used to create pods and start containers) the ability to use different OCI-compliant container runtimes, without needing to recompile Kubernetes. Building on that work, the CRI-O project (originally known as OCID) is ready to provide a lightweight runtime for Kubernetes.

    So what does this really mean?

  40. Tomi Engdahl says:

    ‘Lambda and serverless is one of the worst forms of proprietary lock-in we’ve ever seen in the history of humanity’
    CoreOS on AWS, Kubernetes, and more
    https://www.theregister.co.uk/2017/11/06/coreos_kubernetes_v_world/

    Toward the end of this month, CoreOS CEO Alex Polvi expects Amazon will introduce a managed Kubernetes service at its AWS re:Invent event.

    If so – CoreOS CTO Brandon Philips cites some Kubernetes bug reports from Amazon as evidence – it will be an admission of what most people focused on software containers already know: that Kubernetes has become the industry standard for container orchestration.

    After Docker’s announcement last month that it will support Kubernetes in its enterprise product, Amazon is the largest major cloud vendor that hasn’t yet made a serious commitment to the Google-spawned open-source project. It did however tip its hand by joining the Cloud Native Computing Foundation, which oversees Kubernetes, in August.

    “Kubernetes has clearly won the space,” said Polvi during lunch with The Register and other tech press at its San Francisco, California, headquarters.

    Polvi and Philips anticipate a Kubernetes colonization race, as enterprise vendors scramble to create the management layer for running containerized IT infrastructure.

    CoreOS is already on its way, with its Tectonic enterprise Kubernetes platform. So is Red Hat, with OpenShift. Google has GKE. Microsoft has AKS. IBM is offering its Bluemix, er, Cloud Container Service. Pivotal has PKS. Oracle has teamed with CoreOS. Cloud Foundry has Cloud Foundry Container Runtime. Cisco too has thrown its hat into the ring through a Google partnership. And the list goes on

    “What Kubernetes really solves is how do you run a ton of different applications with a consistent model,” said Polvi. “That consistency is what allows a company with 20,000 applications to have a small operations team running it all. Essentially you have software running these applications instead of humans doing it.”

    Polvi said the plan for CoreOS is to offer a path toward more automated IT operations on Kubernetes.

    “When the value of the software we’re selling you is the automated operations instead of the functionality of the code itself, like the traditional proprietary IP side of things, it means we’re aligned with open source,” Polvi said. “It means we can take upstream Prometheus and we want that to be as big and popular as possible, so that drives more demand for our code that runs your code.”

    Cannibalized by containers

    That may sound a bit like automation offered by the likes of configuration management toolmakers Puppet and Chef, but Polvi and Philips see those tools operating at a lower level: deploying apps. And containerization, they contend, is replacing that.

    “In the past, people hooked up Puppet or Chef to the CI/CD pipelines of their app and now they’re hooking up the Kubernetes APIs for the CI/CD pipelines to deploy a new version or for testing,” said Philips.

    Polvi described Puppet and Chef as languages to tell a computer how to run infrastructure. They have advantages for some operations teams and there’s no reason people can’t keep using them, he said. “But I think those companies need to keep a close eye on this [container-focused] world because a lot of the functionality is being replaced,” he added.

    That’s a better state of affairs than the platform-as-a-service (PaaS) market. “I think PaaS is dead,” said Polvi. “That’s why you see OpenShift and Cloud Foundry and everyone pivoting to Kubernetes. What’s going to happen is PaaS will be reborn as serverless on the other side of the Kubernetes transition.”

    “Serverless is going on its own right now but the enterprise application of serverless will happen in the post-Kubernetes deployment phase of things,” he said.

    The problem with PaaS, as Polvi put it, is that it’s too restrictive and not broad enough. “It was never the entire way the company did business,” he said. “Kubernetes fixes that.”

    That doesn’t mean Polvi is a fan. “Lambda and serverless is one of the worst forms of proprietary lock-in that we’ve ever seen in the history of humanity,” said Polvi, only partly in jest, referring to the most widely used serverless offering, AWS Lambda. “It’s seriously as bad as it gets.”

    He elaborated: “It’s code that tied not just to hardware – which we’ve seen before – but to a data center, you can’t even get the hardware yourself. And that hardware is now custom fabbed for the cloud providers with dark fiber that runs all around the world, just for them. So literally the application you write will never get the performance or responsiveness or the ability to be ported somewhere else without having the deployment footprint of Amazon.”

    That, Polvi says, is why the open-source community has to provide alternatives.

  41. Tomi Engdahl says:

    Docker Authentication with Keycloak
    https://developers.redhat.com/blog/2017/10/31/docker-authentication-keycloak/?sc_cid=7016000000127ECAAY

    Need to lock down your Docker registry? Keycloak has you covered.

    As of version 3.2.0, Keycloak has the ability to act as an “authorization service” for Docker authentication. This means that the Keycloak IDP server can perform identity validation and token issuance when a Docker registry requires authentication.
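
    From the client side nothing changes: a plain docker login against the protected registry (hostname hypothetical) triggers the token exchange with Keycloak behind the scenes:

    $ docker login registry.example.com
    $ docker pull registry.example.com/myteam/myimage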

  42. Tomi Engdahl says:

    Getting started with Kubernetes
    https://opensource.com/article/17/11/getting-started-kubernetes?sc_cid=7016000000127ECAAY

    Learn the basics of using the open source container management system with this easy tutorial.
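
    The first commands in any such tutorial look roughly like this (deployment name and image are arbitrary):

    $ kubectl run hello --image=nginx --port=80         # create a deployment managing one pod
    $ kubectl get pods                                  # watch the pod come up
    $ kubectl expose deployment hello --type=NodePort   # make it reachable from outside the cluster
    $ kubectl delete deployment,service hello           # clean up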

  43. Tomi Engdahl says:

    Containers and microservices complicate cloud-native security
    http://www.theserverside.com/feature/Containers-and-microservices-complicate-cloud-native-security?utm_campaign=Black%20Duck%20Press&utm_content=60709505&utm_medium=social&utm_source=facebook

    There’s not much new in the world of malicious hackers raiding online software. Most attacks follow the same basic approach, and software developers are leaving their applications open to being blindsided in the most benign and boring of ways. Developing applications with microservices and containers may be a modern approach to software design, but traditional software flaws still remain a problem when addressing cloud-native security.

  44. Tomi Engdahl says:

    How to deploy Kubernetes on the Raspberry Pi
    https://opensource.com/article/17/3/kubernetes-raspberry-pi?sc_cid=7016000000127ECAAY

    In a few steps, set up your Raspberry Pi with Kubernetes using Weave Net.

