Linux at 25: Why It Flourished While Others Fizzled – IEEE Spectrum

http://spectrum.ieee.org/computing/software/linux-at-25-why-it-flourished-while-others-fizzled

Linux was started 25 years ago in Finland by Linus Torvalds. The article looks at the history of Linux and the reasons why it started a revolution. Today Linux rules on servers, smartphones, and embedded systems.

After that you should also read the Linux at 25: Q&A With Linus Torvalds article, where the creator of the open-source operating system talks about its past, present, and future. Linus Torvalds created the original core of the Linux operating system in 1991 as a computer science student at the University of Helsinki in Finland.

1 Comment

  1. Tomi Engdahl says:

    Linux is so grown up, it’s ready for marriage with containers
    Beats dating virtualisation, but – oh – the rules
    http://www.theregister.co.uk/2016/04/07/containers_and_linux/

    Linux is all grown up. It has nothing left to prove. There’s never been a year of the Linux desktop and there probably never will be, but it runs on the majority of the world’s servers. It never took over the desktop; it did an end-run around it: there are more Linux-based client devices accessing those servers than there are Windows boxes.

    Linux Foundation boss Jim Zemlin puts it this way: “It’s in literally billions of devices. Linux is the native development platform for every SOC. Freescale, Qualcomm, Intel, MIPS: Linux is the immediate choice. It’s the de facto platform. It’s the client of the Internet.”

    Linux is big business, supported by pretty much everyone – even Microsoft. Open source has won, but it did it by finding the niches that fit it best – and the biggest of these is on the millions of servers that power the Web. Linux is what runs the cloud, and the cloud is big business now.

    But VMs are expensive. Not in terms of money – although they can be – but in resources and complexity. Whole-system virtualisation is a special kind of emulator: under one host OS, you start another, guest OS. Everything is duplicated: the whole OS is there twice, and the copy that does the work runs on virtual (in other words pretend, emulated) hardware, with the performance overhead that implies. Plus, of course, the guest OS has to boot up like a normal one, so starting VMs takes time.
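
    To make the duplication concrete: a whole-system VM needs a complete disk image carrying its own kernel, init system and userland, plus emulated hardware to run on. A minimal sketch in Python of booting such a guest under QEMU/KVM (the image name, memory size and CPU count are illustrative assumptions, not details from the text above):

        import subprocess

        # Boot a complete guest OS. The guest carries its own kernel and
        # userland inside the disk image and sees emulated (or KVM-assisted)
        # hardware rather than the host's real devices.
        subprocess.run([
            "qemu-system-x86_64",
            "-enable-kvm",                                   # hardware-assisted virtualisation, if available
            "-m", "2048",                                    # 2 GB of guest RAM, carved out of the host
            "-smp", "2",                                     # two virtual CPUs
            "-drive", "file=guest-disk.qcow2,format=qcow2",  # hypothetical guest disk image
        ], check=True)

    Everything after that command is a normal OS boot: firmware, bootloader, kernel, init. That boot sequence is exactly the start-up cost being described here.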

    Which is what has led one wag to comment: “Hypervisors are the living proof of operating system’s incompetence.”

    Fighting words! What do they mean, incompetence? Well, here are a few examples.

    The kernel of your operating system of choice doesn’t scale well to tens of cores or terabytes of NUMA RAM? No problem: partition the machine, run multiple copies in optimally sized VMs.

    Your operating system isn’t very reliable? Or you need multiple versions, or specific app versions on the operating system? No problem. VMs give you full remote management, because the hardware is virtual. You can run lots of copies in a failover cluster – and that applies to the host hardware, too. VMs on a failed host can be auto-migrated to another.

    Make no mistake, virtualisation is a fantastic tool that has enabled a revolution in IT. There are tons of excellent reasons for using it, which in particular fit extremely well in the world of long-lived VMs holding elaborately configured OSs which someone needs to maintain. It enables great features, like migrating a live running VM from one host to another. It facilitates software-defined networking, simplifying network design. If you have stateful servers, full of data and config, VMs are just what you need.

    And in that world, proprietary code rules: Windows Server and VMware, and increasingly, Hyper-V.

    But it’s less ideal if you’re an internet-centric business, and your main concern is quick, scalable farms of small, mostly-stateless servers holding microservices built out of FOSS tools and technologies. No licences to worry about – it’s all free anyway. Spin up new instances as needed and destroy them when they’re no longer needed.

    Each instance is automatically configured with Puppet or Ansible, and they all run the same Linux distro – whatever your techies prefer, which probably means Ubuntu for most, Debian for the hardcore and CentOS for those committed to the RPM side of the fence.

    In this world, KVM and Xen are the big players, with stands and talks at events such as LinuxCon devoted to them. Free hypervisors for free operating systems – but the same drawbacks apply.

    And the reason everyone is talking about containers is that they solve most of these issues. If your kernel scales well and all your workloads are on the same kernel anyway, then containers offer the isolation and scalability features of VMs without most of the overheads.
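
    Under the hood, that isolation comes from kernel features such as namespaces and cgroups rather than from emulated hardware. A minimal sketch, assuming a Linux host and root privileges (the CLONE_NEWUTS value is from the kernel headers; the hostname is just a placeholder):

        import ctypes
        import os
        import socket

        CLONE_NEWUTS = 0x04000000  # flag for a private UTS (hostname) namespace
        libc = ctypes.CDLL("libc.so.6", use_errno=True)

        pid = os.fork()
        if pid == 0:
            # Child: detach from the host's UTS namespace (needs CAP_SYS_ADMIN)...
            if libc.unshare(CLONE_NEWUTS) != 0:
                raise OSError(ctypes.get_errno(), "unshare failed")
            # ...then change the hostname. Only this process sees the new name;
            # no second kernel was booted, just a separate view of the same one.
            socket.sethostname("container-demo")
            print("inside :", socket.gethostname())
            os._exit(0)

        os.waitpid(pid, 0)
        print("outside:", socket.gethostname())  # host hostname is unchanged

    Real container runtimes combine several such namespaces (PID, mount, network, user) with cgroups for resource limits, which is why a container can start in milliseconds instead of booting a guest OS.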

    We talked about how they work in 2011, but back then, Linux containers were still fairly new and crude.

    Since then, though, one product has galvanised the development of Linux containers: Docker.
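
    As a rough illustration of why Docker changed the picture, its Python SDK (the docker package) reduces “spin up an instance, use it, throw it away” to a few lines. The image tag and command below are placeholders, not anything from the text above:

        import docker  # pip install docker

        client = docker.from_env()

        # Start a throwaway container. There is no guest OS to boot, so the
        # process comes up almost immediately on the shared host kernel.
        container = client.containers.run(
            "ubuntu:16.04", "echo hello from a container", detach=True)

        container.wait()                  # let the short-lived command finish
        print(container.logs().decode())  # -> hello from a container

        container.remove(force=True)      # destroy it when it is no longer needed

    The same create-and-destroy cycle with a full VM would mean provisioning a disk image and waiting for a whole OS to boot and shut down.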

    None of this means the end of “traditional” virtualisation. Containers are great for microservices, but at least in their current incarnations, they’re less ideal for existing complex server workloads.
