Don Jones

Tech | Career | Musings

As I’ve written here before, I do believe that the current concept of “operating system,” at least with regard to servers, is on the way out. Not quickly, of course – these things take time – but faster than you might think. So I wanted to take a moment to perform a little thought experiment: what’s the minimum an “operating system” really needs to do? How much of our current “OS” thinking can we just ditch outright?

Interfacing with hardware. This is obviously the classic thing that an OS does, but I think it’s worth reviewing what hardware, exactly, we need to interface with. Ignoring specialty situations, I think we can ditch everything but network adapters. Seriously. There should be no need, on a modern server OS, to create visual displays locally. No need for local typing. No need for mousing. These machines, or virtual machines, should be “headless,” which means all they need to know is how to talk to a network – which would include a SAN for their storage. So, “network interface” will be a thing. I suspect, in fact, that this will be the only variable component of an OS, since we’ll continue to have different network adapter hardware out there. We’ll also need some very simple storage interface, although I suspect for the OS itself that’ll be more or less limited to reading the OS itself off of solid-state storage more akin to firmware than today’s “system drives.” Again, more on that in a sec.

Thread scheduling and memory management. The “kernel” of any OS (heh), this will be where modern operating systems actually compete. I foresee an end to the current kind of virtualization (more on that in a moment), meaning a modern server OS will predominantly host containers, meaning its ability to multitask different processes will remain crucial.

The above would include everything needed to talk to the network, to read the OS itself from storage, and run processes in containers. Honestly, such an OS will be so small that we’ll probably treat it more as firmware than anything else.

So where did the rest of the OS go? In a container. As containers continue to mature, they’ll encompass everything the contained application needs to run aside from network communications (which require hardware). Your app needs to access storage? Fine, include “libstorage” as an application dependency. That library provides an abstraction – an API – between your application and the underlying network that can access the SAN. But wait – pretty much every application needs storage, right? Why not just include that in the OS and save space?

This is what we’ve told ourselves for decades now, building heavier and thicker operating systems, and it’s time to stop. The reason is that not every application needs storage. Not the same kind, anyway. Some applications only need “libsqlserver” to speak to a SQL Server; others need block storage. Some – say, a time server – might not need storage at all. Some will be written against the faster “libstorage2” library. So as many of an application’s needs – dependencies – as possible come with the application and live in its container. The container is not an operating system; it is a collection of software libraries that run on a computer. The OS is the bare minimum necessary to get it all running.
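To make the bundled-dependency idea concrete, here’s a minimal sketch of what a library like “libstorage” might look like from the application’s side. Every name here – `StorageBackend`, the in-memory stub – is invented for illustration; the point is simply that the app codes against an abstraction shipped inside its own container, not against anything the base OS provides.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Hypothetical 'libstorage' API. The application links against
    this abstraction; the implementation ships in the container."""
    @abstractmethod
    def read(self, key: str) -> bytes: ...
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

class InMemorySANStub(StorageBackend):
    """Stand-in for a SAN-backed implementation. A real one would
    speak a block or object protocol over the network interface."""
    def __init__(self):
        self._blocks = {}
    def read(self, key: str) -> bytes:
        return self._blocks[key]
    def write(self, key: str, data: bytes) -> None:
        self._blocks[key] = data

def app_main(storage: StorageBackend) -> bytes:
    # The application never knows (or cares) what's behind the API.
    storage.write("greeting", b"hello from a container")
    return storage.read("greeting")
```

Swapping “libsqlserver” or “libstorage2” for the stub is just a different class behind the same call sites – which is why the library belongs with the app, not in the OS.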

Won’t that “base” OS – that stripped-down thing actually running on the hardware – need a way to access storage, so it can load these containers? Perhaps. But probably not. The containers will probably live in some network repository, and likely be “read” directly from there. So the base OS will need networking, which we’ve already said, and some means of talking to software repositories. That latter bit, which “bootstraps” containers, won’t need to be all that large, but it’ll need to be part of the base OS, yes.
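That bootstrap step can be sketched in a few lines. The repository here is stubbed as a plain dictionary and the manifest format is invented; a real base OS would make a network call to the repository and hand each image to its scheduler.

```python
def fetch_manifest(repo: dict, host_id: str) -> list:
    """Stand-in for a network call asking the container repository
    which images this particular host should run."""
    return repo.get(host_id, [])

def bootstrap(repo: dict, host_id: str) -> list:
    """The base OS's whole job after networking comes up: pull the
    list of containers and start them. Here we just record launches."""
    started = []
    for image in fetch_manifest(repo, host_id):
        started.append(f"running:{image}")
    return started

# A hypothetical repository entry for one host:
repo = {"web-07": ["app:2.1", "libstorage-sidecar:1.0"]}
```

Everything else – what the containers do, where they store data – lives above this layer.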

With so little else to do, this new base OS will focus much more on process separation than even today’s OSs. The main function of the OS will be to make sure containers cannot see or touch one another at all; processes will communicate, when needed, by using message queues and other network-based mechanisms, which can be secured through whatever software is managing those services. All of that eases scalability, by the way, and essentially dumps our current concept of “cluster” for something more flexible and agile. Containers will likely be able to request access to the network, as well, for synchronous communications with other processes, and the base OS will be able to handle that access to the network hardware.
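The isolation-plus-messaging pattern above looks like this in miniature. `queue.Queue` stands in for a network message service such as a broker; the containers themselves would share no memory and communicate only through it.

```python
from queue import Queue

def producer(q: Queue) -> None:
    """One container publishes an event. It knows nothing about who
    consumes it, only the queue's address (here, the object itself)."""
    q.put({"event": "order.created", "id": 42})

def consumer(q: Queue) -> dict:
    """Another container receives the event. The broker, not the OS,
    is where access to this channel would be secured."""
    return q.get(timeout=1)
```

In the architecture the post describes, the base OS only has to guarantee the two processes can’t see each other; the queue service handles everything else.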

Virtualization as we know it will die off, too, save for legacy situations. Why emulate hardware to run a thick operating system when you no longer need a thick operating system, and when most of the “hardware” we emulate today has been turned into a network service?

What about some kind of shell to configure and troubleshoot the base OS? I dunno. We’re not talking about the base OS actually having much to configure. Sure, it’ll need to expose some kind of network-based management, so you can tell it which containers to run and so on, but I imagine much of its “configuration” will simply be delivered at boot time by some expanded kind of DHCP option set. “Welcome to the network, here’s your IP address and all your operating instructions. Have fun.” Most of the “intelligence” of the network – telling servers what to run, when to reboot (so they can refresh their config), and so on – will come from distributed software running in its own container. If the base OS isn’t persistently storing anything locally – and why would it? – then it doesn’t even need to be “secured” in the current sense of the term. Once it loses power, it’s just a generic processor and some RAM. Off its home network, it can’t even access container images or its configuration. It’s exactly the same as it was when you got it from Dell and unboxed it the first time.
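The boot-time handoff might amount to little more than validating a payload like the one below. The field names are invented for illustration, in the spirit of an expanded DHCP option set: an address, a repository to pull from, and the node’s operating instructions.

```python
# Hypothetical payload the network hands a node at boot.
BOOT_PAYLOAD = {
    "ip": "10.0.4.17",
    "repo": "containers.example.internal",
    "run": ["app:2.1"],
    "reboot_after_hours": 168,
}

def apply_boot_config(payload: dict) -> dict:
    """All the 'configuration' the base OS does: check that the
    network gave it enough to operate, then act on it."""
    required = {"ip", "repo", "run"}
    missing = required - payload.keys()
    if missing:
        raise ValueError(f"incomplete boot payload: {missing}")
    return {**payload, "state": "configured"}
```

Nothing here persists locally, which is exactly the point: lose power or leave the network, and the node reverts to a blank box.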

All of this is already happening. It’s happening in stages, but the path is clear.

Categories: Tech

4 thoughts on “A Minimal Server OS”

  1. Vyckou says:

    Great post. I am glad that old fashioned Windows server OS is fading away.
    I have been working with windows servers for 18 years yeah with full blown GUI, bunch of worthless services and process to keep the “administrator” in the comfort zone. Recently I have moved to devops – which means now I am with everything, Linux, windows,.Net,Python, MS SQL, Cassandra and so on. In 6 month I have understood, that Linux servers is much more easier to troubleshoot. They are lightweight. There are no creepy processes which crawling on your file system, or handle complex workstation workloads, just to enable you click around with the mouse. Noise on Linux is bare to minimum if compared to windows. Old fashioned Windows server OS was and still is too heavy. I would not even imagine to run a hell of the load of containers on windows server OS. Nano – possibly as this piece of os has nothing on top, as Linux for last decades. But it will need time to mature. Server has to do the job – keep the focus on its purpose, hosted applications, data and not to make admin or devops happier while logged on- for that you have a laptop 🙂

  2. This is what I thought was going to be the promise of Nano, but that seems to have changed with the abandonment of the Infrastructure role.

    Would the base OS network stack need some sort of firewall for segmentation? Or would that be handled by the containers or network devices?

  3. Dave Hoyla says:

    I believe Microsoft is on track to make the server OS leaner. Project Honolulu from Microsoft is likely a step toward that goal.

