
I’ve been getting a lot of questions recently around configuration management, and I wanted to jump in with something more comprehensive. A lot of us mean different things when we say “configuration management,” too, so I wanted to try to address the whole spectrum holistically.

First, let’s understand that configuration management applies very differently to servers and to endpoints. It’s tempting to think of them all as just computers, but they’re not; they serve different missions, run different workloads, and have very different notions of what “configuration” even means.

In both of those silos, configuration management usually consists of numerous discrete sub-tasks, and honestly those tasks are themselves so important and distinct that they almost render the term configuration management useless. So let’s forget the term entirely, and focus on the actual outcomes that we expect from configuration management.

Configuration Inventorying and Asset Management

These two are distinct, but they’re similar, and tooling often conflates them. Essentially, this is the task of “figuring out what we have” and keeping track of changes over time. For some organizations, this is merely hardware inventorying; for many more, it includes details on installed software, configuration settings, and the like.

In the desktop world, System Center Configuration Manager is pretty much “it” as far as Microsoft is concerned, and SCCM does a good job of inventorying tens of thousands of desktops. Yes, it’s a huge, expensive, and complex product, but the job itself is huge, complex, and expensive. Aggregating data from huge numbers of nodes is hard to scale, and SCCM’s about the best anybody’s come up with. Extensions allowing it to support Linux and macOS exist, and for many Windows-centric organizations, it’s pretty much still the gold standard.

In the server world, the situation is a bit murkier. “Server world” these days has to acknowledge the existence of both on-prem and in-cloud assets; if you’re purely on-prem, then you’re probably using SCCM for inventories and treating servers like desktops. That’s fine, but only to a point. As organizations become more agile, and things like VMs and containers proliferate, you have to start re-thinking. Given that a container may have a lifespan measured in seconds, do you inventory it? Or do you just inventory the underlying host? If software inventory is a concern, inventorying the host won’t tell you how many instances of a given piece of software you have running in containers. Nobody’s really solved this; most vendors are “solving” it through licensing terms that basically exempt per-container concerns once the underlying host is properly licensed.

But even without containers, as you extend into the cloud and increase the number of VMs you run, inventorying becomes harder. Another way to approach this is to simply not care: if you take a declarative-configuration perspective, you basically start treating VMs like you would containers, and you worry more about what the declaration requires than about what any given instance looks like. More on that in a minute. It’s this haziness that explains why Microsoft (and indeed most other vendors in the space) don’t have good answers: the question itself isn’t clear.

Software Deployment

Again, in the Windows desktop world, SCCM is pretty much Microsoft’s storyline; similar products from other vendors take essentially the same approach, if you prefer them. Arguably, the SCCM-like approach is — while admittedly complex and expensive — about the best you can do for this complex, expensive task.

Servers, though? SCCM hasn’t ever been hugely popular for software deployment to servers (patches being a separate issue), and as organizations shift to hybrid on-prem/cloud environments, SCCM is even less well-suited. Desired State Configuration is touch-and-go with software deployment, depending on the software you’re deploying. So what’s Microsoft’s answer?

Again, I’m not sure the question is clear. We’re moving past the time when we treated servers exactly like desktops. Take Amazon’s Elastic Beanstalk, for example, which isn’t wholly unlike Azure Resource Manager templates. If you can specify a structured data file (JSON or YAML) that describes what an environment should look like, and you just spin up the environment to look like that to begin with, do you even need software deployment? If you need a new package on a VM, you just add it to the specification file and re-launch the entire environment from scratch.
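To make that concrete, here’s a minimal sketch of the “re-launch from the specification” idea on the Azure side. It assumes the Az PowerShell module; the resource group name and the environment.json template are hypothetical, and this is just one way to express the workflow, not a recommendation:

```powershell
# Sketch: the environment is described entirely by a template file, so
# "deploying new software" means editing the template and rebuilding.
$rg = 'App-Environment-RG'   # hypothetical resource group name

# Tear down the old environment; everything in it is disposable.
Remove-AzResourceGroup -Name $rg -Force

# Rebuild it from the current specification.
New-AzResourceGroup -Name $rg -Location 'eastus'
New-AzResourceGroupDeployment -ResourceGroupName $rg -TemplateFile '.\environment.json'
```

The specific cmdlets matter less than the workflow: the template, not the running VM, is the thing you manage.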

This actually leads into the next one…

Configuration Settings

Group Policy is still Microsoft’s best answer for managing settings on desktops; SCCM hasn’t ever really addressed this space in a cohesive way.

On servers? Well, certainly GPO is an option. These days, the official Microsoft answer is probably Desired State Configuration, although DSC is still a little delicate, requires a pretty heavy lift due to the lack of formal tooling, and can require a lot of programming depending on what you need to configure. Microsoft’s official answer of “use Azure Automation” solves a small portion of the DSC tooling problem, but I think DSC’s architecture isn’t quite aligned to how people are managing servers.
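For what it’s worth, here’s roughly what a small DSC configuration for a server role can look like, using only resources from the in-box PSDesiredStateConfiguration module; the role name and settings are invented for illustration:

```powershell
# Illustrative only: declare what a web server role should look like,
# compile it to a MOF, and let the Local Configuration Manager enforce it.
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
        Service W3SVC {
            Name        = 'W3SVC'
            StartupType = 'Automatic'
            State       = 'Running'
            DependsOn   = '[WindowsFeature]IIS'
        }
    }
}

WebServerBaseline -OutputPath .\Mof         # compile to a MOF document
Start-DscConfiguration -Path .\Mof -Wait -Verbose
```

Simple enough at this scale; the heavy lift shows up when you have dozens of roles and need to version, deploy, and troubleshoot all of those configurations.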

Let me philosophize for a sec.

The mantra of DSC has always been Get/Set/Test, right? You Get the current state to see what it is, Test to see whether it’s what you want it to be, and Set it to what you want it to be. This involves a non-trivial amount of programming, but it’s re-usable programming, meaning you can see a return on investment. It also requires a lot of tooling. A big complaint around DSC is how you maintain repositories of configurations for different server roles, deploy updated configurations, troubleshoot complex configurations, and so on. Conflicts are discovered by the node at apply time, which is about the worst time to discover them in our traditional world.
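If you haven’t written a DSC resource, here’s a rough sketch of that Get/Set/Test contract using the in-box Script resource; the time-zone setting is just a stand-in example:

```powershell
# Rough sketch of the Get/Set/Test contract. The Script resource exposes
# the three phases directly; a full resource implements the same idea in
# Get-/Test-/Set-TargetResource functions (or class methods).
Configuration ExampleSetting {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        Script TimeZoneEastern {
            # Get: report what the current state actually is
            GetScript  = { @{ Result = (Get-TimeZone).Id } }

            # Test: return $true if we're already in the desired state
            TestScript = { (Get-TimeZone).Id -eq 'Eastern Standard Time' }

            # Set: runs only when Test returns $false
            SetScript  = { Set-TimeZone -Id 'Eastern Standard Time' }
        }
    }
}
```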

But our traditional world is based on servers that last a long time. That is, we get something into a state and then we want to keep it there. Get/Set/Test. We need tooling to manage the state of things. We need tooling to send new versions of things to existing elements.

But if you look at where the whole industry is currently heading (and this is industry-wide, not just Microsoft), it’s moving toward smaller, shorter-lived servers. In many cases, we’re not even worried about the server per se; we just specify an operating system choice, some environment variables, and the software we want it to run. You don’t “manage the configuration” in that scenario; you just blow it away all the time and rebuild it to spec.

In a way, and perhaps ironically, we’re starting to treat servers more like we’ve treated desktops. I mean, suppose your work laptop has a problem. Does your help desk spend a lot of time troubleshooting it, or do they just blow it away and reapply the current gold image? Well, that’s basically where we’re headed with servers, although we don’t use an “image” so much as an automated “setup script” a la Elastic Beanstalk or ARM templates. We can do that so much faster with servers (especially in the cloud) than with physical desktops that it’s trivial in terms of work and time. So why bother checking to see whether your server is in its “desired configuration” when you can just blow it up and make a new one that definitely is?

And that, perhaps, is where DSC will have value: less as a “desired state” thing, and more as a “better way to write setup scripts” thing. Spin up a new VM, drop a DSC configuration on it, and let it build itself. This’ll require Microsoft to build a better, faster, multi-threaded LCM, and we’ll see if they can do that or not.
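As a rough sketch of that “setup script” usage, assuming the Az module and Azure’s DSC VM extension (the storage account, resource group, VM name, and extension version here are all hypothetical), you publish the configuration once and then point each newly built VM at it:

```powershell
# Hypothetical names throughout; the point is that the configuration is
# applied once at build time rather than continuously monitored for drift.
Publish-AzVMDscConfiguration -ConfigurationPath '.\WebServerBaseline.ps1' `
    -ResourceGroupName 'App-Environment-RG' -StorageAccountName 'dscartifacts'

Set-AzVMDscExtension -ResourceGroupName 'App-Environment-RG' -VMName 'web01' `
    -ArchiveStorageAccountName 'dscartifacts' `
    -ArchiveBlobName 'WebServerBaseline.ps1.zip' `
    -ConfigurationName 'WebServerBaseline' `
    -Version '2.77'
```

The specifics will change, but the shape of it (build-time configuration rather than ongoing enforcement) is the point.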

But I think the point of this is that, again, the question isn’t clear. We’re in an emerging new philosophy for server management, and if we don’t have the best tools and technologies, it’s because we haven’t really fully grasped the new problem space, yet. We’ve got tools that kinda-sorta work for the old way of management, and are kinda-sorta evolving into the new way, but we don’t fully know what the “new way” is.

I personally find this to be an interesting and exciting time, because watching new things be born is fascinating.

Thoughts? This is a space I’m going to continue thinking about, and I’d love to see your ideas, too. There’s a ton we’ve not even touched on, here, that I think deserves more coverage in the future.

Subscribe to the blog, if you haven’t, so you can keep up with the convo.

4 thoughts on “My View: The State of Configuration Management in the Microsoft World”

  1. Hi Don,

    I think you’re right that there’s a lot changing with the move to the cloud. But I also think it will take at least 10 years until everything on-premises has moved to the cloud, and in some cases it may never happen.

    If a company has its own IT department that can run a service more cheaply than a cloud provider, with availability that’s sufficient for the customer, why change?

    It would be great if someone could provide “a GUI” for DSC. Not because so many admins out there are afraid of the command line, but because many admins have to split their time between pure system administration and everything else…

    Looking forward to reading more on this topic 🙂

    1. Don Jones says:

      Yeah, I think “a GUI for DSC” is the biggest ISV missed opportunity of the decade.

    2. gaelcolas says:

      I’d only argue that modern IT requirements are not just to “run IT services with sufficient availability” anymore, but also to have a strong capacity to change (IT agility, as everything in the business is now related to or dependent on IT, with increasing complexity).
      That’s what companies often overlook, at great cost (i.e., the cost of delay). And bear in mind that cloud is a model, soon to become a commodity, whether on-prem, hosted, or public.

  2. martylichtel says:

    Spot on! I agree with your insights, Don. This is a pretty important thought topic about the future of config management and the differences between workstation and server management. I’ve also been thinking that config management of servers is moving toward more of “just ditch and rebuild” workflow as opposed to making configuration changes in-place. I’ll mention that with regard to Linux extensions for SCCM, Microsoft, at least, is dropping the native support (killing off the SCCM Linux agent) after the v1810 release.

