Don Jones

Tech | Career | Musings

Back in the day… way back… a computer took up an entire building. One building, for one computer. Everyone logged onto that same computer and submitted work to it. The computer divvied its time up across the running jobs, and system operators (sysops) monitored that resource allocation. Nobody really knew where the computer “lived,” because they interacted with it by means of terminals.

Welcome back. Only now, we call it “cloud.”

You already know that everything old in IT will eventually become new again. We flirted briefly, and mostly unsuccessfully, with mainframe-like “thin client” computing back in the day. I’d argue that client computing will probably not become highly centralized except in some very limited and specific use cases, primarily because the computing power needed for today’s user experiences is just too great, and doesn’t lend itself to centralization.

But we’re obviously seeing a resurgence in the centralization of more traditional computing workloads, meaning server workloads.

Now, you might argue that a “cloud” consisting of tens of thousands of commodity servers doesn’t a mainframe make. I think I’d disagree. I ran a mainframe in one of my first IT jobs, and it consisted of pools of processor, memory, and storage resources (it didn’t have a network). We could add processors – albeit for wildly insane amounts of money. The fact that today’s “processor pools” are spread across chassis isn’t really a distinction for me. When you manage a cloud, you manage it a lot like a mainframe. You move workloads from one processor pool to another, for example. Each processor pool (server chassis) has a fixed amount of memory that can be divvied up, and it all talks to an essentially infinite supply of more-or-less common storage.
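To make that analogy concrete, here’s a rough sketch, in Python, of the kind of placement decision a cloud (or mainframe) resource manager makes. Everything in it – the chassis names, the capacities, the simple first-fit policy – is a hypothetical illustration, not any particular product’s logic.

# A toy model of mainframe-style placement across "processor pools"
# (server chassis). All names and capacities are made up for illustration.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Pool:            # one chassis: a fixed slice of cores and memory
    name: str
    free_cores: int
    free_memory_gb: int

@dataclass
class Workload:
    name: str
    cores: int
    memory_gb: int

def place(workload: Workload, pools: List[Pool]) -> Optional[Pool]:
    """First-fit placement: put the workload on the first chassis with
    enough spare cores and memory, and reserve that capacity."""
    for pool in pools:
        if (pool.free_cores >= workload.cores
                and pool.free_memory_gb >= workload.memory_gb):
            pool.free_cores -= workload.cores
            pool.free_memory_gb -= workload.memory_gb
            return pool
    return None  # nothing fits; a real cloud would add capacity instead

pools = [Pool("chassis-1", free_cores=8, free_memory_gb=64),
         Pool("chassis-2", free_cores=32, free_memory_gb=256)]

target = place(Workload("payroll-batch", cores=16, memory_gb=128), pools)
print(target.name if target else "no capacity")  # prints: chassis-2

The point isn’t the policy (real schedulers are far cleverer); it’s that the unit of management is a pool of capacity, exactly as it was on the mainframe.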

In the old mainframe days, people didn’t worry much about where their data was stored, or how secure it was. One of the first big successes in “cloud computing” was payroll processing, where companies transmitted tons of personally identifiable information to the “cloud.” Of course, back then it was a lot easier to control and monitor connections to the cloud, since it all ran over private network lines – e.g., dial-up, which was in fact our first use of the “cloud” metaphor.

In the old days, we also didn’t have a lot of legislation acronyms – HIPAA, SOX, GLB, and more (and that’s just in the USA) – to worry about. Today, it’s the legislation as much as anything that makes cloud computing more difficult, because we do have to worry about where data is stored and who has access to it. That isn’t an insurmountable problem, though – it’s just one that takes time to work through.

In the old days, we didn’t worry as much about downtime, either, because those terrifically expensive machines didn’t break down all that much. And when they did, everyone was offline, so you had company in your misery. And not many people were running their entire business and lives on the mainframe, so you got by with whatever bits were “missing” during the outage. Today, availability is a really big deal – but it’s made easier by the fact that we can chuck a huge amount of cheap hardware at the problem, providing enough redundancy to deliver the availability we need.
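The arithmetic behind chucking hardware at the problem is worth seeing once. Assuming, purely for illustration, that one commodity box is up 99% of the time and that boxes fail independently (a big assumption in practice), redundancy compounds quickly:

# Illustrative only: the 99% single-box uptime and independent
# failures are assumptions, not measurements.
single_box_uptime = 0.99

for replicas in (1, 2, 3):
    chance_all_down = (1 - single_box_uptime) ** replicas
    print(f"{replicas} box(es): {1 - chance_all_down:.6%} available")

# 1 box(es): 99.000000% available
# 2 box(es): 99.990000% available
# 3 box(es): 99.999900% available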

Managing today’s mainframe is a lot more complex than yesteryear’s management, because the mainframe is vastly, massively, hugely, OMG-ly larger. It doesn’t have four processor cores; it has millions. But many of the core management concepts and goals are the same: optimize the workloads across the resources. A lot of automation goes into making that optimization happen by itself, which is a big plus over the old days, when I was manually boosting important jobs, pausing others, and reassigning hardware resources on the fly.
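If you never got to play sysop, here’s roughly what that boost-and-pause logic looks like once it’s automated. This is a deliberately toy policy with made-up job names; real schedulers weigh far more than a priority number and a core count.

# A toy version of the optimization loop: run the most important jobs,
# pause whatever no longer fits in the available capacity.

def rebalance(jobs, available_cores):
    """Greedy priority policy: admit jobs from highest priority down,
    pausing any job the remaining cores can't accommodate."""
    running, paused = [], []
    for job in sorted(jobs, key=lambda j: j["priority"], reverse=True):
        if job["cores"] <= available_cores:
            available_cores -= job["cores"]
            running.append(job["name"])
        else:
            paused.append(job["name"])
    return running, paused

jobs = [{"name": "payroll", "priority": 9, "cores": 4},
        {"name": "report-gen", "priority": 3, "cores": 4},
        {"name": "test-suite", "priority": 1, "cores": 2}]

running, paused = rebalance(jobs, available_cores=6)
print(running, paused)  # prints: ['payroll', 'test-suite'] ['report-gen']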

Everything old is new again… for a reason. We didn’t move away from mainframe computing because it was bad. We moved away because we just couldn’t build large enough machines to affordably carry out the workloads we wanted to run – especially graphically intensive client workloads. What we’re seeing now isn’t a total flip back to centralized computing – the fact that we’re not all running our desktops on some cloud-based VDI proves that it isn’t yet a wholly viable model. Instead, we’re intelligently relocating workloads, and we’ve created the technology to build ridiculously cheap, globe-spanning mainframes to handle the “server” side of the equation. In a lot of ways, we’ve solved the original problems we had with mainframes. And sure, today’s “cloud” isn’t a single monolithic machine… but it’s a neat aggregation of smaller machines that can work in much the same way.

Welcome back.
