Why Stores “Penalize” You For Not Using Your Gift Card Within a Year

I used to work for a retailer, and we issued gift certificates – these days, they’d be preloaded gift cards. Except back then, certificates never had an expiration date.

We hated our gift certificates. And the reason why might surprise you. Today, a lot of retailers start charging fees after a gift card has been unused for a year. Most folks see that as a “penalty” for not using the card, and assume that the motive is sheer corporate greed.

Not exactly.

While I’m not saying it’s any more fair, the reason has to do with accounting. A gift card isn’t a sale by the retailer to you; it’s a “tender exchange.” Meaning, they’ve taken cash, but they can’t count it as revenue because they’ve issued a debt instrument (the card) to you. That card represents a liability until it’s expired, cancelled, or fulfilled – and the accountants have to account for it every single year. It makes the books look funny, too, and a big enough liability like that can make it tough to sell a company, because a potential buyer might look at the outstanding debt vs. the company’s ability to fulfill it and get tweaky.

The fees aren’t so much a penalty as they are an incentive. The merchant really really really wants you to use the card so (a) they can finally count that money as revenue and (b) get the debt off the books.

Again, not saying it’s any more fair, but the motive isn’t purely corporate greed. Call it corporate laziness, if you prefer, as that’s more accurate.

Obviously, some jurisdictions disallow these no-use charges, and some retailers do a better job of disclosing them up front than others.

Frankly, I’ve been in conversations about this with a few retailers, and if they could come up with a better incentive to make you use the card, they probably would. I know one or two smaller ones have tried offering a small discount on your purchase when you use the card within a year (“turn your $100 card into $110 if you use it now!”); those typically don’t have huge success because the discount isn’t usually easy to communicate (people like to buy cards with fancy designs on them, not marketing text).

Keep in mind that a lot of retailers hope your gift card will go to someone who isn’t already a regular customer, thereby earning them the chance to bring in a new repeat customer. So they don’t, in most cases, want to penalize the recipient… they want them to bloody show up and spend the money, already.

BTW – one reason the first folks started charging those “no-use” fees (which are often a few dollars per year) is because there was once talk about letting a consumer cancel a gift card, and get their money back with interest (!!!). Treating the card as a loan instrument, in other words. In that scenario, charging the no-use fee was intended to make the card look less like a loan to regulators, and more like a carrying service. That loan thing never happened anyplace that I’m aware of, but once the fees were on the books, stores obviously felt inclined to leave them there.

And yeah, I’m sure they don’t hate the income. Remember, I live in Vegas, where thousands of dollars in unredeemed slot machine tickets fly out of town every week. Those expire after a couple of weeks, and we get to keep the cash. We dump it into the pool at Luxor, and all the locals get together once a month and swim in it, while chuckling madly.

Anyway… not saying it’s better or worse, but thought you might appreciate knowing the backstory.

5 Points Each if You Know These People (Throwback Thursday)

Can you name all four people in these photos?

Yeah, when I was high school-aged, I used to help run some of our local Star Trek conventions, such as Sci-Con, Beach Trek, and others. You definitely want to ask me about Tasha Yar’s underwear sometime. At a bar. You’re buying.

The Myth of the Homogeneous Environment

For as long as I can remember, organizations have tried to achieve the Holy Grail of technology: a homogeneous environment. More on the desktop side, perhaps, but having one-and-only-one of something in the datacenter was always seen as desirable, too.

You’re probably familiar with the alleged benefits. The big one was “cheaper to support,” on the theory that only maintaining one of something is cheaper. You only train for one thing, you maintain a set of tools for one thing, and so on. You can support things faster, because you only have to keep one set of rules and issues and whatnot in your head.

Some organizations took homogeneity pretty far, trying to lock down their environments as much as possible to prevent drift, variation, and chaos. As a sidebar, I know that lock-down can be useful for security, compliance, and other purposes; I’m not addressing those. I’m talking about homogeneous desktops solely for the sake of sameness, and for no other benefit. I worked in more than a few shops where that was the goal, long before ‘compliance’ was even a concern.

The problem is that very few shops of any size ever actually achieved a fully homogeneous environment. There was always a set of edge cases – Macs for the design department, wacky printers, executives who insisted on a different computer brand because it was available in black instead of beige (this happened to me; we seriously considered spray-painting a Dell).

The common-sense benefits of homogeneity were so… well, obvious, that most of us just went right down that path. We were then hit by the career-switcher rush in the early and mid 1990s, when a lot of folks from other fields started moving into IT. Some of those “switchers” brought very little IT experience, and so it was an effort for them to be effective in supporting one technology, let alone dozens… and that reinforced the “benefit” of a homogeneous environment. In other words, it wasn’t cheaper to support, it was just the only thing we could support, because we were working with relative newcomers who had smaller skill sets.

Unfortunately, it was all a pack of lies.

I’ve never been able to find a credible study that showed it was cheaper to support a homogeneous environment than a heterogeneous one. The “obvious, common sense” argument for homogeneity didn’t play out very well except in the very smallest environments. Believe me, I have had a tremendous problem coming to grips with this, because the argument for homogeneity is just so… well, so duh. I mean, of course it’s cheaper to support one thing instead of twenty things, right?

Apparently not. You see, most organizations already spend so much more than they should on per-unit (desktop or server) support that the “extra” costs of supporting another OS become almost meaningless.

Look, if you’re a single person, then the argument works. “I know how to support Windows, and I know it well. Ask me to support Linux, and I’m going to be a lot slower, and you’re not going to like that. I’ll miss little things, because I don’t know Linux.” Except that attitude doesn’t actually scale to organizational levels. First, we could always hire a Linux person. It turns out that, in most organizations, if you right-sized the skill mix of your team, you’d get about the same size team and be able to support whatever you wanted. Yeah, dragging 1000 Macs into an all-Windows environment would be disruptive, but if you brought in 1000 new clients you’d have to bring in new support staff anyway, so they might as well know Mac.

BTW, I’m not suggesting all operating systems are equal when it comes to manageability. I firmly believe Windows is a far superior operating system when it comes to “being managed by an IT team.” The homogeneous argument is used in all-Windows shops, too – it’s why companies want “all XP” or “all Windows 7” with no variation. Once you acknowledge that Windows is a damn good enterprise client OS, why stick with just one version? The homogeneity argument. Anyway…

I recently worked with a couple of customers on a fairly extensive study that compared large (5,000+ seat) departments, some of whom were homogeneous on the desktop, and others who, for operational reasons, were mixed-OS (the mix was often Windows versions, not necessarily non-Windows machines). Support cost difference between them? Zero.

We did find a massive difference between the homogeneous environments and the non-homogeneous ones. Let’s call them the “Samers” (all running one client OS of a single version), and the “Differs” (running a mix of operating systems and versions). That difference: The Differs could incorporate new technologies into their environment massively faster, with less disruption. They were more flexible. Having accepted the need to be Differs, they’d put tools, processes, and training in place to accommodate that need. Their IT folks were generally fearless of new technology; the Samers tended to argue against new technology as much as possible, holding to the “one” thing they were used to.

Take a population of cats, inbreed them enough, and you’ll start getting sick cats. Let them mingle with other populations, and you get genetic diversity that makes them stronger, healthier, and all that. It’s well-known – but it turns out the concept applies to IT as well.

IT teams that thrive on technical diversity are, quite simply, better IT teams. Natural selection ensures that those teams don’t include many less-than-talented people, because you have to be smart just to get by every single day. They spend more time formalizing processes, patterns, and tools that help them manage a diverse environment – so when the CIO wants to bring in some wacky new thing to support a business need, they just shrug and get on it.

This isn’t, of course, saying that everyone who works in a Samer environment is stupid and lazy – but Samer environments allow you to be so, if you’re of a mind. A lot of Samer environments can be so unchallenging (which is the point of homogeny, remember) that the really talented folks get bored and leave before long. It’s tougher to “keep up” in a Samer environment, because you’re just not exposed to much. You’re challenged less.

Differ environments also, I believe, have fewer IT-related political issues. They tend to concentrate more on using technology the way it was meant to be used, because they’re already challenged by the diversity they have to manage. That means technology isn’t held up (as much) by the squishiness of fiefdoms and other political crap that takes place in corporations.

When working with Differs, I find that the end users are often a bit more flexible, too. One company, for example, ditched mapped drives a decade ago. They stood up a DFS tree and made everyone memorize friendly UNCs instead of drive letters. Everyone complained for a minute, and then moved on. Why did they do it? They’d brought in a bunch of Unix boxes to do graphics rendering, and those machines don’t really do mapped drives the way Windows does (drive letters have, after all, been ridiculous for almost 20 years now). So rather than maintaining two sets of standards and processes, they changed the processes to accommodate the different technologies. They didn’t struggle to jury-rig drive mappings onto Unix – they just walked away from what was, to them, an outdated and single-platform concept. Both Windows and Unix treat UNCs the same, so they went that way.
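The swap that company made can be sketched as a simple translation table. This is a minimal illustration only – the DFS namespace paths and the drive letters below are hypothetical, not the company’s actual layout:

```python
# Sketch of a drive-letter-to-DFS migration map. The \\corp\dfs
# namespace is invented for illustration; real DFS roots vary by site.
DRIVE_MAP = {
    "S:": r"\\corp\dfs\shared",
    "P:": r"\\corp\dfs\projects",
    "H:": r"\\corp\dfs\home",
}

def to_unc(path: str) -> str:
    """Translate an old mapped-drive path to its friendly UNC."""
    drive, _, rest = path.partition("\\")
    root = DRIVE_MAP.get(drive.upper())
    if root is None:
        return path  # already a UNC, or an unmapped local drive
    return root + ("\\" + rest if rest else "")

print(to_unc(r"P:\renders\scene42"))  # \\corp\dfs\projects\renders\scene42
```

The point of the exercise: once everything is expressed as a UNC, Windows and Unix clients can resolve the same name, and the drive-letter indirection simply disappears.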

So, what kind of environment do you work in – Samer or Differ? And what kind would you prefer to work in, if you could work anyplace? And how many of your “technology” problems would go away if you worked in an environment where processes, concepts, and users could change to better fit the way the technology works?

Here’s Why YOU Should be Blogging

On a recent PowerScripting Podcast episode, I brought up the topic of PowerShell Desired State Configuration, and pointed out that I thought folks should be learning this important technology. There was a bit of a debate – some folks disagree, of course. But putting that aside, at least a few folks agreed with me and set out to learn the technology.

Jacob Benson was one of them, and he’s been blogging – daily – about his learning experience.

It aptly demonstrates why I think everyone should have a blog – even if it’s a private one that nobody else sees, though I really think anyone working in IT should have a public one. Writing something down reinforces it in your head. The act of writing forces you to organize your thoughts; Benson’s been doing it chronologically, which is fine. He’s documented his failures, too, which is going to prove useful to someone else in the future, because he also documents his fixes.

You know that you truly know something when you can teach it to someone else – and forcing yourself to write down what you’ve learned proves to you that you’ve learned it that well. Blog selfishly: make every entry a reminder of what you’ve learned, and you’ll always have a reference to go back to. If you’re at work, blog there, too. Use SharePoint or something to document problems you’ve solved in an informal fashion; that can supplement help desk ticket notes and serve as a long-term reference for others in your environment.

And you do have time. Remember that those who forget history are doomed to repeat it, and that’s never been more true than in the IT industry, where I see people constantly making the same mistakes over and over. If you don’t document history, everyone forgets it. You can’t afford not to.

Sure, other people don’t read. I get it. I’m constantly re-answering questions that I’ve already answered in a book or something. But that’s often because people aren’t used to other people writing stuff down, so they don’t bother to look. You can help cure that. When you “re-answer” a question, politely refer the person to whatever writing you’ve done on the subject, so they start getting used to looking for those answers.

Don’t like to write? Fine – start a YouTube “blog” instead. Get a screen recording application, a decent microphone, and record your learning experiences. They don’t need to be pretty – they need to be helpful. 

We live in an age of unprecedented ability to document and retain information forever. What are you doing to help build the knowledge base?

Off-Topic: My RV Years

This is my personal blog; the amount of business-related stuff you see in here simply tells you what my life tends to look like :). I work a lot.

But not all the time. From sometime in 2001 until sometime in 2003 – almost three years – I lived in an RV. A 40-foot fifth wheel, towed by a dually truck (starting with a Chevy gas and then moving to a GMC diesel). In those years, I worked a good bit – but I did so whilst traveling. We stayed in one place for anywhere from a week to six months, depending.

Being who I am and doing what I do, writing a book about it seemed in order. And so my partner and I sat down and reminisced, and pulled together a book – which you’re welcome to read. It’s mostly funny – there were some odd times, let me tell you. Some of these are still the stories I tell around the bar.

It’s definitely a bit of an outsider’s tale, at least at first – we kind of went into the trip with a set of expectations and concerns. We finished with a completely different view of the country we live in, and its people, and it isn’t – shock of all shocks – the one you typically see on CNN or Fox News or MSNBC or whoever. In fact, the use of the word “zoo” in the title was very much because we expected to be touring and “looking in,” when in fact that didn’t always turn out to be the case at all.

Anyway, this is completely a personal post – but I’ve made a lot of friends who’ve been agitating to read this, and so I figured I might as well put it up. If you’re mainly interested in my technology doings, you can safely ignore this ;). If you’ve mainly been interested in my technology doings, but you’re thinking, “hey, I’ll read it,” just know that – like you – I’m a different person “at work” than “at home,” and this reflects that. Be forewarned.

ZooBook (PDF – note that this is the original formatting for Lulu.com, which is why it’s an odd page size.)

Personal Rant: In Defense of Barbie

I’ve no idea if Barbie is a good or bad role model for young girls. She certainly has had a lot of careers, and I can only hope she’s earned equal pay for them. Yeah, she has a physically impossible body shape. Know why? She’s a plastic doll and she has to be able to fit into doll clothes. Which are made from full-size fabrics, by the way – and fabric doesn’t work the same at Barbie sizes.

But if Barbie has given your little kid a bad body image, that’s your fault. How the heck did you raise a kid to believe that a plastic freakin’ doll was an appropriate thing to admire? I mean, nobody’s blaming SpongeBob Squarepants for giving kids a bad body image. I never felt bad about myself after looking at a Skeletor doll. Sorry, action figure.



Just finished last video of 90-video, 30+ hour @CBTNuggets series on #PowerShell v2/3/4. Yup, covers every version. Give it 24-48 hours to upload and go through approvals, then check it out. http://www.cbtnuggets.com/it-training-videos/course/cbtn_pwrshl_master.

Want to Bring Me in for a PowerShell Class?

I’ve had a few folks, over the years, bemoan the fact that I’m not holding a class in their area. Fact is, public classes involve a lot of financial risk, and I’m simply not a skilled marketer. That makes it really hard for me to run public classes anywhere – let alone in some of the more far-flung locations I’ve been asked to visit.

But I’d still love to do classes – and I think I might have an idea for making everyone happy.

First, figure out a venue near you, or near a large population center near you. “Venue” can simply be a hotel meeting space that holds 15-20 people. We can do a “bring your own laptop” class, and don’t need to rent a classroom that has computers.

Second, get in touch with me. We’ll work out dates, estimate travel expenses, and come up with a rate for you. Know that I tend to book 6-8 months out, so you do need to be planning ahead. I am not available “in a couple of weeks.” I never am. Plan for a 5-day class – I can provide you with a detailed outline.

Third, now that you have a place in mind and a total class cost, start an IndieGoGo campaign. I’ll run classes of up to 20 students (we can do more, if we discuss it in advance). I’ll help you figure out a minimum contribution, and we can set up tiers with some nice spiffs (pay a bit extra, get a full set of signed books in addition to class, that kind of thing). BTW, for US domestic classes, I find that 7 students at $2500/ea is about break-even; internationally, it’s closer to 12 people at that rate depending on the country. We can work out a reasonable rate, too – in some countries, training centers charge the equivalent of $5,000/student for my week-long class, because that works out to a customary fee in that country. Point being, it’s a “do-able” number of people in most cases.

Then you and I market the class together. The nice thing is, if the class fills to a minimum level, we run it. If it doesn’t, everyone gets their money back, and we don’t run it. No financial risk for anyone. You’ll have to work hard on getting folks to sign up for the class (I can provide some ideas), and together we’ll try to make it happen.
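The fills-or-refunds logic above is easy to see in miniature. A quick sketch using the post’s rough US-domestic numbers (the function and variable names are mine, just for illustration):

```python
# Sketch of the crowdfunded-class decision described above: hit the
# break-even seat count and the class runs; miss it and everyone is
# refunded, so nobody carries financial risk.
def class_decision(seats_pledged, price_per_seat, break_even_seats):
    revenue = seats_pledged * price_per_seat
    if seats_pledged >= break_even_seats:
        return ("run", revenue)
    return ("refund", 0)  # pledges returned; no one is out of pocket

# US domestic break-even per the post: ~7 seats at $2,500 each.
print(class_decision(7, 2500, 7))  # ('run', 17500)
print(class_decision(5, 2500, 7))  # ('refund', 0)
```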

So if you’d like to have me out for the best damn PowerShell class of your life, get in touch. Let’s try and make it happen.


How Microsoft Could Kill the Client

I’ve recently felt that Microsoft wasn’t investing as heavily in the Windows client OS as they were in the server. I know a lot happens under the hood that isn’t immediately visible, but at the same time, apart from the Windows 8 massive UI flip, it seemed to me that the client OS just wasn’t going anywhere.

I’ve also been a bit (pleasantly) surprised at Microsoft’s embrace of non-Windows operating systems, especially when it comes to mobile. Office on my iPhone! Mac subscriptions for Office 365! It’s as if these products were told, “look, don’t rely on the Windows monopoly to make you your money – go make it wherever you can find it.” A great decision, but not identical to what the company’s best known for.

Then I’ll see technologies like Desired State Configuration come along, which are being engineered almost from a purely server-focused point of view. It occurred to me that Microsoft’s new “cloud first” engineering, wherein they build features for Azure and other cloud-based services first, and later migrate those features to our on-premises products, might have an interesting implication for the client. After all, Azure isn’t used to run Windows 8. It can, but that’s not its big deal.

So I started wondering, “what would a world look like in which Microsoft owned the back-end, and didn’t really care about the client?” I came up with a pretty plausible scenario.

This is going to take some explaining.

First, What We’ve Done Already

The client has had attempts made on its life in the past. “Thin client” was basically a way of saying, “the client doesn’t matter; we’ll run it all in the back-end.” Thin client as it has been done, however, has two problems: first, it doesn’t work well for disconnected machines like laptops on airplanes; second, it really just moved the computing power from the desk to the datacenter, without really changing what was happening. In other words, we couldn’t reliably get better computing density with thin client. So it tends to be used for niche applications within an environment.

Density is the big deal, there. Remote Desktop Services (RDS) is great, but it can’t run every app. Virtual Desktop Infrastructure (VDI) can be great, but it ends up using more-expensive computing resources to do the same thing, mainly because now you’ve not only got to run the desktop, you’ve got to emulate the hardware as well. More on why what’s coming is so different, a bit further down.

Add a Drawbridge

That’s why the Microsoft Research Drawbridge project is so fascinating to me. I’ve taken a whack at explaining this before, but let me try at a higher level. This is necessarily going to involve some oversimplification, but for this discussion we’re concerned about the net result more than how it’s accomplished.

Traditional computers of all kinds have had a single consistent low-level API that interfaces with hardware: the BIOS. Linux, Unix, Windows, and Mac all run on essentially the same hardware, because they’re all programmed to talk to the same super-low-level APIs. Those operating systems all take wildly divergent approaches from there, but they all have a single common goal: to make it easier for developers to write applications.

Look, let’s be honest. Nobody cares much about what we call an OS. We care about what’s running on the OS. We want email servers, Web servers, database servers, word processors, spreadsheets, and whatnot. Yes, the user interface differs, but UI is really just another application running atop the OS. There’s no reason a Linux build couldn’t look exactly like Windows – and folks have made ones that come really close. So we need to define some new terminology:

  • The OS is the stuff we don’t actually care about, like how bits get on and off of disk, in and out of memory, and so on.
  • The personality is the stuff we do care about, including application frameworks, APIs, UI, and so forth.

Today’s operating systems combine both OS and personality: when you install Windows, you not only get the low-level stuff, but also the visuals, the frameworks, the APIs, and all that. Under the hood, there’s actually somewhat of a separation between the two. For example, most of the personality is provided by the 100k+ Win32 APIs, which themselves talk to the lower-level don’t-care-but-need-to-have-it bits.

Drawbridge is an attempt to more firmly separate the two. As an OS, you can think of it as a kind of souped-up BIOS firmware package. It knows how to talk to hardware, and it might know how to do things like authenticate you and hold auth tickets in memory. It could probably talk to the network and perform other important “I don’t care how you do it, just get it done” tasks. It then exposes those capabilities as a set of services using a standardized API – just like the BIOS of today exposes its services via an API. Frankly, most developers probably wouldn’t want to write to that low-level of an API, any more than they do today.

So atop that core OS, you build personalities. Your Windows personality might include all of the Win32 APIs, reprogrammed to talk to the Drawbridge OS instead of to whatever’s underneath them today. You could make a Linux build that did the same thing. When an application runs, it simply links to the personalities it was built for.

A modern Web server provides a good example of this. Think of the Web server as the low-level OS, exposing its capabilities through APIs like FastCGI. On top of that, you could add personalities like PHP, Python, or ASP.NET. Developers would then build apps on those. When you needed to “run” a page like Users.php, the Web server loads up the PHP “personality” and then runs the Web page. Drawbridge is the same idea, only on a larger scale. Linux processes run next to Windows ones, without the need for a traditional virtual machine. Yes, it would be a specialized version of Linux or Windows, but there’s no reason the Drawbridge APIs couldn’t be made public, so that all “personalities” could be written for it. Provided Drawbridge stayed low-level enough, everyone could probably agree on core services it would provide to upper-level personality stacks. And frankly, from Microsoft’s perspective, it wouldn’t matter terribly much if Red Hat or Apple got on board; Microsoft could write personalities for whatever API stacks their customers wanted. Nothing stopping MS from refactoring, say, CentOS, right?
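Here’s a toy model of that layering, just to make the split concrete. Everything below is invented for illustration – none of these class or method names are Drawbridge’s actual API – but it shows two “personalities” with different idioms sharing one don’t-care-how base layer:

```python
# Toy model of the OS/personality split described above.
class LibraryOS:
    """The low-level 'don't care how' layer: storage, network, auth."""
    def __init__(self):
        self._blocks = {}
    def write_blocks(self, key, data):
        self._blocks[key] = data
    def read_blocks(self, key):
        return self._blocks.get(key)

class Win32Personality:
    """App-facing API in Windows idiom; case-insensitive paths."""
    def __init__(self, base):
        self.base = base
    def WriteFile(self, path, data):
        self.base.write_blocks(path.lower(), data)
    def ReadFile(self, path):
        return self.base.read_blocks(path.lower())

class PosixPersonality:
    """The same services in Unix idiom; case-sensitive paths."""
    def __init__(self, base):
        self.base = base
    def write(self, path, data):
        self.base.write_blocks(path, data)
    def read(self, path):
        return self.base.read_blocks(path)

# Two personalities running side by side atop one library OS:
base = LibraryOS()
win = Win32Personality(base)
nix = PosixPersonality(base)
win.WriteFile(r"C:\Temp\out.txt", b"hello")
nix.write("/tmp/out.txt", b"world")
print(win.ReadFile(r"C:\TEMP\OUT.TXT"))  # b'hello' (case-insensitive)
print(nix.read("/tmp/out.txt"))          # b'world'
```

Notice that the base layer never cares which idiom the caller used – that’s the whole bet: the semantics apps care about live entirely in the personality.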

Talk About Flexible

One service a Drawbridge OS could provide is the ability to skim off the user interface of any running process, serialize that into a data stream, and send it to a remote client. This is basically what Remote Desktop Protocol (RDP) does today. After all, at a low level, application data eventually has to make it to a device driver for display on the screen. Intercept at that level, and send the “drawing instructions” to a remote client, and let the remote client draw the user interface instead of the machine running the process. That’s exactly how RDP works, in fact (and so do ICA and other remote-control protocols). Sound and other services could be done the same way. After all, they all end up interacting with hardware, and if the low-level OS is in control of the hardware, it could simply send the instructions to remote hardware.
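A toy version of that “skim off the drawing instructions” idea: instead of rasterizing locally, serialize each draw call into a stream that a remote client replays. The command vocabulary below is invented for illustration – real RDP is vastly richer – but the intercept-serialize-replay shape is the same:

```python
import json

# Toy remoting layer: draw calls become serialized instructions that
# a thin client could replay, instead of pixels rendered locally.
class RemoteDisplay:
    def __init__(self):
        self.stream = []
    def draw_rect(self, x, y, w, h, color):
        self._emit({"op": "rect", "x": x, "y": y, "w": w, "h": h, "color": color})
    def draw_text(self, x, y, text):
        self._emit({"op": "text", "x": x, "y": y, "text": text})
    def _emit(self, cmd):
        self.stream.append(json.dumps(cmd))  # would go over the wire

def replay(stream):
    """The thin client's side: decode and 'draw' each instruction."""
    return [json.loads(cmd)["op"] for cmd in stream]

display = RemoteDisplay()
display.draw_rect(0, 0, 800, 600, "white")
display.draw_text(10, 10, "Hello from a remote process")
print(replay(display.stream))  # ['rect', 'text']
```

The process emitting the commands never knows (or cares) whether the stream lands on a local GPU or a tablet three time zones away.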

That means individual processes become less tied to the hardware they’re running on. Just as we migrate VMs today, we might migrate processes tomorrow. API stacks like Win32 might need to be modified to understand that the “user profile” wasn’t a set of local folders, but was rather a data store on a SAN someplace – or maybe even spread across several places. We have the beginnings of that today with the folder redirection stuff, right? The APIs take care of finding the profile’s physical location, so that apps just ask for a “Documents” folder and get what they want with no worry.
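Folder redirection already works like a tiny resolver: the app asks for a logical folder, and the API hands back whatever physical location happens to be configured. A minimal sketch (the paths and server names are made up):

```python
# Minimal folder-redirection resolver, as described above. The logical
# names mirror the "Documents" example; the physical paths are invented.
REDIRECTIONS = {
    "Documents": r"\\san01\profiles\jdoe\Documents",
    "Desktop":   r"\\san01\profiles\jdoe\Desktop",
}

def known_folder(name, local_root=r"C:\Users\jdoe"):
    """Apps ask for a logical folder; the API decides where it lives."""
    return REDIRECTIONS.get(name, local_root + "\\" + name)

print(known_folder("Documents"))  # \\san01\profiles\jdoe\Documents
print(known_folder("Downloads"))  # C:\Users\jdoe\Downloads
```

Swap the dictionary for “wherever your migrated process’s profile store lives today” and you have the profile-follows-the-process idea in miniature.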

Think about it: you might have some global application directory where users could fire up the apps they needed. Office applications run from a set of servers in your datacenter, while other apps might run in a hosted provider. They’re all real apps, with their UI skimmed off and sent to the user’s computer. This is massively different from VDI, because each user’s activity could be spread across a huge number of physical process hosts; it’s better than RDS, because each process would be more self-contained. You wouldn’t be constrained to a single “OS,” because a single user could run apps from many different “personalities” side by side.

And There Endeth the Client

And at that point, the “client” is pretty much just a screen, keyboard, mouse, and so forth. It doesn’t necessarily run an “OS.” It’s thicker than today’s thin clients, but not as thick as today’s client operating systems. The client needs to know how to get a list of applications, how to authenticate the user, and a few other basic tasks. From Microsoft’s perspective, it might as well be iOS as Windows – wouldn’t matter, because neither client OS would really be contributing much in the way of functionality. The personalities would still be important, but only admins and developers would worry about those. “Bring Your Own Device” suddenly becomes a lot less scary.

And what about disconnected clients, like a laptop on an airplane? Well, with a capable enough device – think “laptop” instead of “smartphone” – you’d simply run the small, low-level “library OS” on the client. You’d migrate the user’s processes to that host, perhaps migrating over some profile data to local storage. Logically, not that different from what some VDI schemes do today, except that, again, you’re moving processes. The underlying Drawbridge OS provides security and control (that’s an inherent part of its design, if you read up on it), but you’re not running anything like the full Windows OS of today. And you could still run multiple different personalities side by side. This is so similar to a mashup of today’s VDI and stuff like App-V – but without all the caveats those imposed, if it’s done right (and all evidence in the past couple of years is that Drawbridge is being done right).

Microsoft’s focus would doubtless shift over to “use our personalities,” right? That way they maintain their Windows lock-in? Well, maybe, maybe not. Personality API stacks could become less important. As a developer, you’d simply pick the one you were most familiar with and that best suited the task at hand. I could see a proliferation of task-specific personalities, each smaller and more specialized than today’s more general-purpose client operating systems. One “personality” might simply be the equivalent of a Web browser, for example, capable of running HTML+JS applications. To the user, apps could all look the same. It wouldn’t be like starting up a Linux VM, and a Windows VM, and a Mac OS X VM – the apps would all run next to each other, and likely have better interaction.

Imagine every decently equipped computer being able to run every client operating system ever built, side-by-side, without the overhead that today’s hypervisors impose. When you can have all the clients, and their applications can interact (through the low-level OS) as needed, then which client you choose doesn’t matter. You choose “all.”

But Years Away…

I think it’s a fascinating possibility. It’s obviously years away, and I’m doubtful it’d look exactly like this. But it’s an interesting strategy, isn’t it? If they went through with it, Microsoft could focus on controlling the back end – an area where they’ve obviously been making massive investments in the past few years. The “desktop” goes away, and the client just becomes a delivery mechanism. It’s thin-client computing all over again, but in a way that could actually make sense. The “Windows vs OS X vs Linux” argument would become kind of meaningless, because you could have it all.

Administrators wouldn’t have to worry about desktop management. There wouldn’t be anything to manage. That’s very much unlike today’s VDI approach, which simply moves the desktop without markedly changing how it’s managed. Many of today’s apps would run unaltered, provided the underlying personality API stack looked the same to the app. That’s what MS Research has done with Drawbridge, in fact – refactored enough Win32 APIs to get Office running on it.

…or Closer Than You Think

In fact, Drawbridge might be closer. In early 2013, Microsoft said they were moving ahead with implementing Drawbridge on Azure. Now that makes a buttload of sense. Remember, when Azure first launched, you were meant to run web sites on it, not virtual machines. Problem is, bigger implementations needed the “full control over the machine” that a VM offers and that a mere web site lacks. But if Drawbridge was a base OS in Azure… wow.

You could run any application as a process. Sure, MS might need to provide the personalities. Windows would be straightforward and obvious; they could choose to do Linux personalities if they wanted. They’d get much better application density per host without the overhead of emulating hardware and multiple low-level OSs, meaning they could have more competitive pricing than VM-hosting services. That’d lead to on-premises Drawbridge, giving you the ability to migrate individual processes from your datacenter to the cloud.

Let’s Do Some Comparison

I feel compelled to contrast the Drawbridge approach with VDI and RDS, because both of those have been, at one time, strong contenders in the “thin client” space… contenders that have, so far, seen only fairly limited implementations.

Let’s tackle VDI first, because it’s never, ever, ever been “thin client” or “no client.” You still have a completely thick client, just running someplace else. You also have to have a thin client to receive the UI. That thick client is the thickest possible client, in fact – it’s not only a full client OS and applications, but also emulated client hardware. VDI doesn’t even do a fantastic job of getting applications “on every device,” because you’re just remoting into a VM. If you’ve ever tried to use Windows 7 via RDP on an iPad, you know how non-compelling it is. I know VDI has some value in certain scenarios where relocating the hardware is really what you’re after – college labs, kiosks, and so forth come to mind – but it was never a play to minimize the client.

With RDS, you’re still running a full, thick OS+personality. You get minimal sandboxing between applications, something that’s vexed Terminal Services admins for years. Some apps simply won’t run. A significant problem with RDS was that you were only skimming the GUI off an entire OS session. It took successive generations to get USB redirection and other things to work well. Of course, what Microsoft learned making all that happen will benefit Drawbridge: all of that redirection can be implemented in Drawbridge, at the low-level OS that’s actually touching the hardware. In other words, when a process says, “hey, I need to get to the USB port,” you don’t have to hack the OS to redirect that connection. The OS is the layer that was going to pass the data off to the hardware USB; there’s no reason it can’t simply direct that traffic elsewhere. In fact, servers could potentially need a lot less hardware, since processes could seamlessly use the hardware of the client machine that initiated the process in the first place. “Redirection of hardware signaling” at the super-low-level OS layer would be a core part of this.
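To make the redirection idea concrete, here’s a minimal sketch of what it means for the low-level OS to own all hardware routing. Everything here is hypothetical (the `HostOS` class, `register_device`, and the handler names are invented for illustration, not real Drawbridge code) – the point is simply that when every hardware call funnels through one layer, redirecting a device to the client is just a different handler, with no in-guest hacking:

```python
# Conceptual sketch only: a low-level OS that every hardware call passes
# through. Swapping a handler redirects the device without the process
# (or its library OS) ever knowing.

class HostOS:
    def __init__(self):
        self.routes = {}  # device name -> handler callable

    def register_device(self, name, handler):
        # Registering a new handler for an existing device re-routes it.
        self.routes[name] = handler

    def device_call(self, name, payload):
        # Single choke point for all hardware access.
        return self.routes[name](payload)

def local_usb(payload):
    # Talks to the server's own USB hardware.
    return f"local-usb:{payload}"

def remote_usb(payload):
    # Forwards the call to the client machine that launched the process.
    return f"sent-to-client:{payload}"

host = HostOS()
host.register_device("usb", local_usb)
host.register_device("usb", remote_usb)  # redirect, app untouched
result = host.device_call("usb", "scan")
```

The process asks for “the USB port” the same way in both cases; only the routing table changed.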

A Bit Deeper

Drawbridge is philosophically a lot like a traditional VM. After all, the hypervisor intercepts hardware calls inside the VM and mediates that traffic to the physical hardware. That’s often done through synthetic device drivers, since it all has to happen inside the faked-out VM. Drawbridge proposes the abstraction at the thread (process) level, rather than at the virtual CPU level. It utilizes I/O streams, not virtual device drivers. In other words, when an application says, “display this on the monitor,” that call falls through whatever API stack the application is written on (say, .NET) until it eventually reaches Drawbridge. Instead of drawing pixels on the screen via a device driver, Drawbridge… sends the traffic elsewhere, to a client machine that can draw it instead.
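A toy illustration of that stream-based bottom layer, with all names invented for the sketch (`LibraryOS`, `draw_text`, and the stream standing in for a connection to the client): the “display” call never touches a device driver; it bottoms out in a plain stream write, and that stream can point anywhere:

```python
import io

class LibraryOS:
    """Stands in for the 'personality' (e.g. a refactored Windows)."""
    def __init__(self, display_stream):
        # The personality is handed an I/O stream, not a device driver.
        self.display = display_stream

    def draw_text(self, text):
        # A real driver would rasterize pixels; here the call simply
        # falls through to a stream write.
        self.display.write(text + "\n")

# Imagine this is a socket back to the client machine.
remote_display = io.StringIO()
win_personality = LibraryOS(remote_display)
win_personality.draw_text("Hello from a picoprocess")
```

Point the stream at local hardware and you have a conventional desktop; point it at a remote client and you have “the traffic sent elsewhere” – same application code either way.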

Microsoft is currently saying Drawbridge has fewer than 50 downcalls and around 3 upcalls, meaning its API consists of around 50 total things. That’s low-level indeed. It’s actually close to the number of API calls in a BIOS, which is what operating systems are already used to running on… which is probably why it only took a couple of years to get a partially-refactored Win7 running on it.

There’s a great video if you’d like to learn a bit more. Here’s a great quote:

While Drawbridge can run many possible library OSes, a key contribution of Drawbridge is a version of Windows that has been enlightened to run within a single Drawbridge picoprocess. The Drawbridge Windows library OS consists of a user-mode NT kernel–informally referred to as NTUM–which runs within the picoprocess. NTUM provides the same NT API as the traditional NT kernel that runs on bare hardware and in hardware VMs, but is much smaller as it uses the higher-level abstractions exposed by the Drawbridge ABI. In addition to NTUM, Drawbridge includes a version of the Win32 subsystem that runs as a user-mode library within the picoprocess.

Upon the base services of NTUM and the user-mode Win32 subsystem, Drawbridge can run many of the DLLs and services from the hardware-based versions of Windows. As a result, the Drawbridge prototype can run large classes of Windows desktop and server applications with no modifications to the applications.

In other words, it’s still the Windows kernel, without the bits that talk to hardware, because Drawbridge handles that. So it’s smaller. But it runs Windows apps without modification, because it’s still Windows.

Think of the Compatibility… and How it Kills the Client

Remember XP Mode in Windows 7? This basically gives any machine the capability to run any OS that has been refactored, and Microsoft could certainly go back and do older versions of Windows. After all, there are only about 50 API calls they’d have to seek out and modify, right? So again, the client stops mattering. Have an app that needs XP? Fine. You’re not really “running XP;” you’re running a process that calls on some code from XP. Not quite the same thing. You could also run, in parallel, Windows 7, Windows Vista, and Windows 8 apps. And Linux apps, potentially. All without needing to spin up resource-intensive VMs.
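The side-by-side picture can be sketched in a few lines. This is purely illustrative – the personality names and the `launch` function are made up, and in real Drawbridge terms `launch` would create a picoprocess containing the app plus its library OS – but it shows the shape of the idea: each app is just a process paired with whichever personality it needs, no VMs involved:

```python
# Hypothetical personality catalog; names are invented for this sketch.
PERSONALITIES = {
    "winxp": "XP library OS",
    "win7":  "Windows 7 library OS",
    "win8":  "Windows 8 library OS",
}

def launch(app_name, personality):
    # Stand-in for creating a picoprocess: pair the app with its
    # library OS and hand both to the low-level OS as one process.
    if personality not in PERSONALITIES:
        raise ValueError(f"no such personality: {personality}")
    return (app_name, PERSONALITIES[personality])

running = [
    launch("legacy_app.exe", "winxp"),
    launch("office.exe", "win7"),
    launch("metro_app.exe", "win8"),
]
# Three apps, three "operating systems", zero VMs.
```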

So you start to get to a world where Microsoft has a lot less incentive to continually revise the client OS… and where such revisions are more the form of a new .NET Framework version, not an entire OS. With core services provided by a smaller, more lightweight “bottom layer OS,” the upper-level APIs become less fragile. Drawbridge itself would likely need fewer updates than a full OS, too – after all, how often do you have to flash the BIOS on your servers? More than zero, but less than every Tuesday, I bet.


Again, I’ve indulged in some oversimplifications, and offered some projections based on Microsoft’s direction rather than their results to date. For example, Drawbridge itself isn’t currently a standalone OS. Rather, it’s been implemented to run – experimentally – on Windows and on Barrelfish, a new from-scratch OS being created by Microsoft Research. That’s partially because Drawbridge is, as yet, very new; you can certainly see how factoring some of what we currently call “OS” into it would make it the OS, with the “library OSs” (what I’ve been calling “personalities”) atop it. Today, Microsoft describes Drawbridge as a “picoprocess,” meaning it’s built from a traditional OS process (like one running on Windows or Barrelfish or something else), but with the “traditional OS services removed,” meaning Drawbridge provides those services instead. The technical details get a bit esoteric past a certain point; for this article I was aiming more for high-level vision.

Oh, and the Hardware

Speaking of Barrelfish, and of Drawbridge, we might take a moment to think about what’s happening to hardware to make both of those things so compelling. Intel has been hard at work developing new busses and technologies that can more or less separate all of our traditional resource elements: CPU, RAM, network, and storage.

We’ve always had storage separated, because it’s always used an independent bus. From short-run copper SCSI arrays it really hasn’t been a massive leap to independent SANs reached by running the SCSI protocol over Ethernet – that is, iSCSI. Now, storage is a “black box,” and you add to it as needed.

Intel wants to do the same for processors and memory, two resources that have always been more tightly coupled. Busses like Light Peak will eventually offer enough bandwidth that processors and memory can be more physically disconnected. Slap a controller between them – a la the front-end controller of a SAN – and you can have giant pools of CPUs talking to a giant black box of RAM. Dynamically assigning RAM from CPU to CPU becomes easy. New incoming tasks are passed to a front-end controller, which selects a CPU with some free time. Migrating processes becomes instant, because you basically just assign a process’ memory to another CPU. You don’t move anything per se; you just have another CPU start “working” that section of the memory farm.
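A loose sketch of that “memory farm” scheduling idea – the `MemoryFarmController` class and its methods are entirely hypothetical, meant only to show why migration becomes instant: no memory moves anywhere; the controller just records that a different CPU now works the process’s region of the shared pool:

```python
class MemoryFarmController:
    """Hypothetical front-end controller for a pool of CPUs and RAM."""
    def __init__(self):
        self.assignments = {}  # process id -> cpu id working its memory

    def schedule(self, pid, cpu):
        # New incoming task: hand it to a CPU with free time.
        self.assignments[pid] = cpu

    def migrate(self, pid, new_cpu):
        # "Migration" copies nothing -- the same memory region is
        # simply worked by a different CPU from now on.
        self.assignments[pid] = new_cpu

ctrl = MemoryFarmController()
ctrl.schedule("proc-42", "cpu-0")
ctrl.migrate("proc-42", "cpu-7")  # instant: just a reassignment
```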

Technologies like Barrelfish are designed to deal with that level of scalability in more novel ways than past operating systems. Approaches like Drawbridge, which start to isolate processes rather than entire VMs, make for more granular workload assignment. Instead of needing a CPU that can run an entire VM just to boot your copy of Minesweeper, the system need only find one capable of running that process and its library OS.

Fascinating, Ain’t It?

I just think all this stuff is incredibly cool. You know, for a long time, IT has felt a little boring. The technical details have gotten more… detailed… but we haven’t had a revolution in a long time. We’re nosing up to the edge of a revolution.

The cloud wasn’t really a revolution, it was an acknowledgement of everywhere-connectivity and of purchasing scale. We put some nice management layers on it, but it’s not, for me, a revolution. Whittling things down from the VM to a process, and then building out processing farms consisting of racks of CPUs connected to racks of RAM connected to racks of SSD drives… that’s getting sexy. It’s new, it’s different, and I can’t wait to see where it goes.