As the news media continues to report on the meltdown of all global tech (sigh), there’s one postmortem takeaway for you, the professional IT person. And it’s a simple question:
How would I feel if that had happened to my organization, and we’d had some critical files encrypted and ransomed?
Keep in mind that WannaCry went after attached drives as well as local ones, which means a single compromised desktop or laptop could potentially have ransomed huge numbers of files located on file shares.
So… how would you feel? Score yourself against these archetypical reactions:
- “It only got a few files. We sent a shutdown order to the affected machine and quarantined it at the network level. We already restored the files.” Awesome. You don’t deserve to keep your job. You deserve a better one. And your organization deserves you. Bravo.
- “Meh. I mean, restoring the files from backup is a PITA, but it’d only take a few hours.” Congratulations. You should keep your job, and your organization should stay in business. But y’all didn’t prevent or mitigate the attack, so we’re holding off on recommending a promotion. You can do better!
- “I feel pretty bad. We still have some files we can’t recover from backup, and an unknown number of machines may still have the malware. Thank goodness for the kill switch.” Well, losing your job is probably harsh, but your organization needs to do a lot better. Start being an agitator for change in your organization. Use this as a wake-up call for them – because next time might be worse, and if it costs enough it’ll result in some personnel damage.
- “We still have ransomed machines and we’ve probably lost some of our data for good.” You might look at careers outside the IT industry. The damage here was entirely preventable. And if you knew this was coming, but your organization just didn’t listen – why do you still work there?
This was a relatively simple piece of malware, as these things go. Its damage was entirely limited to locking you out of your data – a condition that could have come about through hardware failure just as easily. Any organization that was seriously affected by this in any kind of long-term way isn’t doing IT. It’s hardly shocking that the UK’s National Health Service has become the poster child for this attack; in my experience, healthcare IT is a bunch of PhDs running around telling the IT people how to (incorrectly) do their jobs, and not listening to technical advice because they’re the ones with a dozen advanced degrees. I was in healthcare IT for a hot minute, and decided I didn’t loathe myself enough to stick around.
I know you know the following, but here’s the spectrum of things that should have been in place, starting from the most obvious up to the most effective:
- Backups. I mean, for holy heck’s sake, right? Have we not been going on about backups for decades now? Just… c’mon, man. If your organization isn’t doing backups correctly, you need to think about your job options. Stat.
- Patches. Just OMFG. I can’t even. “Oh, but we weren’t sure if a patch would break our crazy 15-year-old line-of-business application.” Hope that’s working out well for you. If this is your organization, you need to ask yourself what you’ve done wrong in life to deserve this.
- Upgrades. Yup, Windows 10 wasn’t vulnerable to this one. That upgrade is sure looking inexpensive in retrospect, right? I hope someone, somewhere compares the cost of the upgrade to the damages they’ve suffered or ransoms they’ve paid.
- Firewalls. I don’t know how often everyone has to be told that most attacks come from within. This malware was a perfect example: sure, it originated outside the network, but once inside – a single compromise easily achieved through social engineering – it spread and did its damage almost unrestricted, because nobody firewalls off the internal network. There’s little reason most client computers should be talking to each other via SMB, so why allow it? (There’s a quick audit sketch of that exposure after this list.) Heck, there’s little reason most client computers should be talking to each other at all. Why allow it?
- IDS. Intrusion Detection Systems can monitor for patterns of suspicious behavior, like a single client machine suddenly hieing off and modifying thousands of files at once (a crude sketch of that pattern also follows this list). At the very least, they can alert you; at best, they can trigger a quarantine of the machine.
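On the firewall point, here’s a minimal sketch of the kind of quick audit anyone could run, assuming a flat /24 client subnet (the addresses below are made up): every neighbor that answers on TCP 445 is a machine this worm could have reached directly over SMB.

```python
import socket

# Made-up example subnet; substitute your own client VLAN.
SUBNET = "10.0.42"
SMB_PORT = 445
TIMEOUT = 0.5  # seconds per host; keep it short for a quick sweep

for host_num in range(1, 255):
    host = f"{SUBNET}.{host_num}"
    try:
        # If the TCP handshake on 445 succeeds, SMB is reachable from here.
        with socket.create_connection((host, SMB_PORT), timeout=TIMEOUT):
            print(f"{host}: port 445 open (reachable for SMB)")
    except OSError:
        pass  # closed, filtered, or host not up
```

If that loop lights up with your coworkers’ desktops, that’s the lateral-movement surface this worm feasted on.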
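And on the IDS point, here’s a deliberately crude illustration of the “burst of file modifications” pattern: just a polling loop, with a hypothetical share path and made-up thresholds. A real IDS does this properly and in real time, but the signal it’s looking for is the same.

```python
import os
import time

# Hypothetical share and thresholds; tune for your own environment.
WATCH_ROOT = r"\\fileserver\shared"
WINDOW_SECONDS = 60
ALERT_THRESHOLD = 500  # legitimate churn should be far lower

def modified_since(root, cutoff):
    """Count files under root whose modification time is newer than cutoff."""
    count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                if os.path.getmtime(os.path.join(dirpath, name)) >= cutoff:
                    count += 1
            except OSError:
                continue  # file vanished or is locked; skip it
    return count

while True:
    changed = modified_since(WATCH_ROOT, time.time() - WINDOW_SECONDS)
    if changed >= ALERT_THRESHOLD:
        # A real system would page someone, or better, quarantine the culprit.
        print(f"ALERT: {changed} files changed in the last {WINDOW_SECONDS}s under {WATCH_ROOT}")
    time.sleep(WINDOW_SECONDS)
```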
I’ll tell you why this post is so laden with vitriol. This ransomware situation – it’s angering. It’s frustrating. This – like so many malware attacks recently – was entirely preventable, and utterly subject to mitigation. All we, as an industry, had to be doing were the things we already know we should be doing. And I know that most people reading this agree – it’s your bosses that you probably argue with about all this.
You know how this ends? Heaven help me, with an organization like the European Union issuing regulations. We’ll have an ISO/IEC standard that we all have to follow, coupled with mandatory inspections and verifications. Some “too big to fail” industry (NHS?) will fail, and governments will start stepping in to “ensure” it doesn’t happen again. We’re already seeing the toe-in-the-water start of this with NIST’s Cybersecurity Framework. It’ll make SOX and GLB and HIPAA look like a cakewalk. But I suppose when organizations and IT leaders worldwide so blatantly ignore common sense, the advice of their own expert employees, and the recommendations of every vendor, everywhere, then government intervention is probably the only step left.
I’m curious – if you had been hit with this software, what in your organization would have stopped it (noting that most anti-malware engines didn’t detect it)? Anything at all? (“We run Windows Update” is a perfectly acceptable answer.) How could you have recovered (“we have backups, duh”)? Drop a comment.
And… how frustrated are you that some organizations continue to take zero steps to ensure their own safety? “Nobody would want to hack us” is no longer a thing – this most recent exploit wasn’t targeted, it was just broadcast. Thank Heaven for the kill switch in the code – although the next variant will omit that, you can be sure.