The WannaCry ransomware worm has spread to 150 countries and infected hundreds of thousands of computers in a matter of days. Healthcare systems have been hit especially hard, with England’s National Health Service (NHS) among the most notable victims. The outbreak is a grave threat to digital healthcare and a wake-up call for app developers. As we assess the damage and shore up our defenses, it’s worth exploring what lessons can be drawn from the largest such outbreak in tech history so far.
What We Know about the WannaCry Outbreak
According to US-CERT Alert (TA17-132A), updated on May 16, 2017:
“The latest version of this ransomware variant, known as WannaCry, WCry, or Wanna Decryptor, was discovered the morning of May 12, 2017, by an independent security researcher and has spread rapidly over several hours, with initial reports beginning around 4:00 AM EDT, May 12, 2017. Open-source reporting indicates a requested ransom of .1781 bitcoins, roughly $300 U.S.”.
Initial reports indicate the attackers behind the WannaCry campaign gained access to enterprise servers by exploiting a critical Windows Server Message Block (SMB) vulnerability. A key feature of the malware is its ability to spread from device to device inside a network once a single machine is infected. Microsoft had already patched the underlying flaw in security bulletin MS17-010, released on March 14, 2017. On May 13, 2017, Microsoft additionally released patches for Windows XP, Windows 8, and Windows Server 2003, an unusual move since those operating systems are no longer officially supported. Media outlets report that the primary initial infection vector is phishing email.
The NSA Has Been Implicated
It is now believed that the exploit code behind this outbreak originated with the NSA, part of a batch of top-secret tools stolen from the agency and leaked online. According to insiders and media accounts, the spy agency has been hoarding zero-day flaws it discovers, hiding them from the vendors who could fix them, in order to use them for U.S. political and strategic advantage. But the NSA did not create WannaCry itself. Rather, the group behind the outbreak repurposed a stolen NSA exploit codenamed ETERNALBLUE, combined it with code for worm-like propagation, and produced the outbreak we now know as WannaCry.
The Outbreak is Not Over Yet
WannaCry’s creators also apparently included a master “kill switch” in the software: the malware continuously checked for the existence of a specific domain that had never been registered. A security researcher discovered this domain name in the malware’s code and registered it. When infected systems in the wild began finding the domain live, the kill switch activated and the spread of the infection slowed to a crawl. Unfortunately, now that this fact has been widely reported, new versions of WannaCry without the kill switch are appearing in the wild.
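The kill-switch logic described above is simple to illustrate. The sketch below is a minimal, hypothetical reconstruction in Python, not the actual malware code: the domain name is a placeholder (the real one is a long pseudorandom string), and the resolver is injectable so the behavior can be demonstrated without network access.

```python
import socket

# Placeholder only; the real WannaCry sentinel domain is a long
# pseudorandom string and is deliberately not reproduced here.
KILL_SWITCH_DOMAIN = "example-killswitch-domain.test"

def domain_is_registered(domain, resolver=socket.gethostbyname):
    """Return True if the domain resolves, i.e., appears to be registered."""
    try:
        resolver(domain)
        return True
    except OSError:
        return False

def should_propagate(domain=KILL_SWITCH_DOMAIN, resolver=socket.gethostbyname):
    # WannaCry halted its spread when the sentinel domain resolved,
    # so registering (sinkholing) the domain acted as a global kill switch.
    return not domain_is_registered(domain, resolver)
```

The key design point is that the check is inverted: as long as the domain did *not* resolve, the worm kept spreading, which is why a single registration by one researcher throttled the outbreak worldwide.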
Why Have So Many Systems Suddenly Been Infected?
The answer to this question is both technical and behavioral. On the technical level, there is the sheer sophistication of the ETERNALBLUE code and its extreme effectiveness at exploiting the SMBv1 vulnerability quietly present in so many Microsoft operating systems. SMB is a network file-sharing protocol that also handles functions like file, directory, and share-access authentication, as well as file and record locking. Add to that the huge installed base of Windows machines running outdated, and in many cases unsupported, operating systems, and the potential for an enormous outbreak already existed.
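Because WannaCry spread over SMB (TCP port 445), one quick step an admin can take is to inventory which hosts expose that port at all. The following is a best-effort sketch, a plain TCP connect test in Python; it only shows whether something is listening on 445, not whether the host is actually vulnerable or unpatched.

```python
import socket

def smb_port_open(host, port=445, timeout=2.0):
    """Best-effort check: is anything listening on the SMB port (TCP 445)?

    A True result means only that the port accepts connections; it says
    nothing about patch level or SMBv1 being enabled.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Hosts that answer on 445 and face untrusted networks are the first candidates for patching (MS17-010) or for disabling SMBv1 outright.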
The second major reason behind WannaCry’s lightning-fast propagation stems from human nature. No matter how many times computer gurus harp on the urgent need for data and system backups, many organizations still do not regularly and effectively back up their data. No matter how diligently security frameworks such as those from HITRUST, SANS, OWASP, NIST, and CIS push IT admins to patch systems and software fully and promptly, patching remains a hit-or-miss affair. And no matter how many breaches and IT disasters are reported in the media, CEOs and boards of directors still resist prioritizing, and funding, data and network protection at the needed levels.
A College Professor and a Behavioral Secret
This is a true story: years ago, a professor of logic at an Eastern U.S. university, we’ll call him Dr. C, gave his class a quiz every Friday. Each Wednesday before, Dr. C would post all the answers to the upcoming quiz on the classroom blackboard and leave them there for a few hours, telling the class that these were, in fact, the actual answers. Some thought Dr. C had to be a fool and expected the entire class to earn straight A’s all term. In fact, his students’ quiz scores fell along roughly the same bell curve of A’s, B’s, C’s, D’s, and F’s as in any class where no answers were posted in advance, skewed only slightly toward higher scores. Even with the answers right in front of them, every time, in advance, most of Dr. C’s students apparently couldn’t be bothered to take notes and act on the information made available.
What Developers Can Learn from This
Developers have nearly all heard the guidance from IT experts and organizations: patch regularly, keep software up to date, install spam filters and antivirus systems, use “least privilege” permissions, tighten access controls, follow secure coding practices, and above all, make and test regular backups of critical data. Yet survey after survey, in virtually every industry, shows that a huge segment of IT admins still fails to implement even these basic steps widely and uniformly.
The same applies, even more so, to IT best practices such as segmenting networks and functions, limiting unnecessary lateral communications, hardening network devices, securing access to infrastructure devices, performing out-of-band network management, validating the integrity of hardware and software, and using two-factor authentication.
Whether you’re a developer writing code, the owner of a digital health startup trying to hit it big, or the CEO of a major company, the lesson here is simple. Some people and companies simply will do everything ethically possible to succeed and rise above the rest, no matter the effort required. Others simply cannot be bothered.
Developers, which class are you in?