Season 3 / Episode 52
Can malware be *too* successful? This is the story of Conficker, one of the most advanced worms in history – and how its success led to its ultimate failure.
Born in Israel in 1975, Ran studied Electrical Engineering at the Technion – Israel Institute of Technology, and worked as an electronics engineer and programmer for several high-tech companies in Israel.
In 2007, he created the popular Israeli podcast, Making History, with over 12 million downloads as of Oct. 2018.
Author of 3 books (all in Hebrew): Perpetuum Mobile: About the history of Perpetual Motion Machines; The Little University of Science: A book about all of Science (well, the important bits, anyway) in bite-sized chunks; Battle of Minds: About the history of computer malware.
Yonatan Striem Amit
CTO, Co-Founder at Cybereason
A manager, a developer, a hacker, and a project manager. Enjoys the challenge of writing complex, elegant solutions, both for the low-level system parts as well as algorithmic intensive systems. Specialties: Windows & Linux Driver Development, Windows Internals, x86 & x64 Reverse Engineering, Network and Application Security, Embedded Development, Machine Learning, Computer Vision
On August 20th, 2008, a Trojan was found running on a server located in South Korea. It was named “Gimmiv.A”. Gimmiv was a RAT – a remote access trojan, which allowed its author control over the infected machine. RATs are rather popular in the malware world, of course – but Gimmiv was noteworthy for one reason: it exploited a zero-day vulnerability in Windows operating systems. Specifically, it targeted a vulnerability in a component called RPC-DCOM, which allows programs running on two machines to communicate with one another. RPC-DCOM was implemented on every PC running any version of Windows XP or earlier. In 2008, this translated to around 800 million machines in total.
And so when experts spotted Gimmiv on that South Korean server, it looked to be a ticking time bomb: a test run, of sorts. And the test was successful. In September 2008, Chinese malware distributors began selling toolkits, for just 37 dollars a pop, that could help hackers exploit the newest RPC-DCOM vulnerability discovered by Gimmiv. Word of the toolkit spread, and demand among hackers worldwide grew so large that, by October 26th, its Chinese distributors were made to give up the exploit code, for free. Suddenly every hacker everywhere had a free template for building a malicious program, to break one of the most powerful and popular Windows features.
In the face of such a threat, Microsoft’s security team made a big decision: on October 23rd, 2008, they published an emergency patch for the RPC-DCOM exploit.
In theory, a patch for a critical vulnerability is a good thing. I mean, why shouldn’t it be? If your roof is leaking, patching the hole is probably a wise move. But in practice, software patches often end up making a security problem worse before making it better. Why? Because when a patch is published, the bad guys can reverse engineer it to figure out what the original vulnerability was, and create new malware that exploits it.
And so when Microsoft published the RPC-DCOM patch, it was entrusting users to proactively protect their systems. In reality, they’d begun a race: between Windows computer owners around the world, and the hackers who now had a blueprint for how to take advantage of them.
And here’s the part of the story where, typically, I’d build up the suspense. I’d say something like: “The race was on: good guys versus bad guys. Will Windows users act on Microsoft’s critical warning, or will the hackers beat them to it?” The music would build with tension, and crescendo at the moment of climax, and then fade out with a nice resolution.
But I can’t do that this time. Because, well, come on! You know what’s going to happen here. There’s no suspense to it. Hundreds of millions of people are not all going to update their operating system software in a timely manner. Microsoft had to know that. Plenty of individuals, companies and governments were late to the patch, and a full 30 percent of worldwide Windows computers hadn’t been updated even months after the fact.
So, in the end, the RPC-DCOM patch achieved two things: helping active users protect their computers against any incoming RPC-DCOM malware, and providing a blueprint to hackers on how to take advantage of the rest. Some security experts could see the train coming. But nobody could have predicted just how massive the impact was about to be.
Christmas week, 2008. Inside the operating room of a hospital in Sheffield, England, a surgery is taking place when, suddenly and without warning, computers shut down. January 9th, 2009. In Manchester, England, traffic fines databases go dark, and citizens’ claims backlog at town hall. Police officers across the city – pulling over reckless drivers, breaking up fights, scoping out suspected criminals – find themselves unable to run basic background checks in their systems. January 15th. At a French Navy air station, all Dassault Rafale fighter planes are grounded. An infection has been detected, and the planes are unable to download their flight plans. Officers must resort to telephone, fax and mail for all communications as they sort out what’s going on.
In the first few weeks of 2009, computer systems belonging to governments, militaries, critical industries, corporations and individuals worldwide all begin to flare up in one massive wave of cyber chaos. Hospital networks in the U.S. and Scotland, healthcare networks in Britain and New Zealand, corporations in India, universities in Utah and Germany, the German army, the British Ministry of Defence. MPs’ computers slow down at London’s Houses of Parliament, and three-quarters of all Royal Navy ships are infected. As CBS’ 60 Minutes prepares a story on the cyber outbreak, their own computers get hit. And it isn’t just PCs that are affected. It’s IoT devices and police body cameras, CAT scanners, MRI machines, and dialysis pumps.
This is Conficker. It will go on to breach over 10 million computers in almost 200 countries, spanning from Iceland to Madagascar, the southernmost tip of Argentina, Yemen, Siberia, every U.S. state, every ocean. Even the most prescient security experts, even the most ambitious hackers, could not quite have predicted this.
An Elegant Worm
The computer worm first named Downadup, later known as Conficker.A, was powerful and elegant in its design. It was only a few hundred lines of code long – the entire file just 35 kilobytes in size. And yet, within those 35 kilobytes, it worked like a Swiss Army knife. We’d refer to Conficker as a “blended threat”, for how it combined the features of many different types of malware into one package. The first of these features was its ability to exploit the RPC-DCOM vulnerability first detected in Gimmiv.A.
Once inside the computer, Conficker downloads a full copy of itself from the infected machine whence it came. It then copies itself to the computer’s “System” folder and deletes any user-created system restore points, in order to prevent you from resetting your computer to a pre-Conficker state. Then it checks the IP address of the machine, and sets up an HTTP server on a random port in order to open a connection with its owner, and its next victim.
And these are just some of Conficker’s many features. It also modifies system registries in order to better hide itself. Best of all: once downloaded to a machine, Conficker patches the RPC-DCOM vulnerability that it exploited to enter. Presumably, its author knew that other hackers would try to write similar worms using the Chinese toolkit, and didn’t want any competition.
On November 20th, 2008, Conficker.A began to spread over the internet. Instead of the couple hundred or so computers Gimmiv breached, Conficker – with its superior spreading ability – took over half a million computers in just over a week’s time. After a month, 1.5 million. Each one was a little operative, reporting back to base at specified intervals, just like Gimmiv before it, ready to take whatever command its owner so wished.
The Conficker Cabal
This was a botnet unlike anything ever seen before. An army, ready and awaiting orders. Nobody yet knew who commanded this army. Even worse: nobody knew what that commander planned to use it for. Money? A larger DDoS attack? Something worse?
Drastic measures had to be taken. Microsoft issued a critical security notice two days after the outbreak, but for many it was already too late. They also posted a $250,000 reward for information that would lead to the arrest of the malware’s author.
With infections reaching into the millions, cyber security experts worldwide came to a consensus: that to fight an army, you need an army. On February 12th, 2009, Microsoft announced a makeshift international cyber defense organization, called the Conficker Working Group. They took on a nickname: the Conficker Cabal. Representatives from Microsoft, ICANN, Symantec, F-Secure, China’s Ministry of Information, Neustar, AOL, and a host of other corporations and research teams banded together towards the singular goal of defeating the Conficker botnet. This was the Avengers of cybersecurity.
Yonatan Striem-Amit, CTO and co-founder of Cybereason, has been a part of the cyber security community since the late 1990’s as a manager, a developer, and a hacker. He remembers this important announcement well.
“[Yonatan] Microsoft, they took a very leading role in that attack, everything from the patch itself through researching and deciding what was going on there. Eventually there was a collaboration from internet registrars all the way through security companies and the network and endpoint side trying to get an understanding of how the malware worked, how it was updating and how to offer their users mitigation.”
Defeating The Worm
To defeat the botnet, cyber security experts knew they’d have to cordon it off from its command-and-control apparatus. Conficker disconnected would be about as effective as a secretary without a phone number. So they set out to find the identifying information associated with the controllers it called back to, and shut them down.
“[Yonatan] So one of the tasks that any malware author needs to make sure they have is being able to control their malware past the initial creation. A very common way to do that – and this is what the first variants of Conficker did – was having them reach out to the internet and get updates to their code, so they can be continuously updated as the author wants them to.
Classically, malware authors would use some – a small set of domains which they control to set up what’s known as command-and-control servers. This creates a vulnerability for the malware authors in which if somebody discovers this is the set of IPs – sorry, the set of domains used by the malware, they could go on their firewall and say: just block these. Effectively rendering the malware, you know, incommunicable. You can no longer communicate with that malware.”
But whoever was behind Conficker knew what they were doing. The researchers downloaded the worm to a friendly machine, to play around with and examine it, and quickly discovered a problem.
Conficker, unlike many other worms of its kind, didn’t call back to a single point of origin. Instead, Conficker would generate a list of 250 seemingly random domain names every day, of which only one would be properly registered by its command center. There was no way of knowing which domain was the right one, before going down the list one by one. To give an analogy: imagine being a pizza delivery person. For each new order, you’re given a list of 250 random home addresses, of which only one is the correct destination.
For a computer program that can run thousands of operations within one second, this kind of task is easy work. For humans, sifting through potentially hundreds of domains, every day, is some kind of torture. Like biking from house to house, holding an oily pizza box that’s long since gone cold.
“[Yonatan] I will give you an example. You could set like the particular day in which the malware is running. Let’s say today’s date. Use that to initialize a function that generates what would appear to be a random set of characters and this could be for example ABQQR7 and then I as the malware author would simply go ahead three, four days in advance and create my domain and register my domain there. I paid them whatever it cost for a new domain and now I have this daily command and control asset that nobody could predict in advance.
The reason to take multiple TLDs has to do with how the internet can come together and organize take-downs. So for example, when the malware author had 10 TLDs – basically 10 different agencies managing this domain registration – in order to arrange a takedown, you need to bring all of them working together at the same time. That of course creates a much more complex process as the defenders are trying to block these – at the registrar level, block this registration ahead of time.”
In order to generate new domain names once a day, the worm needs to be able to follow a schedule. That means it’ll need to keep track of the date and time.
In the sandbox environment they’d set up, the Conficker Cabal researchers went about tinkering with the worm: setting their computer clock forwards one day, in order to see what domains it would seek. Because the domain finding function was not random, but pseudo-random, tricking the worm into thinking it was tomorrow would allow them to see, in fact, what domains it would come up with the next day. So if they could predict what domain names were to come, they could maybe, just maybe, block all of them in advance, and redirect the worm into a trap that would kill it off. Build a piece of code that does this automatically, and suddenly you have an anti-Conficker.
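To make the mechanism concrete, here is a toy sketch of a date-seeded domain generation algorithm in Python. It is not Conficker’s actual algorithm – the hash, the label length and the TLD list here are all illustrative – but it shows the key property the researchers exploited: the output depends only on the date, so feeding the function tomorrow’s date yields tomorrow’s domains.

```python
import hashlib
from datetime import date, timedelta

# Hypothetical TLD list - Conficker's real variants used 5, then 8, then 110 TLDs.
TLDS = ["com", "net", "org", "info", "biz"]

def daily_domains(day: date, count: int = 250) -> list[str]:
    """Derive the day's rendezvous domains from the date alone."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}:{i}".encode()
        label = hashlib.md5(seed).hexdigest()[:10]   # pseudo-random, but repeatable
        domains.append(f"{label}.{TLDS[i % len(TLDS)]}")
    return domains

day = date(2009, 1, 15)
today = daily_domains(day)
tomorrow = daily_domains(day + timedelta(days=1))

assert today == daily_domains(day)   # same date in -> same 250 domains out
assert today != tomorrow             # the list rolls over every day
```

The botmaster only has to register one of the 250 names ahead of time; the defenders, to be safe, have to deal with all of them – the asymmetry the Cabal was fighting.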
The good guys were about to be one step ahead. Or so they thought. Conficker’s author had anticipated this very vulnerability, specially designing the worm not to read the time from its host computer, but from popular websites like Google and Yahoo.
With tricking the worm out of the question, then, the Cabal set about reverse-engineering it: extracting the part of the code in charge of the domain name-generating algorithm, and attaching it to a new piece of friendly software that would display the information in advance. Before long the researchers had a program that could read out Conficker’s upcoming domain names before they’d manifest in reality. This was the breakthrough they’d been waiting for. 2009 was around the corner, and 1.5 million computers worldwide were infected, but things seemed to be turning. The good team had found a cure.
“[Yonatan] It’s definitely an impressive way that the industry came together, to try to block everything from the registrar level – you know, registering in advance so the malware authors couldn’t get their command and control ahead of time – all the way through more organized attempts to distribute the patches and control the spread of the malware.”
But the victory was short-lived.
It was three days before the new year, and a new and improved Conficker was released into the wild.
‘Conficker.B’ had new means of spreading: through shared network folders – brute-forcing login info on password-protected admin accounts in a network – as well as removable drives fitted with AutoRun. AutoRun is the feature that allows your computer to run, say, a DVD upon insertion into your machine, immediately, without further direction. Conficker could do the same, if you plugged an infected USB stick or hard drive into a previously uninfected machine. You wouldn’t even know what hit you.
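The AutoRun mechanism hinged on a small text file, autorun.inf, placed at the root of the removable drive. The sketch below shows the general shape of such a file; the file name and entry point are placeholders, not the worm’s actual ones:

```ini
[autorun]
; Placeholder names - the real worm buried its copy and this file under
; junk entries to evade casual inspection.
open=rundll32.exe payload.dll,EntryPoint
shellexecute=rundll32.exe payload.dll,EntryPoint
; The "action" line made the malicious option look like the normal
; "open this folder" choice in the Windows AutoPlay dialog.
action=Open folder to view files
icon=%SystemRoot%\system32\shell32.dll,4
```

When an unpatched Windows machine mounted the drive, it would read this file and execute the referenced DLL without further user interaction.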
The B variant also had self-defense tools built-in. It disabled antivirus, and Windows updates, and, most importantly, expanded its pseudo-random domain name generation algorithm from using five to using eight top-level web domains. In short, this would make it even more difficult for defenders to trace back the correct one of 250 domains created each day, by further spreading out the problem.
So Conficker.B was better, and more powerful than its predecessor, but it was more than just that. It was also a warning. Previously, it was anybody’s guess who might have created such a worm. It could’ve been a lone hacker out for money, or some teenagers. In 2010 Mark Bowden of The Atlantic magazine spoke with some members of The Conficker Cabal, the most optimistic of whom were speculating that the author might have been a student: somebody who may not have anticipated the consequences of what their worm was now unleashing.
Conficker.B changed all that. It could no longer have been some graduate student, or mid-level hacker behind all this. Why? Not just because Conficker was too well-crafted to be the work of any ordinary hacker. Not just because the hacker had anticipated all of the Cabal’s moves before they even made them. No, the B variant had one particular component which shocked everyone.
Utterly Mind Blowing
I need to back up a bit, to explain.
In one or two past episodes of Malicious Life, we’ve talked about cryptographic hashing: a mathematical method for producing a short, tamper-evident fingerprint of a piece of data. In order to prevent anybody from manipulating messages sent between worm and controller, Conficker.A used the Secure Hash Algorithm 1 – SHA-1 – hash function. SHA-1 allowed both worm and controller to verify that whatever information they downloaded did, in fact, come from its intended sender, without tampering along the way. Like a digital fingerprint.
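Python’s standard hashlib can illustrate the fingerprint idea – using SHA-1 here simply because MD6 never made it into standard libraries. Note that a hash on its own only proves the data wasn’t altered; proving who sent it additionally requires a signature over that hash, which is where Conficker’s later versions went.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """A SHA-1 digest: a short, fixed-size fingerprint of arbitrary data."""
    return hashlib.sha1(data).hexdigest()

payload = b"command: update to version B"
expected = fingerprint(payload)   # published alongside the payload

# The receiving worm recomputes the digest and compares:
assert fingerprint(payload) == expected         # intact payload checks out
assert fingerprint(payload + b"!") != expected  # any tampering changes it
```

However long the input, the fingerprint is always 160 bits (40 hex characters), so comparing two digests is cheap.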
As time goes by and computers get faster, old hash algorithms need to be upgraded to newer, stronger ones. At the time Conficker was released, the worldwide standard for hash algorithms was SHA-2, successor to SHA-1. On October 27th, 2008, in a document running well over 200 pages, the cryptographer Ronald Rivest, along with a dozen or so co-authors, published a paper proposing a candidate to succeed SHA-2. It was named MD6. The process of anointing a new standard in cryptographic hashing is overseen by the National Institute of Standards and Technology – NIST, for short – and because of how extremely technical the process is, very few people are qualified enough to participate.
That’s why it was so utterly mind-blowing to security insiders when Conficker.B upgraded from SHA-1 to MD6, only a couple of months after Rivest’s paper was first published. Yonatan Striem-Amit:
“[Yonatan] More than that, in later versions, when they developed their ability to do peer-to-peer update – basically any Conficker node updating its peers with new instructions and new code – that has been done in a very cryptographically secure manner using public key certification and of course RSA, and MD6 is part of the overall signature. It shows a level of sophistication that is not trivial even to date.”
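The scheme Yonatan describes – hash the update, sign the hash with a private key, let every node verify it with the public key – can be sketched with raw RSA arithmetic. This is a deliberately tiny, insecure toy (textbook RSA over two known Mersenne primes, with SHA-1 standing in for MD6), not Conficker’s real construction:

```python
import hashlib

# Toy RSA keypair built from two known Mersenne primes - far too small
# and structured for real use; it only illustrates sign/verify mechanics.
p = 2**89 - 1
q = 2**107 - 1
n = p * q                           # public modulus
e = 65537                           # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (kept by the botmaster)

def digest(payload: bytes) -> int:
    # SHA-1 stands in for MD6 purely because it ships with Python.
    return int.from_bytes(hashlib.sha1(payload).digest(), "big") % n

def sign(payload: bytes) -> int:
    return pow(digest(payload), d, n)        # only the key holder can do this

def verify(payload: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest(payload)   # anyone can check it

update = b"peer-to-peer update, version 42"
sig = sign(update)
assert verify(update, sig)                 # authentic update is accepted
assert not verify(b"forged update", sig)   # tampered payload is rejected
</imports>```

Because only the signature’s creator holds the private exponent, infected nodes could safely accept updates from any peer: a defender who tried to inject a fake update couldn’t produce a signature that verifies.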
In fact, at some point the Cabal’s researchers managed to exploit an early vulnerability in MD6 to tap into the communication between Conficker and its command-and-control center – but again, the hacker was one step ahead of them. When MD6 was updated and fixed in February 2009, Conficker was updated and the vulnerability patched soon after.
This hacker clearly meant business. They were in-the-know. They could only be the most sophisticated kind of criminal, or state entity. And if they’d already mastered MD6, what else did they know?
A Ticking Time Bomb
From the end of 2008 through the first couple of months of 2009, the Cabal had been working tirelessly to identify and shut down all 250 domains Conficker called every day. It was exhausting: like one big game of Whack-a-Mole that never ended. Every random domain would have to be vetted, one by one. Was it registered? How long ago? By a legitimate owner, or not? Many a call was made to internet service providers and registries, often located in faraway developing countries.
So the job was tough, but at the same time, the good guys looked like they were finally gaining. Conficker had already burrowed into millions of machines, but it was being slowed. 250 domains per day proved frustrating, but not impossible to manage. Things were looking up.
But the Conficker hacker was watching. They must have seen the progress being made–how domain names were gradually being blocked off before they were to be activated. So the Conficker hacker decided enough was enough.
By the end of February 2009, all Conficker-infected computers worldwide were upgraded: to the newest, Conficker.C (sometimes referred to as Conficker B++). Conficker.C added a few new touch-ups, and one, big, ticking time bomb.
Instead of 250 domains per day, spread across eight top-level domains, the upgraded Conficker was capable of generating a pool of 50,000 web addresses, across 110 top-level domains–every top-level domain in the world, actually. This was Whack-a-Mole, with 50,000 holes. With this new upgrade in hand, the worm would be utterly impossible to stop.
And there was an even more sinister twist. Although Conficker.C came built with its upgraded domain-name generator function – it was only set to be activated on one particular date: April Fool’s Day.
It makes you think, doesn’t it? That Conficker would be much harder to stop wasn’t what worried the security researchers. It was already the biggest and most powerful botnet in the world. But whoever controlled it hadn’t yet used it towards any meaningful end. It was just waiting, ready for orders – and it seemed likely that those orders would come on April 1st.
Why play games like this? Who was behind Conficker, and what were they after? All this time they’d been matching the Cabal’s every move. Every time a breakthrough occurred, a new version of the worm squashed it. Why not just build the strongest worm possible from the beginning? Clearly they knew how. And consider the random domain generator: if the author of this worm were capable of generating 50,000, why not build the worm like that from the very beginning? I just told you that identifying and closing up 250 domains every day was a frustrating, annoying task for researchers, but not out of their grasp. Could it be that the Conficker hacker purposely designed an algorithm to be frustrating, but not out of grasp?
Every piece of evidence was more convoluted than the last. Conficker was either the greatest international cyber security threat in history, or one big joke.
The Malicious Payload
On March 4th, 2009, all infected computers upgraded to a new variant, D, which came with some fancy new features: it prevented users from running their machines in “safe mode”, and constantly scanned for and shut down any antivirus programs it detected. It also came fitted with new peer-to-peer communications software that allowed for easier spreading over networks, and better command & control.
And then, on April 4th, Conficker updated to its E variant. E was unique. It was, finally, going to cash in on the power of the botnet that had been created, with a proper malicious payload. That payload? A copy of the Waledac spambot, and SpyProtect scareware.
A conventional spam engine, and a program masquerading as an antivirus that tries to scare the user into paying for fake malware removal. To say that these two are not exactly the kind of programs that threaten world domination would be a massive understatement… I mean, when the world was panicking about the little computer worm in 10 million computers, they had reason: a hacker could do real damage with that kind of power. Instead, when the moment of truth came, it fizzled out. It’s an anti-climax. It’s like watching Iron Man bravely power through hundreds of enemies, dodging thousands of bullets and missiles, finally coming face to face with Thanos… and then slapping him in the face.
Why? Why go through the trouble of crafting one of the most sophisticated, effective worms in the history of computing, only to distribute low-level, low-return scamware? We asked Yonatan this question.
“[Yonatan] So a lot of time what we’re seeing is – the big question around malware authors is, “How do I monetize them or how do I get my agenda across?” So if you’re a nation state, your agenda could be political. If your agenda is espionage, you go there to steal data. But if your agenda is how do I siphon money, it’s a big question that a lot of malware authors and ourselves – how do I translate my access into monetary gain?
In the past couple of years, the answer to this was either cryptominers, and before that ransomware, as a way to monetize attacks. But even before these two, the way criminals monetized their access really boiled down to spam and selling you fakes – whether it’s fake prescription medicine or fake AVs, selling you a product to secure your computer.
This has been a very useful and effective way for criminals to monetize their systems. What we see here is an example of somebody who built a very, very specific infrastructure and then, when it was time to monetize it, did not have a good answer on how do I make money out of that, out of my asset here.”
In other words, it may be that the Conficker hacker hadn’t thought far enough ahead, or anticipated how truly massive their worm turned out to be.
There’s also another option: that by the time Conficker reached the point where the hacker could actually monetize it – it was already too famous, and drew way too much attention to itself. In other words, Conficker was a victim of its own success. As one security commentator put it –
“It was like thieves announcing they were going to rob a bank. Of course the police are going to respond to that sort of thing. It drew too much attention to itself, and that is what ultimately led to its failure, at least in terms of being used as a tool to commit further cybercrime.”
“[Yonatan] Remember that at that point, they were very much under the radar, under the clock. People were already fighting them off. So they had a big issue in essence to how to monetize the malware.”
This might be the reason why Conficker.E was also the last version of the worm to be released. It seems that whoever was behind Conficker had had enough of the cat-and-mouse game against the Conficker Cabal, and just… gave up. They abandoned their creation and stopped updating and upgrading it.
Which brings us to the next natural question: who was behind Conficker?
“[Yonatan] To this day, we don’t really authoritatively know what and who were behind Conficker. We do know that the final payloads it distributed were around both spam and fake AVs. So it’s the usual monetization processes around criminal groups.”
There was only ever one line of evidence to trace back to the Conficker hacker. Conficker.A, upon first entry into a machine, would check for a Ukrainian keyboard layout. If it found one, it would exit the machine.
The earliest origin of the worm was traced to Argentina. Interestingly enough, Argentina has a not-insignificant Ukrainian population. The information gathered by researchers couldn’t go so far as to single out an individual hacker or group of hackers, but it did seem to indicate that the worm came from a Ukrainian located in Argentina.
“[Yonatan] People on the internet seem to associate this with Ukrainian groups, but I’ve seen no conclusive proof. This can be fake. But generally the thinking is a Ukrainian criminal group looking to monetize long-term over spam and the fake AVs. Getting users to buy fake AVs.”
Conficker’s Not Dead
But our story – Conficker’s story – does not end here. There is one last fascinating fact, and it is that Conficker never actually went away. To this day, security vendors detect tens of thousands of new Conficker infections each year. This is, of course, nowhere near the millions of machines it infected at the height of its spread – but nevertheless, Conficker is still very much active. Luckily, because its owners have long since given up sending out commands, it’s pretty much harmless.
Why is Conficker still alive even after so much time has passed? Because old computers and old software are still around. Today’s Conficker is most often found in developing nations: in old computers, computers with pirated Windows systems, and industrial computers running on old tech. In fact, much more than the Cabal ever could, the real killer of Conficker has been simple patches, and the phasing out of Windows XP and the earlier operating systems it ran on. Until all these old computers are finally dead and gone, the ghost of Conficker will keep pestering us, like a sort of internet background radiation.
If there’s one good thing that Conficker did, it’s that it certainly had a positive effect on the cyber security community. As one member of the Conficker Cabal recalled –
“I don’t think that the bad guys could have expected the research community to come together as it did, because it was pretty unprecedented. That was a new thing that happened. I mean, if you would have told me everybody’s going to come together—by everybody, I mean all these guys in this computer-security world that know each other—and they’re going to do this thing, I would have said, ‘You’re crazy.’ I don’t think the bad guys could have expected that.”
In cooperating at a scale never before seen, and through sheer force of will, the Conficker Cabal banded together with companies and governments around the world to halt the spread of Conficker. A member of the Cabal described it to the Washington Post by saying, quote:
“We’re literally relying on people in Latvia to protect computer networks in Brazil, and the other way around, too, so each country has some capability and some responsibility once they understand the role they can play here. No matter what happens with Conficker, it’s created something here….a beautiful opportunity to bring cyber security to the kitchen table.”
“[Yonatan] I think it’s a great example of how the industry can come together as a group to solve a problem. That is really for the greater good of all the users out there. I was very impressed by the way and the rate in which the industry was able to come together in a range of resolutions on one hand. On the other hand, it shows how vulnerable we are. I mean we right now in a sense have three to five, depending on how you count them, major operating systems in the world. Vulnerability in one of them that can become wormable is a big risk.
Our infrastructure is more vulnerable than we think. The bad, sort of the negative side of learning here, and the positive side is really how we as the industry can come together and work to alleviate a big portion of that pain.”