Season 1.5- Stuxnet / Episode 9
Stuxnet was a devastating weapon, but who wielded it? That is the question we try to answer with the final installment of our Stuxnet series. In this episode, we explore other, similar battles of the modern cyber war, and look further into the topic of Zero Day vulnerabilities.
With special guests: Andrew Ginter, and Blake Sobczak.
Born in Israel in 1975, Ran studied Electrical Engineering at the Technion Institute of Technology, and worked as an electronics engineer and programmer for several High Tech companies in Israel.
In 2007, he created the popular Israeli podcast Making History, which has garnered over 10 million downloads as of Aug. 2017.
Author of 3 books (all in Hebrew): Perpetuum Mobile: About the history of Perpetual Motion Machines; The Little University of Science: A book about all of Science (well, the important bits, anyway) in bite-sized chunks; Battle of Minds: About the history of computer malware.
Andrew Ginter is vice president of industrial security at Waterfall Security Solutions. He has managed the development of commercial products for computer networking, industrial control systems and industrial cyber security for vendors such as Develcon, Agilent Technologies and Industrial Defender.
Blake covers security for EnergyWire in Washington, D.C., focusing on efforts to protect the oil, gas and electricity industries from cyberattacks and other threats. He writes about how hackers have applied their tools and techniques to break into -- and often break -- physical systems. His reporting has taken him to China and Israel for firsthand looks at how global powers approach cyberwarfare.
Hi, and welcome to Malicious Life- I’m Ran Levi.
The previous two parts of this series focused on the technological characteristics of Stuxnet, a computer virus that attacked the uranium enrichment facility in Iran, and was exposed in 2010, almost accidentally, by a small IT company from Belarus. We described the way the malicious software penetrated the facility’s network, located the control system’s computers and finally started messing with the gas centrifuges by increasing and decreasing their rotation speed—all while presenting pre-recorded false data to the technicians and programmers. This level of sophistication made Stuxnet a groundbreaking malware—the first “real” cyber weapon.
Now it is time to talk about the people who created it. In the computer security business, this question is usually considered to be secondary. In most cases even if we do catch the creators of malicious software and punish them, the software itself still continues to spread. It is like a man releasing a lion from its cage; we might be able to punish the man, but the priority is catching the lion before it gets downtown.
In Stuxnet’s case however, identifying the creators of the software isn’t “secondary” at all. On the contrary: if it is possible to prove that it was created by a government agency, then Stuxnet becomes much more than a computer virus—it becomes a cyber-weapon. Reliable information about the cyberwarfare capabilities of nations is scarce, since it’s usually considered top-secret military information. If Stuxnet is indeed a weapon developed by a country, then it might reveal something about this secret world.
A Communication Channel
When IT experts analyzed Stuxnet, they found the software included a communication channel with its operators. When Stuxnet infected a computer, it searched for Internet access. Once online, it sent its current status and other information to servers located at these addresses:
The physical machines hosting these websites were located in Denmark and Indonesia, respectively. But before we suspect the Danish or Indonesian governments, it is important to remember that anyone can launch a Web server from anywhere around the world, regardless of his or her physical location. For instance, the website for this podcast is hosted on servers located in the United States, but it could have just as easily been hosted in Europe. Stuxnet’s operators created these servers anonymously, and without any identifying information. So both Web servers led to a dead end when it came to identifying the people behind Stuxnet.
Next, experts started “rummaging” through the code of the malware, hoping to find hidden clues. For example, one of the files referenced in the code was named “guava.pdb” and was stored in a folder called “Myrtle.” In Hebrew, a myrtle is called “Hadas,” which is also the middle name of the biblical Queen Esther, wife of Ahasuerus, king of Persia—which is, of course, modern-day Iran. In other words, it could be interpreted as a possible connection between Stuxnet and Israel.
But is this clue, and other similar clues discovered in the code, a “smoking gun”? Definitely not. This free association is the tech equivalent of playing a Led Zeppelin album backward, hoping to find satanic messages. Personally, I find it hard to believe that any programmer would bother scattering such vague clues, except maybe as a joke. There is no smoking gun—not even a toy gun! And even if such clues were to be found in the code, we still can’t take them seriously. In fact, even if Stuxnet had played the song “Hava Nagila” while destroying the centrifuges, we still wouldn’t be able to reliably point a finger at Israel, since clues could easily have been planted in the code as a distraction, disguising the malware’s real creators.
Digging into the code, however, did prove to be helpful in other ways. For example, in order to infiltrate and take over a computer, Stuxnet took advantage of an unknown bug in the Windows operating system. This sort of exploitable vulnerability is usually called a ‘Zero-Day’ bug. Zero-Days aren’t commonly known since it takes a tremendous amount of knowledge and time to locate them in software—and their scarcity makes them very valuable. It’s like knowing the combination to the vault at a bank. It’s safe to assume that criminal organizations would be willing to pay quite a lot for information that could aid in a robbery. Similarly, whoever knows about the existence of a Zero-Day vulnerability in Windows can sell that knowledge to computerized crime organizations for hundreds of thousands of dollars. But secrets are only valuable as long as they remain secret. Once the vault is breached, the bank will immediately change the combination—and knowledge of the old one becomes worthless.
Similarly, knowledge about a vulnerability in software is valuable as long as no one has used it to break into a computer. Once a Zero-Day bug is used in a piece of malware, it’s only a matter of time before antivirus vendors expose it and the bug is fixed. When that happens, the Zero-Day becomes worthless. Which is why a normal virus never uses more than one Zero-Day: revealing more than one is a pure waste of money.
But Stuxnet used not one, not two, but four Zero-Day bugs simultaneously! Such an extravagant waste of resources is akin to dropping four heavy bombs on a building—when one would suffice to destroy the target.
As you might have guessed, I did not choose the analogy randomly. Such behavior only makes sense in the military world, where making sure a building is destroyed outweighs any financial considerations. In other words, someone wanted to make sure that Stuxnet would succeed in penetrating the uranium enrichment facility in Natanz at all costs—perhaps because he or she knew there would only be one opportunity to do so. After they discovered Stuxnet, the Iranians would learn their lesson and be more careful in the future.
Another clue in identifying Stuxnet’s mysterious creators lies in the fact that they had very detailed information regarding the control system of the Iranian facility. They could tell how many centrifuges were installed in the facility and their specifications; they knew which microchips controlled the centrifuges, and what anti-virus software was installed on each computer. Ralph Langner, the IT security expert who analyzed Stuxnet, said this of the intelligence:
“The detailed pin-point manipulations of these sub-controllers indicate a deep physical and functional knowledge of the target environment; whoever provided the required intelligence may as well know the favorite pizza toppings of the local head of engineering.”
Operation “Olympic Games”
Not surprisingly, the enormous resources and top intelligence that Stuxnet’s developers enjoyed indicate two immediate suspects: The United States and Israel. An investigative report in The New York Times claimed that Stuxnet was the result of a joint American-Israeli operation code-named “Olympic Games,” which began during the Bush administration and continued under Barack Obama. According to some reports, the malware was developed by an Israeli Intelligence unit called Unit 8200, and was tested on real centrifuges at the Dimona Nuclear Research Center in the Israeli desert and on centrifuges that the Americans obtained from former Libyan dictator, Muammar Gaddafi, when he gave up his nuclear ambitions.
I assume that some listeners will celebrate the success of Stuxnet, since Iran is not a friendly nation, to say the least—and what’s bad for the enemy is surely good for the U.S. and Israel, right?
Well, not so fast. We previously compared Stuxnet to a guided missile aimed at a specific target. While valid, the analogy is not a perfect one. Remember that the way the malware “moves” within the virtual space of the Internet is nothing like the direct path a missile takes to its target. Instead, Stuxnet spread much like a viral epidemic—skipping from one computer to another and infecting them. According to some assessments, Stuxnet infected about a hundred thousand computers. Most of these computers were not related to the Iranian nuclear program, and many were not even physically located in Iran. Some of those computers crashed—which is how Stuxnet was discovered in the first place. But even if Stuxnet didn’t affect a computer’s operation, it still caused damage. When dealing with sensitive control systems, no one can afford to let “foreign” and unfamiliar software—let alone a known malicious one—wander around a system unchecked. As Langner explains:
“The worm infected hundreds of thousands of other machines, and caused almost no damage. Now, it did cost a lot of money! Because if I have an infected machine staring me in the face, am I going to believe that that’s OK and leave the worm on there? No, I’m gonna clean it out. The process of shutting it down and cleaning it up costs enormously. If I have to shut down a refinery, because some of the equipment has been infected, I’m losing millions of dollars a day.”
In other words, the Iranians were not the only ones potentially harmed by Stuxnet. Even an American petrochemical facility—completely unrelated to the Iranian nuclear program—could have suffered great financial loss as a result of Stuxnet taking an afternoon stroll through its network. Interestingly, the potential for secondary damage never stopped Stuxnet’s anonymous developers from carrying on with their plans.
In early September of 2011, a year after Stuxnet was exposed, a new, unfamiliar piece of malicious software was discovered in Hungary. The researchers who discovered it named it Duqu, since the letters D and Q appear in some of its files. IT experts from the Budapest University of Technology and Economics analyzed Duqu and identified many similarities to Stuxnet. In fact, there were so many similarities that it later turned out Duqu had actually been discovered even prior to September 2011 by a Finnish antivirus company—but its automated identification system mistook the malware for a copy of Stuxnet. The two types of malicious software were like two sisters sharing identical DNA segments. They shared so many common pieces of code that experts believe the people who developed Duqu are the same people who developed Stuxnet, or at least share common ties.
The two types of malware, however, have different objectives in the cyber offensive. While Stuxnet was used to carry out attacks on industrial systems, Duqu was an espionage tool. It did not attack systems directly. Instead, it tried to steal information such as sensitive documents and contact lists. It could also record the activity taking place on a computer. It recorded keyboard keystrokes, took screenshots and even recorded conversations using the computer’s microphone. The captured information was sent through the Internet to a few dozen servers located in different countries around the world—from India and Vietnam to Germany and England.
As with Stuxnet, these servers were paid for anonymously. But researchers were still eager to get their hands on them, since any information they held might reveal new clues. As is often the case with modern websites, Duqu’s Web servers were hosted on hardware rented from dedicated hosting companies. Security researchers from the antivirus company Kaspersky Lab approached the hosting companies who owned Duqu’s servers and asked for their cooperation with the investigation. It was a race against time. The researchers knew that as soon as Duqu’s operators realized that the malware had been discovered, they would try to destroy as much incriminating evidence as they could. Unfortunately for Kaspersky Lab, the bureaucratic process for obtaining the approvals was agonizingly slow, and on Oct. 20, 2011, all the data on the servers was remotely destroyed before Kaspersky could get its hands on it. Duqu’s operators made sure to wipe out every last byte of information on the servers. In one case, a server in India was wiped clean mere hours before Kaspersky received permission to take it over.
Six months later, in May 2012, yet another new espionage malware was discovered and dubbed Flame. Unlike Duqu, Flame had very little resemblance to Stuxnet. At first, experts thought they were unrelated. But then a thorough analysis of Flame unearthed several small similarities between the two, just enough to convince the investigators that both viruses were likely made by the same people, or perhaps different teams working from the same organization.
As an espionage tool, Flame was much more advanced and sophisticated than Duqu. For example, it had the ability to activate Bluetooth communication on a computer or a phone, identify other smart devices in the area, and pull contact lists and other relevant information from them. Just to give you an idea of Flame’s complexity: Stuxnet had 150,000 lines of code, which made it a HUGE virus—10 times larger than a typical malware. Flame, however, was 20 times bigger than Stuxnet, with some people considering it to be the most complex malware ever known. Andrew Ginter, the industrial system security expert we met in the previous episodes, does not believe that we will ever get a full analysis of Flame.
“I have never seen a complete analysis of Flame. I’m not sure anyone had ever done that. It’s just a lot of code to reverse-engineer. It took the Symantec team of, I think, four engineers, four or five months to reverse-engineer Stuxnet. Imagine how many people you need to do something 10 times as big.”
All over the world, the number of cyberattacks against commercial, government and industrial targets is on the rise. Russia, China, North and South Korea, Germany and India are all known to have governmental cyber warfare units, and together with the discovery of Duqu and Flame it is rapidly becoming clear that cyberwarfare is going to be an inseparable part of future wars and conflicts. Blake Sobczak, the IT and Security reporter for EnergyWire we met in the previous two episodes, puts it this way.
“Critical infrastructure operators in the U.S. have fallen victim to much more simple attacks than Stuxnet, and have potentially been infected by malware that is lying latent. Several officials in the U.S. have said there’s a strong chance that there’s malware already existing on control networks that might be able to do some damage. The extent of that is not always clear. Much of it is classified, a lot of people have signed nondisclosure agreements, talking about it might be considered security sensitive for all the obvious reasons.”
The Best Defense
The discovery of Stuxnet and its sisters encouraged experts in the industrial sector to seek better protection against future threats. In the first few days following the exposure of Stuxnet, Ginter thought he could use it to categorize the effectiveness of different types of antivirus software. He obtained a copy of Stuxnet and tested it against a number of different types of antivirus software and firewalls, trying to learn which would be the best defense against the threat.
“I tried Stuxnet against a number of defense systems, and I drew conclusions, saying ‘This defense would work, that one would not. That means this defense is stronger than that one.’ In hindsight, that was nonsense.”
Why does Ginter call these early attempts “nonsense”? Because he realized that what he was actually looking at was a weapon designed for a very specific target.
“The Stuxnet worm was written to attack one specific site in the world, and it was designed to evade the security measures in place at its target. And so, there was nothing in the worm to evade whitelisting, but there was stuff in the worm to evade AV. If the site had used whitelisting, the worm would have included measures to evade whitelisting. There were, apparently, in hindsight, AV systems deployed at the site, and so the worm was designed to get around them. So the fact that some mechanisms worked against the worm and others didn’t, was more a reflection of what was deployed at the target site.”
In other words, Ginter claims that since Stuxnet was specifically created to attack the defense systems of the Iranian facility, we cannot assume that it accurately represents other threats. Stuxnet is like a bullet designed to penetrate a bulletproof vest made by a very specific manufacturer. The fact that it might actually penetrate the vest without any difficulty does not necessarily mean that the vest isn’t effective against other types of bullets—or that other kinds of bulletproof vests are somehow better.
If we continue down this train of thought, we arrive at a very alarming conclusion. If malware can be designed to successfully penetrate a specific defense system—as Stuxnet was, for example—what can we do to defend ourselves against these kinds of threats? You might be surprised to hear this, but Andrew Ginter says that basically, there’s nothing we can do. If someone really wants to penetrate your bulletproof vest, and is willing to invest all sorts of resources to do it—you’re in a world of trouble.
“If we have an adversary coming after us, and that adversary has detailed inside knowledge of how our systems are secured, in a sense—there’s nothing we can do. You are never completely secure. Which means no matter what we have deployed, we can always be hacked by an adversary that has enough time, enough money and enough Ph.D.s to throw at the problem. If you have an adversary that advanced, there’s nothing you can do. This was the nonsense. If we have that class of adversary coming after us, we don’t have a cyber security problem—we have an espionage problem, and we need to escalate [the problem] to our national security agencies.”
This is a very worrisome conclusion indeed. If true, then we might be doomed to live in constant fear of an enemy shutting down our electricity, water or gas supply. Luckily, Ginter has a more reassuring message as well.
“This is one of the big pieces of confusion that I try to clarify. In theory, there’s nothing you can do, from the point of view of any one facility. In practice, if we’re talking about protecting an entire industry—there is absolutely stuff you could do. No adversary in the world has enough money to buy every citizen of the United States, and compromise all of its industries. That’s just ludicrous. And so what we can do to protect an industry is put enough security in place so that in order to compromise an industrial site, it has to be escalated to the level of an espionage attack. And if you do that, nobody has the means to carry out an attack that sophisticated against an entire industry. What we want to do is elevate the problem from something as easy as hacking into a power plant from the comfort of my basement on the other side of the planet, to something where I have to compromise feet on the street, in my target, and have them do my attack for me. That’s a much harder problem. It is absolutely possible to elevate our security posture to the point where it takes a physical assault to breach the cyber security posture.”
In other words, while it’s true that one can develop very sophisticated malicious software and shut down a facility or a power station—actually doing it is extraordinarily expensive. Remember that in Stuxnet’s case, someone had to find out which control systems were installed in the uranium enrichment facility, which microchips controlled the engines, what the electronic defense systems against malicious software were, etc. If someone wanted to attack all the industrial facilities in Iran—he or she would need to invest the same amount of time and effort for each facility. Creating a single Stuxnet is doable—making a thousand such weapons is a totally different matter. It’s a notion shared by Blake Sobczak.
“It’s very tempting to say that anything is possible. When you look at something as incredibly written and almost frighteningly executed as Stuxnet, it’s tempting to say, ‘well, who’s to say that the lights aren’t going to shut off tomorrow in the U.S.?’ Some hacker in the Ukraine or whatever is behind it. Oh my God! There’s almost the temptation to rush to panic. But if you take the time to reach out and talk to sources both in government and the technical community, they’ll say, ‘well, yes, there are vulnerabilities, there are countries and criminal elements willing to target these vulnerabilities, but is it so easy as to just flip a switch or hit a button and take down a critical infrastructure? Certainly not.’ This is an important aspect of the Stuxnet case. This was not easy to pull off.”
Let’s run through all that we learned in the last three parts of this series. Stuxnet was a sophisticated malicious software, perhaps the most sophisticated malware of its time. But it’s safe to assume that it will be remembered not just for its technological sophistication, but because it was a new type of weapon, the first of its kind, in the emerging virtual battlefield of the Internet. Stuxnet served as a frightening demonstration of what can be achieved when a technological superpower is willing to invest major financial and intelligence resources in cyber warfare.
By the way, we should note here that this demonstration might have actually been a part of Stuxnet’s purpose. Ralph Langner doesn’t believe that Stuxnet’s exposure in 2010, after several years of uninhibited activity, was a coincidence. Or at least, its creators didn’t lose any sleep over it.
“Somebody among the attackers may also have recognized that blowing cover would come with benefits. Uncovering Stuxnet was the end to the operation, but not necessarily the end of its utility. It would show the world what cyber weapons can do in the hands of a superpower. Unlike military hardware, one cannot display USB sticks at a military parade. The attackers may also have become concerned that another nation, worst case an adversary, would be first in demonstrating proficiency in the digital domain—a scenario nothing short of another Sputnik moment in American history…
Only the future can tell how cyber weapons will impact international conflict, and maybe even crime and terrorism. That future is burdened by an irony. Stuxnet started as nuclear counter-proliferation and ended up opening the door to proliferation that is much more difficult to control—the proliferation of cyber weapon technology.”