Season 1.5: Stuxnet / Episode 7
Where armies once fought with bullets and bombs, they now engage in clandestine, invisible warfare. In 2010, a virus was discovered that would change the world's perception of cyber warfare forever. Dubbed Stuxnet, this malicious piece of code had a single focus: to stop the development of Iran's nuclear program. Part one of this three-part series sets us off exploring the first major battle of the cyber war: the Stuxnet worm.
With special guests Andrew Ginter and Blake Sobczak.
Hosted By
Ran Levi
Born in Israel in 1975, Ran studied Electrical Engineering at the Technion, Israel Institute of Technology, and worked as an electronics engineer and programmer for several high-tech companies in Israel.
In 2007, Ran created the popular Israeli podcast Making History, which has had over 10 million downloads as of Aug. 2017.
Author of 3 books (all in Hebrew): Perpetuum Mobile: About the history of Perpetual Motion Machines; The Little University of Science: A book about all of Science (well, the important bits, anyway) in bite-sized chunks; Battle of Minds: About the history of computer malware.
Special Guest
Andrew Ginter
Andrew Ginter is vice president of industrial security at Waterfall Security Solutions. He has managed the development of commercial products for computer networking, industrial control systems and industrial cyber security for vendors such as Develcon, Agilent Technologies and Industrial Defender.
Blake Sobczak
Blake covers security for EnergyWire in Washington, D.C., focusing on efforts to protect the oil, gas and electricity industries from cyberattacks and other threats. He writes about how hackers have applied their tools and techniques to break into, and often break, physical systems. His reporting has taken him to China and Israel for firsthand looks at how global powers approach cyberwarfare.
Episode transcript:
Hello and welcome to Malicious Life – I’m Ran Levi.
Several years ago I decided to write a book. By that point, I had already written two other books and knew from experience that it’s easy to get distracted by fascinating research that isn’t necessarily relevant to a book’s theme or thesis. This particular book was on the history of computer viruses, also known as malware. Because many new viruses appear around the world every year, choosing the viruses that would make the final list wasn’t an easy task. Yet from the very beginning it was clear to me that one of the chapters would be dedicated to a malware called Stuxnet, the subject of this episode.
Stuxnet was discovered in 2010 after an unprecedented attack on an Iranian uranium enrichment facility. At that time, it was the most complicated and sophisticated malware ever known. Yet it wasn’t its technological sophistication that made it so prominent in the relatively short history of computer viruses. The Stuxnet attack was a terrifying display: it illustrated how weak and exposed to cyber attacks the industrial infrastructures we all depend on are, including nuclear reactors and petrochemical factories. But it was more than a wake-up call; it was the bellowing horn of an approaching train. After discovering Stuxnet, we were forced to ask ourselves: how many more potentially devastating viruses are out there?
Advanced Persistent Threat
Not that there weren’t any sophisticated computer viruses before 2010. There were quite a few. But Stuxnet targeted a very specific niche of the computerized world, a field that most of us aren’t familiar with and aren’t exposed to on a daily basis: computerized industrial control systems. For that reason I have invited Andrew Ginter, an expert in industrial cyber security, to be our guest.
“My name is Andrew Ginter, I’m vice president of Industrial Security at Waterfall Security Solutions. I’ve got a long history of writing code, mostly for industrial control systems […] In a sense, I was part of the problem in the early days of connecting these systems together. And then I got religion, and now I’m trying to secure these connections that we haphazardly connected together in the last 20 years.”
Like other computer users, Andrew and his programming colleagues were already familiar with computer viruses and the various defenses against them, like anti-virus software. But in the mid-2000s, a new and more menacing threat emerged.
“In about ‘06 or ‘07 the American Department of Homeland Security started issuing warnings, saying there’s a new class of attack out there, this ‘Advanced Persistent Threat,’ and there are significant differences between it and the threats people were used to.”
The new threat was the Advanced Persistent Threat, known as APT. In order to understand what an APT is, and why it’s so special, we’ll have to take a step back and talk about malicious software in general. By the way, this episode will be divided into three parts, each of which will tackle a different aspect of this unique malware: its mechanisms, its creators, and its target.
Basically, malicious software is software that deliberately causes some sort of digital damage. For example, it might erase or steal important information. There are many types of malicious software, and a “virus” is just one of them. But for the sake of simplicity, we’ll use both terms—malware and computer virus—interchangeably.
The Internet is full of viruses stealing bank account information, installing unwanted software on computers and causing all sorts of trouble. But all those viruses share a common characteristic: they are not directed at you personally. In other words, rather than attack one specific user or computer, a virus will usually try to attack as many computers as possible at the same time.
An APT, on the other hand, is a targeted threat: malware created with the intent of penetrating a specific system or organization. If we compare malware to bombs, then a “regular virus” is similar to the carpet bombings often seen in old World War II movies, where thousands of bombs are released over a large area in the hope that at least a few will hit their targets. An APT, then, is like a guided missile, one so well tuned that it can hit a specific window of a specific building. The word “persistent” in the acronym APT refers to the human component in the operation of the malicious software: a human operator remotely monitors and guides the actions of the APT.
It seems that the United States government was thinking about APTs long before ’06-‘07. Blake Sobczak is a journalist covering IT and security matters.
“My name is Blake Sobczak. I’m a reporter with Energy Wire, based in Washington DC. I write about efforts to secure critical infrastructure in the U.S. and abroad from cyber attacks.
“There’s evidence to believe that the U.S. was thinking about the possibility of attacks on critical infrastructure as far back as 1997, likely before. That was the date of a very famous war game called ‘Operation Eligible Receiver,’ where the Department of Defense and the NSA ran this no-notice exercise. They surprised a bunch of military contractors with a huge cyber attack. The results of that particular exercise were so shocking that a lot of it, even though it was classified, leaked out into the public domain. And so at that point the industry was aware of the threat.”
But as Andrew Ginter says, at the time, around 2005 or 2006, the few APTs that had already been discovered were all focused on the business world, for the purposes of industrial espionage, stealing credit card information or hacking bank accounts. The industrial control systems world of factories and reactors, on the other hand, hadn’t yet faced the same threat.
“A lot of people followed this with some interest. Everybody likes to see things break, and here was a new way that people were describing how things can break. But nobody really believed they would be the target of an Advanced Threat. All these Advanced Threats were happening on IT networks, business networks. You know, stealing credit cards, account information. There had not been a worm or a virus or anything before specifically targeting control system components in the wild. At least none that was reported, none that I was aware of. The kinds of security incidents you saw before were things like insiders using their own passwords to access systems and cause malfunctions. The classic one that everyone was talking about was Maroochy, in Australia.”
The Maroochy Incident
Oh, yeah… the Maroochy incident. That was a rather smelly and unpleasant business, but we might as well look into it. It will help us better understand what industrial control is, and why it’s so important.
The Shire of Maroochy is one of Australia’s delightful treasures—a beautiful and serene rural area that attracts many nature-loving tourists. Maroochy has a local sewage system that handles more than 9 million gallons of sewage every day, using 142 sewage pumps scattered around the shire.
At the heart of the continuous operation of these pumps is, of course, a computer; to be more precise, a computer system called SCADA, which stands for Supervisory Control And Data Acquisition. The title is a mouthful, but the principle is rather simple: a computer gathers information from different sensors, like those that measure sewage levels, and turns the pumps on or off accordingly. Our domestic air conditioning system, for example, is a sort of SCADA. A tiny sensor located in the remote control reports the temperature inside the house to the AC’s main computer, which then tells the compressor to turn on or off.
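To make the principle concrete, here is a minimal sketch, in Python, of the kind of control loop a SCADA system runs. The sensor and pump functions are hypothetical stand-ins for real field equipment (here they just simulate it); nothing below is Maroochy’s actual software:

```python
import random
import time

HIGH_LEVEL = 80.0  # hypothetical threshold: start pumping above this (percent full)
LOW_LEVEL = 20.0   # hypothetical threshold: stop pumping below this

def read_sewage_level(station_id: int) -> float:
    # Stand-in for a real sensor read over a field network; simulated here.
    return random.uniform(0.0, 100.0)

def set_pump(station_id: int, running: bool) -> None:
    # Stand-in for a real actuator command.
    print(f"station {station_id}: pump {'ON' if running else 'OFF'}")

def control_loop(station_id: int, cycles: int = 10) -> None:
    """The essence of SCADA: measure, decide, actuate, repeat."""
    pump_running = False
    for _ in range(cycles):
        level = read_sewage_level(station_id)
        if level > HIGH_LEVEL and not pump_running:
            set_pump(station_id, True)   # the well is filling up: pump out
            pump_running = True
        elif level < LOW_LEVEL and pump_running:
            set_pump(station_id, False)  # the well has drained: rest
            pump_running = False
        time.sleep(0.5)  # poll the sensor periodically

control_loop(station_id=14)
```

The important point is how much trust a loop like this places in its inputs and commands: whoever can inject messages onto that network effectively controls the pumps.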
In 1999, a man named Vitek Boden was supervising the sewage pumps in Maroochy, working for the company that had installed the control system. Boden, then in his forties, had worked for the company for two years before resigning over a dispute with his bosses. After quitting his job, he approached the district council and offered his services as an inspector. The council declined.
Shortly after, the Maroochy sewage system started having mysterious and seemingly random problems: pumps stopped working; alarms failed to go off; and worst of all, about 200,000 (two hundred thousand) gallons of sewage flooded vast areas. Rivers turned black, nature reserves were destroyed, countless fish and wildlife died, and of course, the local population suffered through the terrible stench for weeks.
Maroochy’s water authority hired experts to examine the problems. At first, the experts suspected that disturbances from other control systems in the area were causing the problems, or that there was an error in the hardware. After all the immediate suspects were investigated, the experts were helpless; time after time they examined failing pumps, only to discover new and intact equipment that would simply stop operating, seemingly for no reason.
Some time later, an engineer working on the sewage system at around 11 o’clock at night changed one of the configurations in the control system. To his surprise, the change was reset and erased half an hour later. The engineer became suspicious. He decided to thoroughly investigate the data traffic between the different pumps, and discovered that sewage pump number 14 was the one that had sent the order to reset his original configuration change. He drove to pump 14, examined it and its computer, and found them in perfect working order.
He was now certain that a human hand was behind the chaos in the system, and he decided to set the hacker up. He changed the pump’s identification code from 14 to 3, meaning all legitimate orders coming from pump station 14 would now arrive under identification code 3. He waited until the next error occurred, and then analyzed the data traffic. As predicted, the malicious orders still indicated they were coming from pump 14. In other words, someone had hacked the communication network of the pumps and was pretending to be pump number 14.
Vitek Boden became the immediate suspect. Investigators assumed that he was penetrating the network remotely, via wireless communication. It was likely, then, that during an attack Boden would be within a few dozen miles of the pump stations.
The water authority promptly hired private investigators, who began tracking Boden’s movements. On April 23rd, at 7:30 in the morning, another series of errors occurred in the pump stations, but this time the trap set around Boden snapped shut. A private investigator spotted Boden’s car on a highway not far from one of the pumping stations, and called in the police. Boden was chased and arrested. A laptop with a pirated copy of the control system software and a two-way radio transmitter were found in his car.
At his trial, Boden claimed that all the evidence against him was circumstantial, since no one saw him actually hacking the control system. The Australian court wasn’t convinced: the circumstantial evidence was pretty strong, especially considering that the radio equipment found in his car was designed for operating the control computers of the sewage system. The judge theorized that Boden wanted revenge after having to leave his job, or that perhaps he thought he could win his position back once he was called in to fix the “errors.”
A Wakeup Call
Vitek Boden was sentenced to two years in prison, and the crime he committed became a point of interest for IT security experts around the world. They thoroughly analyzed each and every one of his steps, and what they found wasn’t very reassuring. Maroochy’s control system wasn’t designed with cyber security in mind. As is often the case in the programming world, finding engineering solutions to each immediate problem took priority over the less urgent need to secure data. One can guess that security wasn’t a top priority for the people who designed the sewage control system; after all, they had enough s*** to deal with as it was…
Worse yet, the Maroochy incident was only the tip of the iceberg. Industrial control systems, such as the computer system that controlled the sewage pumps in Maroochy, are the foundation on which almost all of our industries and infrastructures are built. Millions of control systems are used to manage a vast variety of industrial processes all over the world, from assembly lines to nuclear reactors to power generation. The ease with which Boden was able to penetrate Maroochy’s control system reflected, said the experts, how easily someone could disrupt the gas or electricity supply to entire cities.
Still, when the authorities in America warned in 2006 against the possibility of a sophisticated malware attack on industrial control systems, the warning was based solely on theory.
“And in 2010, when Stuxnet hit, it was in a sense the first very concrete example of an advanced attack, targeting industrial control systems. It was very big news.”
Stuxnet’s Discovery
The first report of Stuxnet came from VirusBlokAda, a relatively unknown anti-virus vendor from Belarus, in Eastern Europe, that supplied tech support and counseling to local and international customers. Sergey Ulasen, a programmer and IT security expert, was a team leader at VirusBlokAda. On a Saturday in 2010, Sergey got a call from one of his clients abroad, an Iranian company. Despite it being the weekend, and despite being at a friend’s wedding, Sergey answered the call, since the company’s rep was also a personal acquaintance of his. While everyone else was drinking and dancing, Sergey stood in a corner and discussed the problem with his client.
The Iranian representative told Sergey that several of the company’s computers were crashing, presenting the infamous “blue screen of death,” familiar to anyone who has ever run into a critical problem with the Windows operating system. At first, Sergey assumed that it was a technical problem having nothing to do with malware, maybe a collision of sorts between two installed programs. But when he learned that the same problem occurred on computers with a new and clean installation of Windows, he began to suspect a virus.
When he came into the office the following Monday, Sergey began looking into the matter. He remotely took over the Iranian computers and rummaged through the guts of the operating system. Finally, he located the software that was causing the computers to crash. The way the unknown software was hiding itself among other files on the computer reminded Sergey of typical malware behavior, but the suspicious software also had one odd and unique characteristic, one that is almost never found in malware: it had a valid “digital certificate.” What is a digital certificate? Here’s Andrew Ginter.
“If I give you a new driver and say ‘here, install that on your machine’ and you try, the machine will say ‘That’s funny. The driver is signed by this certificate that I’ve never seen before. The certificate claims to be from abc.com hardware manufacturer. Do you trust this certificate?’ If you say yes, there’s also a checkbox that says ‘In the future, don’t ask me this question, just go.’ If you click that, then the next time you see a piece of software signed by this vendor, you won’t be asked anymore; it will just install.”
In other words, a digital certificate is kind of like a long-term, multiple-entry visitor’s visa. The first time you enter a country you might need to obtain a valid visa, but once it’s in your passport, you are free to come and go. Similarly, software with a signed digital certificate can install itself on a computer without warning.
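To illustrate the mechanism Andrew describes, here is a minimal sketch of signature checking using Python’s cryptography library. This is not Windows’ actual driver-verification code; the function name and the signing scheme (RSA with PKCS#1 v1.5 padding and SHA-256) are assumptions made for the example:

```python
# pip install cryptography
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def is_driver_signature_valid(driver_bytes: bytes,
                              signature: bytes,
                              publisher_cert_pem: bytes) -> bool:
    """Return True if `signature` over `driver_bytes` was produced by the
    private key matching the publisher's certificate."""
    cert = x509.load_pem_x509_certificate(publisher_cert_pem)
    try:
        # Assumed scheme: RSA key, PKCS#1 v1.5 padding, SHA-256 digest.
        cert.public_key().verify(signature, driver_bytes,
                                 padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```

Note what the check actually proves: only that the driver was signed with the publisher’s private key. It says nothing about the publisher’s intentions, and nothing at all if that private key is stolen. Keep that in mind.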
Sergey knew that only distinguished, well-known companies could sign their software with a certificate like that, and the software he was looking at didn’t seem like legitimate software made by a legitimate company. The fact that the malware was trying to hide itself in the computer was also suspicious, like seeing someone at a mall wearing a big coat on a hot summer day.
A Crucial Discovery
But the question remained—if it was malware, how could it have a signed digital certificate? Sergey and his colleagues at VirusBlokAda argued among themselves for hours. But then Sergey made a crucial discovery that ended the discussion.
Sergey discovered that in order to install itself on a computer, the suspected malware was using a bug, a software error, in the operating system. The error allowed the malware, located on an infected USB drive, to install itself on a computer without the user’s knowledge or approval. The fact that the suspected software was exploiting a bug in the operating system was the smoking gun Sergey was looking for: no legitimate software vendor would ever exploit a bug in such a way. Sergey was spot on. It was later revealed that the digital certificates Stuxnet was using had in fact been stolen from the safes of two Taiwanese companies, Realtek and JMicron. The thieves were never identified.
Sergey realized that he was in over his head. VirusBlokAda was a small company focused on customer service and IT counseling; it didn’t have the funds and resources to thoroughly analyze the new malware. He contacted Microsoft to warn them about the newly discovered bug, and on July 10th, 2010, VirusBlokAda posted a press release on its website describing the virus as ‘very dangerous’ and warning that it had the potential to spread uncontrollably.
We should note, though, that despite the threatening tone, these sorts of alerts were and still are somewhat routine in the anti-virus industry, since new viruses are discovered daily. What made this case special, however, was the discovery of a previously unknown operating system error: the bug that allowed the malware to install itself secretly every time someone attached an infected USB drive to a computer. Since the bug was unknown (a so-called ‘zero-day’ vulnerability), there was no protection against it; Microsoft hadn’t yet released a patch fixing it. In other words, any malicious software designed to exploit this vulnerability could penetrate any computer running Windows, even one with updated anti-virus software installed.
Since such a new and unknown bug had the potential to infect billions of computers around the world, it became big news. Several well-known IT security bloggers shared VirusBlokAda’s report, and Microsoft hurriedly announced that its engineers were working around the clock to patch the bug.
Meanwhile, several anti-virus vendors began to analyze the actual malware that Sergey had discovered. As they dug deeper and deeper into its code, the analysts realized that this wasn’t your ordinary, run-of-the-mill computer virus. Instead, it was one of the most sophisticated viruses known to date, and it was targeting industrial control systems. In fact, this was the dreaded APT malware the U.S. government had been warning about since 2005. The new malware was named Stuxnet, a combination of two words found in its code.
“People reacted almost immediately. There were entire sites that glued all of their USB ports shut on their control system network.”
“First and foremost, Stuxnet was a jarring proof of concept.”
“Beforehand, it was ‘no one would ever come after us with a targeted attack. That’s for IT.’ Afterwards, it was ‘Oh no! Now what?’”
“This was something that many cybersecurity researchers had been warning about for years, the possibility that a digital weapon could attack physical infrastructure. And Stuxnet was really the first big public evidence of that.”
“Certainly in the day, there was some panic.”
“And I think what made it so impactful and so shocking for so many people, was the fact that it was just so complex. In most cases, in the cybersecurity realm, you almost expect the initial stages of a new type of attack to come very slowly and gradually. Maybe start with something, say, a DDoS on critical infrastructure. Just trying to shut things down, just trying to break things in a crude and basic way, not even by directly affecting the physical mechanism of the control system, but just by going after the computer, the software code that’s helping to run those control systems, and just trying to take them down and seeing what happens. The fact that Stuxnet was so intricate, highly targeted and carefully assembled—I think that was what really made it a wake-up call.”
A Guided Missile
“The Stuxnet worm was spreading on control system networks when it was discovered. There was no anti-virus for it, there were no security updates to close the hole it was using. There was no way to stop it. In the early days, nobody knew what it was, what its objective was, what its consequences would be. All they knew was that here is something that is spreading on control system networks, and there was no way to stop it.
One of the ways it spread was on USB sticks, so people took what measures they could. You know, it’s very bad news when something hits the industry, and you’re told ‘take precautions, you’re in trouble.’ What precautions should I take? I don’t know (laughing) that’s a very distressing thing to hear.”
Teams at several anti-virus vendors analyzed Stuxnet and published detailed reports. It was a tremendous effort. Stuxnet was a gigantic malware in terms of the sheer size of its code: 150,000 (a hundred and fifty thousand) lines, roughly 10 times more than the average computer virus. Stuxnet’s size is a testimony to its complexity, and was the reason it took several months to reverse engineer. The majority of Stuxnet’s code was dedicated to its spreading and stealth mechanisms. But how do these mechanisms work?
Let’s say it’s Monday morning, and an employee returns to his office at some industrial company. He could be the company’s accountant or the inventory manager of the cafeteria. He sits at his desk and takes out a USB thumb drive. The drive might carry documents he was working on from home, and maybe some family photos. He plugs the USB drive into the computer and copies the files he needs to his desktop.
But what he doesn’t know is that the USB drive contains one more thing—a copy of Stuxnet. How did that concealed copy find its way to the USB drive? We can’t tell. Perhaps the employee’s home computer was already infected, or maybe some “secret hand” made sure the malicious software was installed on the drive. What we do know is that while the employee copies the documents from the USB drive, Stuxnet installs itself on his desktop in complete silence, without any warnings or alerts.
You may have also noticed that Andrew Ginter described Stuxnet as a “worm.” This term refers to a type of malware that is able to spread through a network of computers without any human intervention, much like a biological virus spreads through the body using its blood vessels. From the moment it installs itself on the first computer, Stuxnet is independent, able to spread to other computers in the company’s network, the network that allows the accountant or the inventory manager to send emails and share documents with their colleagues.
What is Stuxnet looking for? Why is it skipping from one computer to another? Well, it has a clear target. Remember that we are talking about an industrial facility of some sort, perhaps a snack factory or maybe a power station. Such a facility usually has two distinct computer networks: a ‘business network’ and an ‘industrial control network.’ The computers of the accountant and the inventory manager both belong to the business network, where bank account details or secret patents can be found. But Stuxnet is not interested in this sort of information; instead, it is looking for the computers belonging to the industrial control system, the system that controls the heat of ovens or the spinning of turbines. These computers will almost always be segregated and protected from the other computers on the organizational network by several IT security measures, like anti-virus software and firewalls. Yet Stuxnet is sophisticated enough to breach these security measures as if they don’t exist at all.
How does Stuxnet know that it has infected a computer belonging to the industrial control network? The answer is that it tries to locate software called Step 7. Step 7 was created by the German company Siemens, and it is used solely in industrial control systems. Why this software in particular? All we can say right now is that Step 7 was not chosen randomly. Stuxnet’s creators even went as far as obtaining a secret password, which allowed them to hack the software and use it to infect other computers. How did they get their hands on the secret password? Again, we have no clue. No one except the programmers who developed Step 7 was supposed to know that such a password even existed.
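To give a feel for what “locating Step 7” might involve, here is a hypothetical sketch of how a Windows program can check whether a given product is installed by probing the registry. The key path is an illustrative guess; Stuxnet’s actual fingerprinting logic was far more elaborate:

```python
import winreg  # Windows-only standard library module

def is_step7_installed() -> bool:
    """Probe the Windows registry for a Siemens Step 7 installation.
    The key path below is a plausible example, not a verified one."""
    try:
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                             r"SOFTWARE\SIEMENS\STEP7")
        winreg.CloseKey(key)
        return True
    except FileNotFoundError:
        return False
```

A check like this is cheap and silent, which is exactly what a worm wants: on the many ordinary computers it infected, Stuxnet simply found nothing of interest and lay low.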
After hacking Step 7, Stuxnet begins looking for control equipment attached to the PC, called a PLC, which stands for Programmable Logic Controller. The PLC is a type of simple computer that “translates” and transports orders back and forth between a PC and the industrial machine it controls, like a generator, a conveyor belt or an oven. The PLC’s role is to manage the flow of information between the computer and the controlled machine. If Stuxnet finds such PLC equipment, and if several other vital conditions are fulfilled (which we will talk about soon), then the malware knows it has found its target.
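To make the PLC’s role concrete, here is a toy sketch of the “scan cycle” that PLCs run continuously: read inputs, evaluate the control program, write outputs. Real PLCs are programmed in languages like ladder logic or Structured Text; the Python below, with its made-up sensor names, is only an illustration:

```python
# Toy sensor state, standing in for the PLC's input bus.
sensor_state = {"guard_closed": True, "start_pressed": True}

def read_inputs() -> dict:
    return dict(sensor_state)

def control_logic(inputs: dict) -> dict:
    # Toy rule: run the motor only while the guard is closed
    # and the start button is pressed.
    return {"motor_on": inputs["guard_closed"] and inputs["start_pressed"]}

def write_outputs(outputs: dict) -> None:
    print("outputs:", outputs)  # stand-in for driving real machinery

def scan_cycle() -> None:
    """One pass of a PLC-style scan cycle: read, evaluate, write."""
    inputs = read_inputs()           # 1. sample the sensors
    outputs = control_logic(inputs)  # 2. run the control program
    write_outputs(outputs)           # 3. drive the machinery

scan_cycle()
```

If an attacker can quietly replace or wrap the control logic, the machinery will obey the attacker while everything upstream looks normal. Hold that thought for the next part.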
And then what happens? Well, imagine Stuxnet as a guided missile with two parts: the first is a thruster that leads the missile to its target, and the second is the payload that damages the target. The entire mechanism we have described so far (the infection via a USB drive, the spreading through an organizational network, and the hacking of Step 7) was the thruster. Now that Stuxnet has found its target, it is time to activate the payload, the part of Stuxnet that makes it so unique, unprecedented and frightening; it is this part of the virus that will be the focus of the next part of this episode. In the next part, we will also learn about the damage that Stuxnet caused to the Iranian uranium enrichment facility. All that and more, next week, on Malicious.Life.