Season 2 - The Invisible War / Episode 13
Malicious Life, episode 13: Weapons of Mass Disruption
The threat of fire and fury stands at the center of all modern conflicts: nuclear bombs that can eradicate life in seconds are the ultimate weapons of war, posing a huge threat to population centers. But what of cyber war? What threat could it possibly pose to life as we know it?
An episode about vulnerabilities in the power grid, with guests: Congressman Jim Langevin, Yonatan Striem-Amit, Graham Cluley, Paul Brager.
Born in Israel in 1975, Ran studied Electrical Engineering at the Technion – Israel Institute of Technology, and worked as an electronics engineer and programmer for several high-tech companies in Israel.
In 2007, Ran created the popular Israeli podcast Making History, with over 10 million downloads as of Aug. 2017.
Author of 3 books (all in Hebrew): Perpetuum Mobile: About the history of Perpetual Motion Machines; The Little University of Science: A book about all of Science (well, the important bits, anyway) in bite-sized chunks; Battle of Minds: About the history of computer malware.
Hello and welcome to Malicious Life. I’m Ran Levi.
“The widespread failures provoked the evacuation of office buildings, stranded thousands of commuters [. . .] Thousands of subway passengers in New York City had to be evacuated from tunnels, and commuter trains also came to a halt. [. . .] the police were evacuating people trapped in elevators. [. . .] Telephone service was disrupted [. . .] Cash-dispensing teller machines were also knocked out, so people who did not have cash on hand could not buy flashlights, batteries or other supplies. [. . .] For people with medical problems, the blackout added another layer of anxiety. Emergency rooms were flooded with patients with heat and heart ailments. At Harlem Hospital, a spokeswoman said that a number of pedestrians had been hit by cars because traffic lights were out. [. . .] So there was no air conditioning, no television, no computers. There was Times Square without its neon glow and Broadway marquees without their incandescence — all the shows were canceled. So was the Mets game against the San Francisco Giants at Shea Stadium. And there was a skyline that had never looked quite the way it did last night: the long, long taut strings of the bridges were dark, the red eyes that usually blink at the very top not red, not blinking.”
This, of course, was a passage from Cormac McCarthy’s Pulitzer Prize-winning novel “The Road”–a post-apocalyptic story of a father and son traveling across the United States following an extinction event.
Okay, I lied. That wasn’t fiction at all. Those were excerpts from a New York Times article on the 2003 Northeast power outage. But you believed me for a second there, didn’t you?
The 2003 Northeast Blackout
In 2003, an entire power grid from the Midwestern United States, up through the Ontario province of Canada, and over to the American East Coast went down, cutting light and water for upwards of 50 million people. The incident was a lesson in how a connected world is liable to face collective problems, and how more reliance on large-scale technologies means more vulnerability to large-scale problems.
The 2003 Northeast blackout was initiated by a tiny software bug in the computers of an energy company in suburban Ohio called FirstEnergy. This bug froze FirstEnergy’s alarm control systems for over an hour, but FirstEnergy’s engineers failed to notice it because, well, they had no alarms to warn them. As the machines stalled, unprocessed events in the system queued up, like a long line of people waiting to voice their complaints, ultimately shutting down their servers within half an hour. Then, in turn, the backup servers went down. With no visible signs of any problems, FirstEnergy dismissed a call from American Electric Power regarding a downed 345 kV power line. That line turned out to have overheated from an excess of demand, causing it to sag into a group of untrimmed trees and shutting it down entirely. Its electricity then had to be distributed to other lines, which shut down in turn.
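The failure mode described above – a stalled alarm process letting unprocessed events pile up until the servers collapse – can be sketched in a few lines of Python. This is a hypothetical toy model, not FirstEnergy’s actual GE XA/21 software (which reportedly stalled due to a race condition), but it shows how an undrained backlog eventually kills a server:

```python
from collections import deque

MAX_QUEUE = 1000  # stand-in for the real server's finite memory

def run_alarm_server(events, consumer_frozen=False):
    """Toy model of an alarm server: incoming events are queued, and a
    consumer normally drains them.  If the consumer is frozen (as in the
    FirstEnergy incident), the backlog grows until the server dies."""
    queue = deque()
    for event in events:
        queue.append(event)
        if not consumer_frozen:
            queue.popleft()           # normal operation: process immediately
        if len(queue) > MAX_QUEUE:    # backlog exceeded available memory
            return "server crashed", len(queue)
    return "ok", len(queue)

# A healthy server keeps up; a frozen one crashes once the backlog builds.
print(run_alarm_server(range(5000)))                        # ('ok', 0)
print(run_alarm_server(range(5000), consumer_frozen=True))  # ('server crashed', 1001)
```

The same logic then repeated on the backup server, which inherited the identical backlog and the identical bug.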
As the problem webbed out from suburban Ohio, connected yet independently-run systems around the Northeast experienced the fallout of these original few faulty lines, and coupled with the lack of coordination between managing companies, whole plants began to shut down in order to save their own equipment from damage. In all, the innocuous problem that began in little Eastlake, Ohio would almost entirely shut down an area equivalent to a medium-sized country.
The obscurity of this whole conundrum initially led to speculation about whether it could have been the work of hackers with bad intentions. In New York City, with tensions still lingering from 9/11, the ambiguity of the problem led the NYPD to act in accordance with its terrorism prevention measures. Many others thought it could have been the work of Chinese hackers–a theory that got quickly debunked. The reality was that no one was aptly prepared for an event like this, so blame was being thrown about in the face of what was really just confusion. Some in government even blamed Canada for having caused the issue, presumably because…well… It’s Canada! What are they gonna do about it?
But the speculation about what happened is itself important for one main reason: all those things we imagined as possible causes really could have caused it. “Chinese hackers” was an easy scapegoat, but the theory couldn’t be ruled out until investigators traced the actual cause. The next major blackout in the U.S., or anywhere else, could very well be caused by hackers.
A Very Tempting Target
So, let’s start with a simple question. How likely is it that our power grid has already been breached, and that the enemies of the United States are running spyware and other malware in our networks? We interviewed a number of experts in the field and asked them all this same question. Every one of them, without exception, gave the same answer.
[Yonatan] I would assume it’s very likely.
[Paul] It’s highly plausible and as a matter of fact, it’s probably an absolute.
They’re probably right. The power grid is a very tempting target for cyber attacks: on the one hand, it is essential for the proper conduct of the state; on the other, it’s very complex and difficult to defend. The US power grid consists of more than 7,000 power plants, 55,000 substations and millions of miles of transmission lines at different voltage levels. More than 3,000 companies and organizations maintain the power supply in real time, 24 hours a day, 365 days a year. This technological and organizational complexity, combined with the extensive geographic dispersion of the electricity transmission system, creates a very wide attack surface, making the protection of the system awfully difficult.
But the main question I want to ask in this episode is: to what extent is the electric grid really vulnerable? After all, the people who build and maintain the electricity grid are not oblivious–they know that a cyber attack is not the only threat to the network, and perhaps not even the worst. Natural disasters and malfunctions are a constant threat to the proper supply of electricity, so the grid was designed and built with these challenges in mind. For example, when hurricane Harvey hit the Gulf Coast of the United States at the end of August 2017, it shut down several power plants and toppled thousands of electricity poles, leaving millions of people across Texas and Louisiana without electricity. But by September 9th, less than twenty days later, electricity was restored to 96 percent of consumers. So maybe our electricity grid is more resilient than we give it credit for? Does the electricity grid have any characteristics that make life difficult for anyone trying to harm it?
Until recently, this question was entirely theoretical. Although our electrical system has relied on electronic command and control systems for half a century now, no one was trying to take it down back in the 1960s. To the extent that there were hackers who broke into electricity utilities’ computer networks, none were nation-state actors trying to damage the electricity grid and disable the power supply to consumers.
But on December 23, 2015, we had a rare glimpse into the future: the first documented case of a cyber attack on an electrical power facility that led to a loss of electricity.
The Electricity Grid
The Ukrainian power grid, like electricity grids in many other countries, is built of layers. At the top of the pyramid there are a small number of power plants, which contain generators that produce electricity. At the bottom end of the pyramid are the consumers. Between them is a complex network of transmission and distribution substations, whose function is to transport electricity from the power stations to consumers, and to ensure that the load is uniformly distributed throughout the system. Ukraine is divided into 24 regions, and each of them has a company that is responsible for the distribution system in its region. Imagine a tree: the power plants are the trunk, the consumers are the leaves, and the distribution companies are the branches.
Yonatan Striem Amit is the CTO of Cybereason, and he is here to explain to us the fundamental architecture of a computer network in an industrial plant such as an Electric Distribution Substation, and the information security challenges that come with it.
[Yonatan Striem Amit] Most power grids are built as a classical manufacturing operation, which means they have an IT network which is where they operate, which is where the management and people work day to day, and they have an OT network which is the industrial control system – industrial control mechanism that controls all the power grid information.
So the biggest impact, the biggest challenge is the air gap challenge, being able to hop over from the IT network, which is often connected to the internet, to the OT network often disconnected completely.
Now, put yourself in the shoes of an IT manager in a power grid. The fact that he has two separate networks impacts his ability to operate on a daily basis. Everyone, for example –
[Ran]: It makes it difficult to operate.
[Yonatan]: Makes it difficult.
[Yonatan]: There’s a constant probe to say, let’s connect, let’s make everything more open and make everything more easy to use. Of course, once he does that, if he’s not designing for security, then they have – the work for the hacker becomes so much easier.
For example, if – you know, as an example, somebody passes USB sticks every day, every morning between computers. When someone puts this USB stick in this IT-based computer, copies some files and then plugs the same thing to his OT network, naturally the gap becomes much, much, much narrower.
IT managers in the Ukrainian distribution companies faced the same dilemma described by Yonatan. They wanted to keep the business computer network and the industrial network disconnected – but their engineers and technicians wanted easy access to devices on the industrial network so they could operate them and fix failures remotely. In the end, convenience prevailed: workers were able to connect to the industrial network from the business network, and even from their homes. The air gap between the sensitive industrial network and the business network was effectively nullified.
In April 2015, an unknown number of employees in three such distribution companies fell victim to a spear phishing attack. E-mails containing Word and Excel documents were sent to a large number of employees, and those who fell for them and opened the documents had malware called BlackEnergy installed on their computers.
Although it has the word “energy” in its name, BlackEnergy is not an attack tool specifically designed for energy infrastructures: it began its life in 2007 as a malware used in DDoS attacks. Its developer, a hacker named Cr4sh, sold the source code to someone else for $700, and the malware moved through the black market for several years. In 2014, it fell into the hands of a group of hackers who turned it into an industrial espionage tool and used it to attack a number of Ukrainian companies, mostly in the field of rail transportation and communications. These or perhaps other attackers also tried to use it against a number of power utilities in the United States, but without success. Ukraine was another story.
With BlackEnergy, the attackers spent many months collecting detailed intelligence about the Ukrainian electricity system. They penetrated the computers of engineers, technicians, and managers, and from the documents they found they learned everything they could about the electricity grid, down to the most intimate details, in order to identify its weak points. In particular, they found that employees of the companies did not use Two Factor Authentication when they connected remotely to the industrial control network computers. This meant that all the attackers needed to penetrate the industrial network were credentials – the username and password – of several employees with access privileges to the system. BlackEnergy’s keylogging functionality made it easy for them to get these credentials.
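The gap the attackers exploited – a keylogged password being all you need for remote access – is exactly what Two Factor Authentication closes. Below is a minimal sketch (hypothetical code, not the utilities’ actual VPN) of a time-based one-time password check in the style of RFC 6238: even with the stolen password, an attacker who never saw the token’s secret can’t produce the current six-digit code.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now: float = None) -> str:
    """Generate a time-based one-time password (RFC 6238-style)."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def login(password: str, otp: str, stored_password: str, secret: bytes,
          now: float = None) -> bool:
    """A keylogger captures `password`, but the OTP changes every 30
    seconds and is derived from a secret the keylogger never sees."""
    return password == stored_password and otp == totp(secret, now=now)

secret = b"utility-vpn-seed"   # hypothetical seed provisioned on a hardware token

# With the current OTP, login succeeds; the stolen password alone fails.
t = 1_700_000_000
print(login("hunter2", totp(secret, now=t), "hunter2", secret, now=t))  # True
print(login("hunter2", "", "hunter2", secret, now=t))                   # False
```

Without a second factor like this, the Ukrainian VPN accounts were only as strong as whatever BlackEnergy’s keylogger could record.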
The intelligence gathering phase lasted about six months and on December 23, at 3:35 pm the attack itself began.
In the control centers of the three distribution companies, technicians were surprised to discover that their computers were no longer under their control.
- So what is he doing right now? What is he waiting for?
- He’s trying to reach the section breakers.
- What, is he trying to switch them off? Ah, this is an attempt to switch off a 110 kV section breaker.
Mouse cursors on their computer interfaces started to move as if on their own, activating menus and pressing buttons in the control software, opening breakers in the distribution substations, one by one. A number of astonished employees took videos of what was happening using their mobile phones.
- Well, what – he’s trying the same thing again with the 110 kV circuit. We need to call the IT guys.
- What if it’s the IT guys doing this?
A few minutes later, about a quarter of a million citizens had neither light nor heating. In one case, the electricity even disappeared in the control center itself, leaving the stunned technicians in a dark, quiet room.
Next came the ‘burning the bridges’ phase. Breakers in distribution substations are controlled by devices called serial converters: devices that translate the commands received from the control center over Ethernet into serial signals that the industrial control devices can understand. These serial converters run firmware. The attackers deleted the original firmware and replaced it with their own, which prevented the technicians in the control center from accessing those breakers and closing them again. If we think of a serial converter as a sort of bridge between the Ethernet network and the control devices, you now know why disabling them is equivalent to burning bridges…
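To see why the serial converters were such a chokepoint, here is a deliberately simplified sketch of the translation job such a gateway performs. The protocol below is invented for illustration (real substations speak Modbus, DNP3, or IEC 60870-5 variants over the serial side): a control-center command is re-encoded as a framed serial message with a checksum. Wipe the firmware doing this translation and the control center can no longer reach the breakers at all.

```python
import struct

# Hypothetical two-command protocol; real gear uses Modbus/DNP3/IEC 60870-5.
COMMANDS = {"OPEN_BREAKER": 0x01, "CLOSE_BREAKER": 0x02}

def ethernet_to_serial(breaker_id: int, command: str) -> bytes:
    """The converter's transmit path: encode a control-center command as
    a serial frame – start byte, 16-bit breaker id, opcode, XOR checksum."""
    payload = struct.pack(">BHB", 0x7E, breaker_id, COMMANDS[command])
    checksum = 0
    for b in payload:
        checksum ^= b
    return payload + bytes([checksum])

def serial_to_fields(frame: bytes):
    """The receive path: validate the frame and decode its fields."""
    start, breaker_id, opcode = struct.unpack(">BHB", frame[:4])
    checksum = 0
    for b in frame[:4]:
        checksum ^= b
    if start != 0x7E or checksum != frame[4]:
        raise ValueError("corrupt frame")
    return breaker_id, opcode

frame = ethernet_to_serial(42, "OPEN_BREAKER")
print(frame.hex())              # 7e002a0155
print(serial_to_fields(frame))  # (42, 1)
```

The attackers’ replacement firmware simply refused to perform this translation, so legitimate “close breaker” commands from the restored control center died at the bridge.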
At the same time, the attackers deployed a piece of malware called KillDisk, which erased important files on the hard disks of computers in the control centers and destroyed their operating systems. Much like the sabotage of the serial converters, this action made it harder for the technicians to regain control of the system, while at the same time obscuring any traces of the attackers.
Finally, the hackers launched a DDoS attack against the distribution companies’ telephone service centers: thousands of false calls caused phones to ring nonstop, denying service to the hundreds of anxious and angry customers wanting to report the outage and receive information.
I suppose some of you are probably asking yourself: how did the attackers manage to take control of the industrial control network so easily, and replace the firmware code of critical components? We posed this question to Paul Brager, Technical Product Security Leader at Baker Hughes, a GE Company.
[Paul] We spend a lot of time working with – being concerned about physical attacks. You know, bombs, a terrorist walking into a facility and blowing them up. Someone driving a car into something and blowing them up.
Because of that, many of the industrial components and industrial tools that are out there are not – they were built with that in mind. They were built more with resilience in mind as opposed to actually cyber. Now that there has become more a need for data to be transferred out of those environments and be used as part of business decisions, obviously you’re starting to enable an infrastructure that wasn’t traditionally designed to be that way.
You see that in oil and gas. You see that in chemicals, in manufacturing. You see that in water and wastewater management – infrastructures, again, that are vital not only to America’s national interests or national security interests, but certainly vital to the very survival of American citizens.
So as I had mentioned before, because the systems were not designed originally to be internet-facing, they often don’t have a lot of the common safeguards that you would expect in general IT systems. The software that runs on many of these components may have buffer overflows. They may have inherent vulnerabilities that the development teams didn’t have to worry about because there was no expectation with these systems, whatever, to be exploited or ever be interacted with other than what’s standing physically in front of it.
Graham Cluley, a British security researcher who has been on our show several times, explains that part of the problem is the fundamental mindset of energy companies that prioritize availability over security.
[Cluley] So it works like this. With a regular company, the most important thing of all is confidentiality. They want to keep the details, the data, the payment card information, the passwords. They want to keep that all absolutely top secret. So that’s at the top of the triangle.
Then they’re thinking about, well, we need to maintain the integrity of the information. We need to make sure that somebody isn’t coming in and altering it in some way because that would obviously be damaging as well.
Finally, the thing at the bottom is availability. It’s – well, it doesn’t actually matter if it goes down for a couple of hours. But we want it to be available most of the time. So that’s that little pyramid. But what happens when you look at industrial control systems and energy grid and places like that? You’ve got to turn that triangle upside down. The most important thing for them is that the power never ever goes off. So right at the top is availability rather than that being at the bottom, which is – the case is with most businesses.
Then you’re dealing with integrity and finally you’re dealing with confidentiality. So security is turned upside down and that as a consequence means that those organizations, the industrial organizations, the last thing they want to do is reboot the system or apply a patch or change anything because hey, it’s working at the moment. Let’s not mess with it. Because if every time we change something, there’s a chance we might break it and we just got to keep it going all the time.
So security in terms of computer security has been less of a priority for them. Now of course they’re getting targeted. Now of course, there’s this constant pressure of, well, we need some way of using computers to control these systems and you begin to see more and more of these systems being integrated and there are increasing opportunity for hackers to actually get in and potentially mess with these systems as a result and that is the huge challenge, which they face.
To sum up Paul and Graham’s words, energy companies face a dual challenge: they operate legacy control equipment that was not designed with Information Security in mind and is difficult to replace. In addition, they are committed to near-perfect availability that greatly limits their willingness and ability to upgrade or redesign the system to be more secure.
A Quick Response
But to the Ukrainian distribution companies’ credit, their response to the attack was unusually quick and efficient. When the engineers and technicians realized what was happening, they quickly drove down to the distribution substations themselves and closed the breakers manually. In the end, the technicians managed to restore electricity to all the affected areas within only six hours. In a detailed study of the event published by the SANS Institute, an organization that specializes in cybersecurity training, the researchers commend the workers’ performance during the event:
“In many ways, the Ukrainian [companies] and their staff, as well as the involved Ukrainian government members deserve congratulations. This attack was a world first in many ways, and the Ukrainian response was impressive with all aspects considered.”
Who is responsible for the attack against the Ukrainian electricity grid? Normally, security experts tend to assign very little importance to attribution questions: the prevailing view is that in the world of information security, where an attack can literally come from anywhere, the identity of a specific attacker is almost unimportant. In this case, however, this question has an important bearing on the question with which I began the episode: How vulnerable is our power grid?
How Vulnerable Is Our Power Grid?
Superficially, it’s obvious that the attack on the distribution companies was done with relatively simple malware: spyware and attack tools like BlackEnergy and KillDisk, which are easy to find online. Moreover, the opening of the breakers and the disconnection of electricity to consumers – the most important part of the entire attack – was done “manually”, by remotely moving a mouse cursor to press buttons on a screen. Compared to the sophistication demonstrated by Stuxnet, the worm that attacked Iran’s uranium enrichment plant, the attack on Ukraine’s power grid looks like child’s play.
Is it possible to conclude that your average hacker could do the same to the US power grid, and that in the case of a cyber attack, one could expect widespread power outages? Not so fast. While it is true that the attackers’ tools were quite simple, the real sophistication of the attack was in its planning and execution.
First, we should remember that this was an attack on three different distribution companies which, although they share a certain similarity in equipment and operating procedures, are still different companies, each with a unique control system. The attackers probably spent thousands of hours analyzing these distinct systems, building an intelligence picture and identifying their weak points – and it’s unlikely that criminals would invest so much time and effort with no clear financial incentive. Second, writing the firmware that destroyed the serial converters–the bridges between the Ethernet network and the mechanical breakers–requires considerable skill and intimate familiarity with these devices, which are rarely used outside of the industrial world. It is also reasonable to assume that the attackers did not rely solely on luck, but rather tested their code in advance on the same devices to make sure that the new firmware would work at the moment of truth. Again, learning such specific skills and setting up elaborate test benches isn’t something a criminal hacker is likely to do. And finally, the attack itself was without a doubt the orchestrated and well-rehearsed action of a number of trained operators, much like an elite SWAT unit. In other words, this is not the work of an “average hacker,” but an act of a nation-state actor.
And what about the simple malware tools used by the attackers? As some analysts pointed out, BlackEnergy and KillDisk were not central to the attack: the group that attacked the Ukrainian electricity network collected preliminary intelligence, and based on this intelligence decided to use the simplest tools that would do the job – a decision that every engineer and every soldier can stand behind.
Indeed, according to intelligence assessments published in the press, the group responsible for the attack in Ukraine is a Russian hacker collective called The Sandworm Team. In the United States and Europe, Sandworm is known for its espionage campaigns against targets in NATO and Western European governments, but it is difficult to prove its connection to the Russian government. It is also difficult to say for sure what the purpose of the attack itself was: Russia and Ukraine have been at odds for years, especially since Russia annexed the Crimean Peninsula in 2014. The attack may have been aimed at weakening the trust Ukrainians have in their government, or it may have been a signal to the United States and other potential enemies of Russia – ‘Do not mess with us, see what we can do to your electricity grid.’ At the end of the day, what’s important is that it’s probably the work of nation-state actors rather than amateurs – which means that perhaps the power grid is not as vulnerable as it seems at first glance.
The fact that the Ukrainians managed to restore power in just six hours also illustrates another challenge for those who want to attack an enemy country’s electricity grid: the difficulty of predicting the outcome of the attack. When you send airplanes to bomb a power station, you can be sure that if the bombs hit their target there will be only one result: the power grid goes down. But when it comes to a cyber attack, the result is not so certain. If the technicians and engineers on the other side are as talented and quick-thinking as the people of the Ukrainian distribution companies evidently are, they may be able to shake off the attack quickly, and the damage will be less than expected. The attacker must plan the attack in advance, in the most precise way possible, to disable and destroy critical equipment so thoroughly that it cannot be repaired or replaced quickly. That is not easy.
The bottom line, then, is that even though our electricity grid seems fragile – it might not be, at least not as much as you’d think. You need quite a few resources, skills, intelligence and planning abilities to carry out a successful attack, and even then it is difficult to guarantee a successful outcome. And if we consider that the Sandworm Team invested all this effort to bring down just three distribution companies in Ukraine for only six hours, one can assume that a large-scale attack shutting down the electricity system of a vast country such as the United States would be a formidable task, to say the least.
But in spite of everything I’ve told you so far – things can and will change, and change fast.
A cyber attack against the electricity grid is not something countries should underestimate. As the 2003 Northeast blackout has shown us, even a relatively local interruption to the electricity grid of a large metropolitan area – an action of a magnitude not very different from the attack on the electricity grid in Ukraine – can cause considerable chaos and damage.
The reason that countries cannot afford to ignore this danger is that an attack on the electricity system is, functionally, an act against all the vital infrastructures of a modern state. If a power outage lasts long enough and the fuel in backup generators runs out, the infrastructures that depend on electricity begin to fall, one by one, like dominoes. Water pumps stop working, gas pumps at stations stop, and cellular communication disappears. Without water, fuel and communications, health systems and law enforcement cease to function. Banks close and economic activity stops. There is hardly any area of life in a modern state that is not completely dependent on an uninterrupted power supply, so the stakes are simply too high. Damage to the electricity supply is equivalent to a bullet straight into the beating heart of the country: you don’t have to hit anywhere else for all the other organs to cease functioning.
Minimizing The Risk
So what can we do to minimize this terrible risk? This is a big and complex question, to which we should probably devote a separate episode of Malicious Life. Paul Brager, Baker-Hughes Product Security Leader, says that a big part of the solution is to create a common language between people who are responsible for the ongoing operation of the electricity infrastructure, and corresponding information security experts.
[Paul] When you’re a control operator, you’re looking at a certain set of parameters and as long as those parameters are correct and the solution appears to be operating within those parameters, then you’re typically not going to go any further to try to decipher whether or not there’s something else going on. A, they don’t have the time. B, they typically don’t have the people and C, they typically don’t have the budget or expertise to do that on the control side.
So because of that, you’re starting to see a lot more interaction between control operators and people that are in kind of – considered to be the OT environment and traditional IT cyber. But they don’t talk the same language. So there’s a lot of effort. We try to bridge the gap between the two – and make it not an adversarial situation, which is what it has been for many, many years – and make it a lot more cooperative and collaborative. But it’s a work in progress and while that work in progress is happening, the United States again has a lot of enemies and the critical infrastructure components that we use are the same critical infrastructure components that may be used in another nation, even including Russia as a matter of fact, or North Korea, or anywhere else.
Yonatan Striem Amit, Cybereason’s CTO, underscores the importance of information sharing between the different energy companies.
[Yonatan Striem Amit] So politicians today have a huge role in making sure that all commercial ecosystem and government ecosystem work together. In cyber security, one of the key elements that are again and again repeating, is the information gap. Put yourself in the shoes of an executive, of a company that’s just been attacked. You know, your responsibility to your shareholder of maximizing the value is saying, I need to minimize the impacts of an attack potentially by not disclosing it to the public. Trying to keep it secret, to minimize what happened, trying to minimize the impact.
That would be quote-unquote, “responsible” from a very micro, localized set of interests. But from the perspective of the global good, or even the commercial world as a whole, sharing the information with the right people on the technical spectrum – so that the industry as a whole can adapt quickly, can change, can immunize us against these kinds of threats as best as possible – is in everyone’s best interest. Politicians have to create an ecosystem in which this kind of information sharing is not only encouraged, but even mandatory.
[Ran] So they have the interest of the state and they have to make the hacked company, the private company, want to disclose that information to everybody? It’s not an easy job.
[Yonatan] It is very challenging. One of the best ways to do that is to create closed forums for industry experts and industry companies, where sharing is kind of safe from the public outcry, in a sense. The other is of course creating an ecosystem where you have to share that information as part of public companies’ requirements to the public.
[Ran] So that kind of secure forums, this is the place for just to get – to clarify what you’re saying, this is a secured place for companies or like IT experts within the companies to share information of their being hacked, or maybe vulnerability they discovered, so that other companies would be aware of those vulnerabilities for example, right?
[Yonatan] Precisely. So in the States, we have a lot of what’s called ISACs – information sharing groups. We have the financial services ISAC, we have the healthcare ISAC. These are self-created information sharing bodies within the industry. For example, in the financial services ISAC, FS-ISAC, people on the security staff within companies feel free to share with each other information about attacks, knowing that in order to get into these forums, you have to be a security practitioner in one of a few select banks. So, being among peers from other financial services companies, they feel safer to share.
Making this a government-based requirement, and making sure that security experts from the industry are invited in, will have a very impactful outcome on our ability as a whole industry to adapt quickly to cyber threats on information systems.
These two initiatives – improving communications between operations and IT, and improving the flow of information between companies and organizations – are probably only a fraction of the actions countries need to take to ensure their power grids are safe from cyber attacks. In the meantime, it seems that such an attack requires a relatively high level of skill and sophistication – but the tools and software used by the attackers are constantly improving. Will we be able to secure our electricity grid in time?
Uh … Nate? Nate? Turn on the light, Nate, it’s not funny. I can’t see anything. Nate? is there anyone here? Nate! …
That’s it! Thank you for listening, and thanks to all of you who emailed and tweeted to say how much you enjoy the podcast – Lenny, Mike Waller, Paavo, Daniella Ristovski and many others. Thank you very much. Visit Malicious-DOT-Life to subscribe to the podcast, read full transcripts and download other episodes. If you like the show, leave us a 5-star review on iTunes and we’ll send you a Malicious Life t-shirt. You can follow us on Twitter at @MaliciousLife; my personal Twitter handle is @ranlevi and you can write to me at [email protected].
Malicious Life is produced by P.I.Media. Thanks again to Cybereason for underwriting the podcast. Learn more at Cybereason.com. Bye Bye.