Season 2 - The Invisible War / Episode 19
Those of us who have experienced a cybercrime know the feelings of frustration and helplessness that come along with it. A hacker could be halfway across the world when they attack you, and you might have no way of figuring out who it was or catching them even if you could. But is there really nothing we can do?
Born in Israel in 1975, Ran studied Electrical Engineering at the Technion – Israel Institute of Technology, and worked as an electronics engineer and programmer for several high-tech companies in Israel.
In 2007, he created the popular Israeli podcast, Making History, with over 10 million downloads as of August 2017.
Author of 3 books (all in Hebrew): Perpetuum Mobile: About the history of Perpetual Motion Machines; The Little University of Science: A book about all of Science (well, the important bits, anyway) in bite-sized chunks; Battle of Minds: About the history of computer malware.
Yonatan Striem-Amit, CTO and Co-Founder of Cybereason, is a machine learning, big data analytics and visualization technology expert, with over a decade of experience applying analytics to security in the Israeli Defense Forces and Israeli Governmental Agencies. Prior to founding Cybereason, Yonatan headed the development for Watchdox, a leading DRM and SaaS security startup.
When I was 15, someone stole my bicycle. One day I opened the garage door – and the bike was not there. Have you ever been a victim of theft or burglary? If so, you can probably identify with my anger, disappointment and humiliation at that moment. Someone just took my bike and left, and there’s nothing I could do to stop it.
Hello and welcome to Malicious Life, I am Ran Levi. Those who have experienced a cyber crime must also know these feelings of frustration. A hacker could be halfway across the world when they attack you, and you might have no way of figuring out who it was, or of catching them even if you could. So frustrating.
But is there really nothing we can do? Are we really so helpless against cybercrime? Don’t be so sure.
In early 2011, Georgia’s Computer Emergency Response Team (CERT) discovered an attempted cyber attack against government and military officials in their country. Someone had broken into Georgian news sites and planted malicious code in them: an attack known as a drive-by download. The code was injected into carefully selected pages, mostly articles about the Georgian army and its links with NATO. While users read these articles, malware was installed on their computers. It scanned the hard disk for documents containing the word NATO and, if such documents were found, sent them to its operators.
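The search-and-collect pattern described above (scan the disk for a keyword, gather matching documents) can be sketched in a few lines of Python. This is purely illustrative: the actual malware’s implementation is not public, and the simple directory walk below is only an assumption about how such a keyword scan might look.

```python
import os

KEYWORD = "NATO"  # the keyword the Georgian malware reportedly searched for

def find_keyword_documents(root, keyword=KEYWORD):
    """Walk a directory tree and return paths of files containing the keyword.

    Illustrative sketch only; a real implant would also filter by file type
    and exfiltrate the matches to its command-and-control server.
    """
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    if keyword in f.read():
                        matches.append(path)
            except OSError:
                continue  # unreadable file: skip it
    return matches
```

The exfiltration step is what the Georgian CERT later turned against the attacker: the malware dutifully collected and uploaded whatever matched the keyword, including the booby-trapped zip file.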
The Georgians had no doubt that this was a targeted attack. Their investigation found that only about 390 computers were affected by the malware, and about 70 percent of them belonged to Georgian politicians, officers and government officials. The Georgians also had no doubt about who was behind this espionage attempt: Russia. Russia and Georgia share a complex history, and only a few years earlier, in 2008, they fought a brief but bloody war.
Although they were sure of the identity of the attackers, there was nothing the Georgians could do about it. Diplomatic relations with Russia were frigid, and even if the identity of the hacker or the group behind the attack were discovered, the Russians would not hand them over. In fact, Russian law prohibits the extradition of Russian citizens to foreign countries. I can almost feel the Georgians’ helplessness and frustration myself.
But then the Georgian CERT decided to do something unusual. They set up a lab computer infected with the Russian malicious code, and planted in it a file called “NATO-Georgia agreement.zip”. As the researchers had hoped, the malware discovered the interesting file and sent it to its operators. But as you might have already guessed, instead of a juicy document about military cooperation, the zip file contained malicious software. When the Russian hacker opened the file on his computer, the Georgian software took over. CERT researchers scanned the Russian hacker’s computer, took screenshots and even watched the hacker in real time as he was writing malicious code and corresponding with other hackers. He likes to use Total Commander, if you’re curious about that kind of stuff. They even turned on the computer’s webcam and took photos and videos of the man sitting in front of his monitor.
In October 2012, the Georgian researchers combined all the information about the Russian hacker into a detailed 27-page PDF document and published it on the Internet. The document contained all the identifying details that the Georgians had managed to glean about the hacker – including his pictures – and his clear connections to the Russian government. This kind of exposure is usually referred to as Doxing.
Why did the Georgians dox the Russian hacker? The answer is quite clear. The Georgians knew that they had no way of putting their hands on him, so they decided to take a different course of action, one that we all know quite well in social networks: shaming. If they couldn’t get the hacker or Russian government to pay for their actions, at the very least they could shame them publicly and possibly cause other countries to be suspicious of Russia in the future. I have no doubt that this doxing has reduced, at least slightly, the sense of insult and frustration caused by the break-in.
So maybe there is something to be done against cyber criminals and malicious hackers. Perhaps organizations and companies do not have to sit idly by, building complicated defenses around themselves and hoping they will not be attacked. Perhaps, as the cliche says, the best defense is a good offense. Maybe the solution is to hack back.
Of course, there’s nothing new under the sun. The hack back is an ancient strategy in technological terms. Before we dive into the debate about this idea and its place in the world of modern information security, though, let’s go back a few decades – to one of the most fascinating historical examples of such a move.
June, 1982, deep in the heart of the Siberian wilderness: NORAD monitors–those built by the U.S. and Canada to watch for nuclear activity from the USSR–receive a signal indicating a 3-kiloton blast from inside Russia’s borders. Word makes its way to the National Security Council and, immediately, fear sets in. Fear, but also confusion–those same monitors failed to pick up any trace of the electromagnetic pulse that should have accompanied such a blast.
Thomas C. Reed, then vice chairman of the National Commission on Strategic Forces, remembers that day well. In his autobiography, he recounts the warning, the confusion, and the moment when a mild-mannered colleague, Gus Weiss, shared news that was equally comforting and mystifying. “Weiss came down the hall to tell his fellow NSC staffers not to worry,” he writes. “It took him another twenty years to tell me why.”
There’s a reason why only a handful of people on the face of the Earth knew exactly what happened on that day in the summer of 1982, and why it took twenty years for the rest of us to find out. That explosion, it turns out, was the least interesting part of what Ronald Reagan went on to deem the greatest spy story of the century.
When then-President of France, Francois Mitterrand, met with President Reagan at the G-7 Summit in Ottawa on June 19th, 1981, it wasn’t for a chat amongst buddies. Instead, he revealed an intelligence operation–later deemed the “Farewell Dossier”–that would go on to prove one of the finishing blows to Soviet Russia.
The subject of this discussion was a man named Vladimir Vetrov. Nearing 50, Vetrov spent much of his career as a ‘Line X’ officer in the KGB’s ‘Directorate T’ program, which was responsible for stealing technology secrets from the West. Having moved up in the organization, he was now the man in charge of evaluating all Directorate T intel and passing it on to the proper authorities–in other words, a particularly effective position for a potential defector to cause major damage.
Suddenly, a president whose legacy would be defined by his success in finishing the USSR had the names of “more than 200 Line X officers stationed in 10 KGB rezidents in the West, along with more than 100 leads to Line X recruitments,” as well as some 4,000 highly classified documents outlining Russia’s master strategy for keeping pace with the U.S. during the Cold War.
What the documents revealed was staggering: the U.S. had effectively been fighting a Cold War arms race against itself this whole time.
It turned out that Russia’s technical abilities were years behind America’s, but their spying operations were as advanced as any the world had ever seen. Without detection, they’d successfully planted hundreds of agents throughout North America and Europe, supporting the USSR’s scientific advancements through stolen tech from the West. Most insidious of all was a master plan thought up by Leonid Brezhnev–leader of the USSR at the time–to take advantage of Richard Nixon’s diplomatic “detente”. Detente was a general agreement designed to ease tensions between the two powers by encouraging greater discourse, nuclear negotiations and joint research projects, but the KGB used this openness as an opportunity to plant hundreds of Line X officers in supposedly “friendly” research teams sent to the U.S. About one-third of a Russian agricultural delegation in the U.S., for instance, was made up of moles. Another particularly cheeky incident involved a Soviet guest to the Boeing company, who applied adhesive to his shoes in order to obtain metal samples.
Vetrov’s intel proved what some feared most. The U.S. Office of the Under Secretary of Defense released a report that concluded “Western nations [were] thus subsidizing the Soviet military buildup,” and that “targets include[d] defense contractors, manufacturers, foreign trading firms, academic institutions, and electronic databases.” Bad news as it was, this was also the information that could just about take down an already economically crippled, military-dependent USSR. Reagan could now simply purge every Red from within American ranks, depriving the Soviets of the main source of intel that had allowed them to keep pace for all these years. Sounds great, right?
Or…America could have a little fun with their sparring partner. So thought Gus Weiss: a modest-by-nature, economist-by-trade policy advisor on the National Security Council, who came up with a more creative plan that would mark one of the single earliest, most significant instances of cyber warfare the world has ever known. His idea, in his own words? “Why not help the Soviets with their shopping? Now that we know what they want, we can help them get it.”
The plan was set: the U.S. would continue feeding information and technology to the Soviets. Of course, that information would be purposely incorrect and misleading: it would pass security tests but fail in operation.
It was a foolproof plan. In revisiting the affair years later, Weiss himself explained the genius of the idea: “If some double agent told the KGB the Americans were alert to Line X and were interfering with their collection by subverting, if not sabotaging, the effort, I believed the United States still could not lose. The Soviets, being a suspicious lot, would be likely to question and reject everything Line X collected. If so, this would be a rarity in the world of espionage, an operation that would succeed even if compromised.”
One instance of this plan’s success: the control software built for a massive, physical and economic juggernaut of a pipeline running across Russia “that was to run the pumps, turbines and valves was programmed to go haywire, to reset pump speeds and valve settings to produce pressures far beyond those acceptable to the pipeline joints and welds.” The result, in June 1982: “the most monumental non-nuclear explosion and fire ever seen from space.”
It’s still a matter of debate as to why Vladimir Vetrov did what he did. Many American sources claim he was driven by ideology. Serguei Kostine, author of one of the few books written about Vetrov, believed it was revenge against a corrupt KGB that didn’t properly recognize his talents and achievements. In the end, Vetrov was arrested for an unrelated crime, during the processing of which his espionage activities were discovered, and he was executed in 1983. The CIA instituted protective countermeasures to keep his secrets alive.
But even without Vetrov around anymore, the impact of the Farewell Dossier would remain one of the lesser-known, yet key factors in the demise of the Soviet Union. What went on behind Russian closed doors following the pipeline incident is still largely unknown, but either way, for the rest of its lifespan, the USSR would either be plagued with faulty technology, or massive and unavoidable paranoia of such. Within the decade, helped by a sneaky use of early malware technology, the United States would formally defeat the world’s second-greatest power, a beast of its own creation.
The Farewell Dossier incident is a fairly early example of a hack back. Seeding intentional failures in engineering documents is a rather complex undertaking – but as Yonatan Striem-Amit, Chief Technology Officer and Co-Founder of Cybereason, says, nowadays hacking back is much simpler.
“[Yonatan] So another option is, there’– another option is called hack back, and there’s a great debate in the security, defenders community on the legitimacy of that concept. What is hack back? Imagine for example that I put on my own machine, a document that I’ll, you know, very deliciously call, you know, My Trade Secrets or whatever –
[Ran] The Secret Recipe for –
[Yonatan] Crab – the Crab Patty, the Secret Recipe for the Crab Patty on my machine, but this document is actually a weaponized exploited document that when run on a legitimate Windows, would, for example, actually exploit a machine to create a remote control, the same kind of tool that hackers are using against us. It doesn’t necessarily need to be a very sophisticated unknown zero-day. It could exploit, you know, some relatively recent – something that has already been patched, but if somebody is not very careful, they may be vulnerable to it.”
If the hack back is not such a complex technological challenge, we can expect companies to adopt it as common practice – and indeed, there are recent examples to be found.
For example, in August 2017, ProtonMail, an email services company, identified someone trying to phish its customers by luring them to a fake login page that mimicked its own. The fake login page was hosted on the servers of a university in Indonesia. ProtonMail employees hacked the servers and deleted the malicious page. On its Twitter account, ProtonMail updated its customers about the phishing campaign, and even tweeted-
“We also hacked the phishing site so the link is down now.”
On the face of it, ProtonMail seems to have done something good for its users: it hacked back against its aggressors, preventing an attack that could have hurt many unsuspecting customers. But if ProtonMail did such a good job, why did the company’s management decide, only a few hours after the announcement, to erase the tweet in question, instruct its employees not to answer questions from journalists and generally behave as if the whole thing had never happened?
There is a good reason for that: under current law, hacking back is illegal. The Computer Fraud and Abuse Act explicitly states that hacking into someone else’s computer is a criminal offense. And yes, even if it’s against someone who just broke into your network.
“[Ran] What’s the debate? I mean, it sounds like a great idea potentially.
[Yonatan] Well, there’s always a question of where do you draw the line? How much active do you go against that person, and more importantly, how can you assure that the hacker isn’t using some third party victim in the process here. I’ll give you an example. Suppose the hacker isn’t using – isn’t coming to me through the internet regularly, he’s coming to me through a third party compromise, so he’s attacking –
[Ran] He took over somebody else’s machine and he’s using it as an attack vector.
[Yonatan] Exactly. So he’s using somebody else’s machine as an attack vector and a source of attack to exploit you know, my company’s machine. By hacking back on to that intermediary machine, I’m actually committing a crime against an unsuspecting innocent person.
The fact that he is already a victim, doesn’t give me the right to attack him back and make – doesn’t get him to the same position we were sort of get go. So the collateral damage potential in hack back is huge.”
The collateral damage that Yonatan Striem-Amit speaks of is one of the toughest problems facing anyone planning a hack back – even a huge company with enormous resources.
NJrat and NJw0rm are two malware programs that took over millions of Windows computers around the world in 2014. The hackers suspected of operating these botnets were Mohamed Benabdellah and Naser Al Mutairi, both from countries in the Persian Gulf. Microsoft is known for its proactive approach to cybercriminals: over the years, the company has collaborated with law enforcement agencies to take over botnets and, in some cases, even offered rewards to those who provide incriminating information about malware authors. It’s no wonder, then, that Microsoft filed a civil suit against Benabdellah and Al Mutairi in court.
But Microsoft did not stop there. It asked the court to approve its own takeover of the computer infrastructure that the two used to manage their botnets. This infrastructure belonged to an American company called Vitalwerks Internet Solutions, which does business under the name No-IP. Microsoft asked the court to authorize routing all network traffic destined for No-IP’s servers to its own servers – thereby taking control of the bots away from Benabdellah and Al Mutairi.
Now, all the facts indicate that No-IP is one of those companies that walk the thin line between the forbidden and the permitted in the world of cybercrime. The company itself takes no part in cyber criminal activity – but as Microsoft wrote in its affidavit to the court, and as Eugene Kaspersky confirmed in a post on his blog, it was an open secret that many a hacker made extensive use of No-IP’s services. No-IP offers a free service called Dynamic DNS, which allows a hacker to frequently change the address of the command & control servers of his botnet, making it very difficult for law enforcement agencies to take over and shut it down. Indeed, as Kaspersky pointed out:
“A simple search via the Virustotal scanning engine confirms this fact with a cold hard figure: a total of 4.5 million unique malware samples sprout from No-IP.”
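Why does dynamic DNS make takedowns so hard? Because the bot hard-codes only a hostname, never an IP address, and resolves it fresh on every connection; the operator can repoint the DNS record at a new server in minutes. Here is a minimal sketch of that resolution step. The hostname is made up, and the resolver function is injectable purely so the logic can be demonstrated without real network traffic:

```python
import socket

# Hypothetical dynamic-DNS hostname; real botnets used names under
# No-IP domains such as no-ip.biz subdomains.
C2_HOSTNAME = "update-server.ddns.example"

def resolve_c2(hostname=C2_HOSTNAME, resolver=socket.gethostbyname):
    """Look up the command-and-control address at connection time.

    The bot never stores an IP address. Seizing the server's current IP
    achieves little: the operator simply updates the dynamic DNS record,
    and every bot follows it on its next lookup.
    """
    return resolver(hostname)
```

This is exactly why Microsoft’s takeover operated at the DNS layer: by redirecting the No-IP domains themselves, it cut the bots off at the only address they knew.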
In its official announcement on the company’s blog, Microsoft wrote,
“Our research revealed that out of all Dynamic DNS providers, No-IP domains are used 93 percent of the time for [NJrat and NJw0rm] infections, which are the most prevalent among the 245 different types of malware currently exploiting No-IP domains. Microsoft has seen more than 7.4 million [NJrat and NJw0rm] detections over the past 12 months, which doesn’t account for detections by other anti-virus providers. Despite numerous reports by the security community on No-IP domain abuse, the company has not taken sufficient steps to correct, remedy, prevent or control the abuse or help keep its domains safe from malicious activity.”
The evidence presented by Microsoft convinced the court that No-IP cooperated with botnet operators – or at least allowed them to operate freely. The court allowed Microsoft to seize 23 domains that belonged to No-IP and redirect their Internet traffic to its servers.
Richard Boscovich, Assistant General Counsel of Microsoft’s Digital Crimes Unit, was understandably satisfied:
“Playing offense against cybercriminals is what drives me and everyone here at the Microsoft Digital Crimes Unit. Today, Microsoft has upped the ante against global cybercrime, taking legal action to clean up malware and help ensure customers stay safer online.”
But the day after the seizure, a slightly less joyful picture emerged. No-IP, it turns out, had four million customers – and many of them, perhaps even most of them, were perfectly lawful users. It turns out that much like any other technology, Dynamic DNS also has legitimate uses: for example, connecting security cameras to the Internet, hosting sites on home servers, and running game servers. Microsoft’s takeover of No-IP’s domains had affected many of these customers without any warning. No-IP was flooded with complaints from angry customers, and posted the following statement on its blog –
“We want to update all our loyal customers about the service outages that many of you are experiencing today. It is not a technical issue. This morning, Microsoft served a federal court order and seized 22 of our most commonly used domains because they claimed that some of the subdomains have been abused by creators of malware. We were very surprised by this. We have a long history of proactively working with other companies when cases of alleged malicious activity have been reported to us. Unfortunately, Microsoft never contacted us or asked us to block any subdomains, even though we have an open line of communication with Microsoft corporate executives.”
Whether or not No-IP is really as innocent as it claims, it is quite clear that despite Microsoft’s best intentions, innocent users suffered real damage. It is the same sort of collateral damage I mentioned earlier, and the reason the law explicitly prohibits hack back operations. Just nine days after seizing No-IP’s domain names, Microsoft was forced to publicly apologize to No-IP and its customers and return the addresses it had seized.
The bottom line, then, is that hacking back is dangerous for the same reason that a gunfight between police officers and criminals is dangerous for innocent bystanders in the street. In addition, those who choose such an active defense put themselves at legal risk since, under existing law, hacking a computer is an offense, no matter the motive. That is why Sam Curry, Cybereason’s Chief Product Officer, is not enthusiastic about the strategy, to say the least.
“[Sam] Let’s be clear, there’s a cold war online in a very cyber turbulent space and it is multipolar. And if we start to encourage hack back, then every nation – not only every nation, but every company, every organization, and private armies will start to emerge. That becomes a world that, frankly, could threaten it for all of us.
So I don’t want to see a world emerge where everyone feels, you know, it’s like a Mexican standoff, everyone’s got massive fire power pointed at everyone and someone pulls the trigger. That’s not a good world to do business in and live in, and improve the human condition. So I would like to say that we need to have our laws, have a general – just as we don’t approve the vigilantism, I don’t believe personally that we should condone hack back by companies. No company is going to stand up against the nation state and having our multinationals bang it out with each other is a very daunting thing.”
Or as one commentator wrote in Bruce Schneier’s blog –
“Hacking back is like claiming that the problem with the Wild West in the 1800s was that it simply wasn’t wild enough.”
But despite these fears, hacking back is a temptation that many organizations are unable to resist. No one likes to sit and wait for someone else to attack them; it is almost against human nature. Michael Sulmeyer, director of the Cyber Security Project at the Harvard Kennedy School’s Belfer Center, told a reporter for The Daily Beast-
“I’m pretty convinced that the current arrangement of merely asserting that hacking back is illegal, so therefore do nothing—I do not think that is a sustainable approach. Something’s going to have to change.”
There are a great number of people who agree with these words. What’s more, the NSA and the CIA are usually in no hurry to come to the aid of small and medium-sized businesses since, understandably, they are reluctant to expose their sophisticated cyber tools just to solve a business crime. That’s why several companies offer hack back services: hacking the hackers to delete stolen sensitive information, or launching DDoS attacks to slow or deter attackers. It’s hard to know how widespread this strategy really is, because hardly anyone will admit to using it.
That is the background against which Republican Representative Tom Graves of Georgia proposed, in 2017, a bill to Congress called the Active Cyber Defense Certainty Act – AKA ACDC – that would allow victims of a cyber attack to attack back without legal consequences. This move comes in the wake of a growing demand by many companies and organizations that the US government do something to help them against the growing tsunami of cyber industrial espionage: particularly Chinese and Russian hackers stealing American companies’ trade secrets and selling them to their foreign competitors.
Tom Graves’ bill is still in the process of legislation, and so we still have some time before we know for sure if the ACDC act is a good idea, or as the song goes – just another highway to hell. Pun intended. Yeah, my editor says I’ve reached a new level of corny, and he’s probably right.
And perhaps there is also a middle way. One of the proposals raised in recent years is to establish a government body that would be available to American businesses and would have the authority to execute a hack back on their behalf. For example, a company that detects a breach of its network could contact this body and ask it to conduct an investigation – one that would also include hacking back into the attackers’ computers if necessary. If the investigation points to clear culprits, the US government could then decide on economic sanctions against those accused of industrial espionage – thereby making them pay the full price for their crimes, as opposed to what happens today. This governmental response would probably be agonizingly slow in cyber security terms, but the hypothetical body would operate with maximum transparency and with open-source, civilian tools, keeping it completely separate from the government’s cyber-intelligence agencies. It could, in theory, even be financed by the civilian companies that want such a defensive umbrella. Such an organization could therefore serve the market much as a private company would, while remaining within the legal boundaries of the U.S. government. It remains a far-out idea today, though, so we’ll have to wait and see whether such a proposal could really work.
Will the hack back become a legitimate tool in the world of information security, and perhaps even a government service? Time will tell. What is certain is that retaliation is a complex strategy in cyber security, full of pitfalls and traps – both legally and ethically.