Andrew Ginter: A 40-Year-Old Backdoor [ML B-Side]

Ken Thompson is a legendary computer scientist who also made a seminal contribution to computer security in 1983 when he described a nifty hack that could allow an attacker to plant almost undetectable malicious code inside a C compiler. Surprisingly, it turns out a very similar hack was also used in the SolarWinds attack.

Hosted By

Ran Levi

Exec. Editor @ PI Media

Born in Israel in 1975, Ran studied Electrical Engineering at the Technion Institute of Technology, and worked as an electronics engineer and programmer for several high-tech companies in Israel.
In 2007, he created the popular Israeli podcast, Making History, with over 15 million downloads as of July 2022.
Author of 3 books (all in Hebrew): Perpetuum Mobile: About the history of Perpetual Motion Machines; The Little University of Science: A book about all of Science (well, the important bits, anyway) in bite-sized chunks; Battle of Minds: About the history of computer malware.

Special Guest

Andrew Ginter

VP of Industrial Security @ Waterfall Security Solutions

Andrew Ginter is VP of Industrial Security at Waterfall Security Solutions. He has managed the development of commercial products for computer networking, industrial control systems and industrial cyber security for vendors such as Develcon, Agilent Technologies and Industrial Defender.

Episode Transcript:

Transcription edited by Suki T

[Andrew] We now have malware, and there’s no evidence of it anywhere.

[Ran] Hi, and welcome to Malicious Life’s B-Sides. I’m Ran Levi.
Ken Thompson is a legendary computer scientist, and when I say legendary, I’m not exaggerating even one bit. His list of accomplishments in the early days of computing and the sheer number of things he invented that we are still using to this day is mind-blowing. Let’s see: he created Unix with Dennis Ritchie, developed the concepts of hierarchical file systems, computer processes, the command line interpreter, pipes and device files, he created the B programming language, which Ritchie built off of to create C, and brought regular expressions into everyday computing. Although I’m not sure we should give him any prizes for that one.
Ken Thompson is so legendary that in his acceptance speech for the 1983 Turing Award, which he and Dennis Ritchie received for their work on operating systems, he presented a type of vulnerability, a type of backdoor, which is considered a seminal work in computer security. This 40-year-old backdoor will be the focus of our episode today. And if you’re asking yourselves, wait, what, a 40-year-old backdoor? Why should I care about a backdoor described when even PCs weren’t a thing yet? Then I’ve got just two words for you: SolarWinds.
Our guest today is Andrew Ginter, Vice President of Industrial Security at Waterfall Security Solutions, author of three books on cybersecurity and the co-host of the Industrial Security Podcast together with our very own Nate Nelson. Full disclosure, the Industrial Security Podcast is also produced by my company, PI Media. If you’re into industrial cybersecurity, go check it out. Here are Andrew and Nate talking about Thompson’s hack.
Enjoy the episode.

[Nate] Andrew, when did you first hear about the paper that we’re going to be discussing today?

[Andrew] Well, I came across this in my youth. I mean, if you dig deep into it, the paper was published in ’83. But in the original paper, there’s a reference at the bottom saying this was inspired by another paper that the author, Ken Thompson, read years earlier and couldn’t find anymore. It turns out that paper, you know, other people have dug into it, was written by Paul Karger and Roger Schell back on the Multics project. And it turns out I was working on the Multics project a long time ago. You know, not while the original research was going on, but when it was sort of in maintenance mode. The paper is about compilers. Curiously, I was maintaining Fortran and PL/I compilers on the Multics operating system for the Multics project back in my youth.

[Nate] So if this was written 40 years ago about systems that we don’t really use anymore, why has it stuck around this long or why are we talking about it on a podcast today?

[Andrew] I mean, I was dimly aware of it all these years. When I reread it, what struck me was, in a sense, how very relevant it was. You know, again, I do industrial cybersecurity in the critical infrastructure space, but even in the IT space, the hot buzzword for the last 12 months has been supply chain attacks. And I’m looking at this going, you know, this is probably the first widely published description of a supply chain attack. This is the original.

[Nate] Personally, I’ve read this thing like three times now. And admittedly, I still don’t think I 100 percent get it. So, Andrew, can you in as simple terms as possible for listeners out there describe the exact logic of this paper?

[Andrew] Sure. And it’s a little complicated. So let me start at the very beginning. We’re talking about compilers. A compiler is a computer program that translates stuff that looks vaguely like English into stuff that the machine can understand. So the stuff that looks like English might be something like, let’s say someone’s trying to log in, and someone had to write some code somewhere that says: if the password the user entered equals the stored password, then let them log in. Else, print an error message, bad password, and return, give up. That kind of pseudo-English logic is what programming languages look like.
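The pseudo-English check Andrew is describing can be sketched as a toy simulation. Here it’s in Python rather than the C of the real Unix login command, and all of the names, accounts and passwords are invented for illustration:

```python
def login(username, entered_password, stored_passwords):
    # The honest check: does what the user typed match the stored password?
    if entered_password == stored_passwords.get(username):
        return "logged in"
    return "bad password"

# Toy account database, invented for this sketch.
accounts = {"alice": "hunter2"}

print(login("alice", "hunter2", accounts))  # logged in
print(login("alice", "wrong", accounts))    # bad password
```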
And, you know, Ken Thompson and Dennis Ritchie invented the C programming language, which is a predecessor of C++. Both of them are still in widespread use. But, you know, a compiler is a program that converts, let’s call it the C programming language, into machine code. And machine code is really stupid. It’s like: add a value to one of the CPU registers, fetch the memory location that that register is the address of, compare the value that you fetched. It’s really low-level stuff. It’s almost impossible to understand. So, you know, this is why people write programs in what are called high-level languages. C is not even a very high-level language. But this is the example. The first concept you’ve got to wrap your head around is that a compiler translates English-like stuff into machine language.

[Nate] Fine. So now that we’ve got compilers down, tell me about the actual hack.

[Andrew] The first example of the hack that he gave was this. The C programming language is pseudo-English. The compiler translates C into machine code. But the compiler itself has to be written in some language. And what, you know, Ken and Dennis did was they initially had to write the C compiler in a different language. But once they got enough of that compiler running, they translated the original into C. And now the C compiler is itself written in the C programming language. So if you want to add features to the programming language, you change the source code and use the old compiler to recompile the new source code.
And then you’ve got a more powerful compiler and you can iterate like this, making the compiler more and more powerful over time. So the key concept here is that the C compiler is itself written in the C programming language. And here’s what they said. Here’s the hack. They said, OK, imagine you’ve got the source code for the C compiler sitting in front of you and you reach into the C compiler and you find where it’s compiling a function. And if you see that it’s compiling a function, the login function in the login.c file, which is the login command that the Unix operating system used to log people in, then insert this code instead of saying if the user’s password equals the stored password, say if the user’s password equals the stored password or the user’s password equals super secret backdoor, then log the user in, else print bad password. So they’re saying insert that code into the compiler and compile that. So now you’ve got a new executable of the compiler. And the executable, when it compiles the login command, inserts this backdoor.
Now, if you know the super secret backdoor password, you can log in as anybody. You give anybody’s username and it says, what’s the password? And you say super secret backdoor and it lets you in. Here’s the thing. If you look at the source code for the login command, you don’t see the backdoor. It’s in the binary. It’s in the executable because the compiler put it there. But the source code does not contain the backdoor.
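In effect, the binary produced by the tampered compiler behaves as if the login source contained an extra clause, even though that clause appears nowhere in the source. Here is a toy Python sketch of what the compiled code effectively does; the magic password and names are invented:

```python
BACKDOOR = "super secret backdoor"  # invented magic password

def compiled_login(username, entered_password, stored_passwords):
    # What the compiled binary effectively does. The source code only
    # contains the honest comparison; the compiler added the "or" clause.
    if (entered_password == stored_passwords.get(username)
            or entered_password == BACKDOOR):
        return "logged in"
    return "bad password"

accounts = {"alice": "hunter2"}
print(compiled_login("alice", "super secret backdoor", accounts))  # logged in
print(compiled_login("alice", "wrong", accounts))                  # bad password
```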

[Nate] OK, so we have a backdoor built into the compiler. It’s now in the executable. Evidence of it isn’t in the login program’s source code, but it would theoretically be in the source code of the compiler.

[Andrew] That’s right. So there’s a backdoor in the compiler. The backdoor has been sort of transferred into the executable for the login command. It’s in the executable. If you actually go and read the instructions in the login command, try and decode the machine instructions, you can actually see it there. But nobody does that. It’s really hard to do that.

[Nate] OK, so we’ve got software. It’s got a backdoor. You just can’t find that backdoor in the source code.

[Andrew] That’s right. And then it gets complicated.

[Nate] For a second there, I thought that it was going to be too easy.

[Andrew] Yes. OK, so Ken said, OK, you can still see the backdoor in the C compiler’s source code. If you know where to look, you see the backdoor. So how about this? He said, let’s add more code to the C compiler that says: when the C compiler is compiling itself, because the C compiler is written in C, you use the C compiler to compile itself when you change the programming language. So let’s add some code to the compiler that says: when I’m compiling myself and I see the function that says compile a function, insert the code that says if it’s the login function, blah, blah, blah, and insert the code that says if it’s the compile-the-function function, insert myself.
So now you’ve got code in the C compiler that says: when I recompile the C compiler, put myself into the C compiler. And if the C compiler compiles the login code, put the backdoor into the login code. OK, that seems a little excessive. Why are we doing all that? He says, OK, here’s the punchline. Go back to the C compiler source code. You’ve compiled that source code, you’ve got the nasty in the C compiler, and every time it compiles the login command, it inserts the nasty into the login command. Go back to the C compiler source code and erase the nasty. The source code for all that stuff is gone from the C compiler. Now recompile the C compiler with the nasty C compiler. The nasty C compiler will compile that source code and say, oh, that’s a C compiler, and it will insert the nasty into the executable. Now the C compiler source code does not have the nasty in it, does not have the backdoor, does not have anything. It’s just a C compiler. The login source code does not have a nasty in it. It’s just the login source code. But every time you produce the login executable, it’s got the backdoor, because the C compiler sticks it in there. And every time you recompile the C compiler, the nasty sticks itself back into the new C compiler executable.
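The two-part trick Andrew describes can be simulated in miniature. In this toy Python sketch, “compiling” is just copying text into a “binary”, and a marker comment stands in for the real hack’s quine-like payload, which carries its own full source along with it. Everything here is invented for illustration:

```python
PAYLOAD = ' or entered == "super secret backdoor"'

def tampered_compile(source):
    # Toy model of Thompson's tampered compiler. "Compiling" here is
    # just copying the source text into a "binary" string.
    binary = source
    if "def login(" in source:
        # Case 1: compiling the login program -> slip in the backdoor.
        binary = binary.replace("entered == stored",
                                "entered == stored" + PAYLOAD)
    if "def compile(" in source:
        # Case 2: compiling a (clean) compiler -> re-install the
        # tampering into the new compiler binary. The real hack inserts
        # its own full code here (a quine); a marker stands in for it.
        binary += "\n# [tampering logic re-inserted here]"
    return binary

clean_login_src = "def login(entered, stored):\n    return entered == stored"
clean_compiler_src = "def compile(source):\n    return source"

print(tampered_compile(clean_login_src))     # backdoor appears in the "binary"
print(tampered_compile(clean_compiler_src))  # tampering propagates itself
```

Note that neither piece of source code contains the backdoor, yet every “binary” the tampered compiler produces does.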

[Nate] But if the evidence of the backdoor isn’t in the executable code and it’s not in the compiler code, then where is the evidence of it?

[Andrew] It’s completely invisible if you look at the source code. The only way to see it is to look at the machine code for the C compiler. And this is like megabytes of machine code. Ken, bless his heart, said, you know, nobody picks through the machine code byte by byte. What they do is they use a tool called a disassembler, which takes the machine code and turns it back into something vaguely like English.

[Nate] And that doesn’t work because?

[Andrew] Someone had to compile the disassembler too. So while you’re at it, stick some code into the C compiler that says: if I’m compiling the disassembler, make the disassembler hide any of this nasty stuff when it disassembles the C compiler.

[Ran] OK, I admit it took me a good few minutes of hard thinking to wrap my head around the hack that Andrew just described. So just in case any of you, like myself, didn’t quite get it the first time around, here’s another way of explaining the same idea.
The whole premise of this hack revolves around the notion of self-replication, machines that can create new copies of themselves. A C compiler compiling a new C compiler is one such self-replicating machine, you might say. Or consider a 3D printer that takes in a drawing and outputs an object. You could, in theory, provide the printer with a drawing that tells it how to print a new 3D printer, which makes the 3D printer a self-replicating machine too.
Say that I have such a printer. I open its back cover and attach a tiny module that sprays whatever the machine prints with, say, red paint. That’s our malicious code. Now whatever is getting printed will be sprayed with red paint. Since the 3D printer is a self-replicating machine, I want its future generations to also have that red paint module. So I take the drawing that specifies how to build a 3D printer and add to it a drawing of the new module. Now every new 3D printer created will have the malicious module built into it. But here’s the problem. Anyone who reads the drawing can easily see that I’ve added a malicious module to it.
So here’s Ken Thompson’s idea. You take the modified drawing and add to it another addition, a new second module, which we’ll call the self-replication detection module. This new module’s function is to detect if the printer is in self-replication mode, that is, a new printer is being printed, and if it is, print it with the two new modules already built into it: the red paint module and the self-replication detection module.
Got it? Whenever such a malicious printer is in self-replication mode, it will always create a new malicious printer. Feed the new drawing to a normal, non-malicious 3D printer and voila, you’ve got a new malicious 3D printer. Lastly, destroy the modified drawing and leave only the original, unmodified drawing that shows how to create a new 3D printer. What we have now are a perfectly fine drawing of a 3D printer and a malicious printer. What would happen if someone were to feed that drawing into the printer? Well, a new printer would be created, but, and this is the crucial part, the new printer would have in it the two extra malicious modules, because that’s what a malicious 3D printer does: if it detects that it’s in self-replication mode, it prints a malicious printer.
So, it’s a new malicious printer created from a non-malicious drawing and the only way to know that the printer is malicious is to open the back cover and look for the extra modules. In our example, that’s analogous to digging into the binary code of an executable, which nobody ever does. So, so smart.

[AD] The best strategy for organizations to avoid becoming a victim of ransomware is to prevent the attack from being successful in the first place. Cybereason remains undefeated in the fight against ransomware because it moved beyond alerting to deliver an operation-centric approach that detects and prevents ransomware attacks at the earliest stages of initial ingress and lateral movement. The Cybereason predictive response capability disrupts ransomware attacks prior to data exfiltration and long before the ransomware payload can be delivered. Visit Cybereason.com to learn more about predictive ransomware protection and how your organization can realize both increased efficiency and efficacy through an operation-centric approach to security operations. Also, be sure to check out Cybereason’s Black Hat booth number 1820 in Las Vegas for a chance to get an awesomely designed Malicious Life t-shirt.

[Nate] Alright, that was a lot. Could you boil down the main point here? What did we just accomplish?

[Andrew] We now have malware sitting in the C compiler, and there’s no evidence of it anywhere. This was why the original paper was written on the Multics project: because Multics was designed for the military. It was designed to be a military-grade time-shared operating system where, if you had classified information in the machine somewhere, it was as close to physically impossible as possible to leak classified information to non-classified recipients. But here, Unix, shoot, you log in and it’s all over. So this was the thing. It was in the context of: oh shoot, if we have a nation-state coming after us, doing something really nasty to us, how are we going to tell?

[Nate] How are we going to tell? What’s the answer? Is this like an unstoppable hack?

[Andrew] So 30 years down the road, there was a paper that came out and said, you know, there is a way to detect this, but it’s tricky. You actually need two different compilers. One compiler is the one that you’re testing, and you have to assume that somebody somewhere in the world has written another C compiler. And you do something very technical. You take the suspect source code, the one that you want to prove is clean, and you compile it with both compilers, assuming that at most one of them is going to be compromised. And the thing is, these compilers compile things slightly differently.
So what they said was, let’s compile the suspect source code with both compilers and get two different results. But the two different results are two different machine interpretations of the same source code. So they might be two different sets of machine instructions, but they do the same thing. Instead of saying first load, then add, then compare, they might say first add, then load, then compare, but they accomplish the same goal. So now you take these two outputs and use them both again to compile the suspect source code. And they should produce exactly the same result. Compare the results. Oh, look, there’s 8,000 more instructions in one than the other. What’s going on here? And now you can dig into it or they’re identical and you’re good. So for that one specific case, yeah, there was a publication 20 years, 30 years later saying, you know, we can actually do this.
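The detection scheme Andrew describes was published by David A. Wheeler under the name diverse double-compiling. The final comparison step can be illustrated with a toy Python sketch, where compiler “binaries” are modeled as functions and the hidden extra instructions show up as extra bytes; all names here are invented:

```python
# The suspect compiler's SOURCE, modeled as the behavior it specifies:
def source_semantics(text):
    return "OUT:" + text

def clean_builder(semantics):
    # A faithful compiler: the binary it produces behaves exactly as the
    # source says, even if its internal encoding of that binary differs.
    return lambda text: semantics(text)

def trojaned_builder(semantics):
    # A Thompson-style compiler: the binary it produces also sneaks
    # hidden extra instructions into everything it compiles.
    return lambda text: semantics(text) + "<nasty>"

# Stage 1: build the suspect source with the compiler we've been using
# (possibly trojaned) and with an independently written second compiler.
stage1_ours  = trojaned_builder(source_semantics)
stage1_other = clean_builder(source_semantics)

# Stage 2: run both stage-1 compilers on the same input and compare the
# outputs bit for bit. If both stage-1 binaries truly implement the same
# source, the outputs must be identical.
out_a = stage1_ours("hello.c")
out_b = stage1_other("hello.c")

print(out_a == out_b)  # False: the hidden extra code reveals itself
```

In the clean case, both stage-2 outputs come out identical even though the two stage-1 binaries differ internally, which is exactly the “two different machine interpretations of the same source code” point above.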

[Nate] Is this a realistic scenario or just a fun thought experiment?

[Andrew] Well, this is in a sense a fragile exploit, a fragile piece of malware, because it assumes a great deal. It has to understand a great deal about how the original source code looked. So let’s say I was unhappy with the login command because it was full of bugs, and I rewrote it. And instead of a function called login, I had a function called login_main_function. I just changed a few names. The compiler would look at that new source code and would not recognize where to insert the nasty anymore. So is it realistic? Maybe not. But in the modern world, we do have these supply chain attacks that everybody’s talking about that are kind of like this.

[Nate] And that’s kind of the point of this episode, right? What are the implications of this paper, of this thought experiment for trust in cybersecurity in general, for supply chain risk for any of the software that any of us use today? Can I fully trust any software that I didn’t code 100% full through by myself if something like this could theoretically happen?

[Andrew] This is a really hard question to answer. The short answer is no, it’s hard to trust anything. This is, you know, it’s all about trusting trust. Do you trust the people who wrote the compiler? Do you trust the people maintaining the compiler? Who do you trust? How much do you trust them? And the thing is, it’s complicated. In the modern world, what hit the news 12 months ago was SolarWinds. SolarWinds is a company that produces a whole family of IT security and IT management products.
Their Orion product is one of their flagships. It’s a product that is used to manage firewalls. So it’s managing security. It’s a really important product. And it turned out, you know, the good guys discovered that a piece of malware had been inserted into the Orion product, into a security update for the Orion product. It took nine months to discover this. OK, this piece of malware was out there being installed. Every time somebody installed the latest security update for Orion, they were installing this malware on their system. They figured somewhere between 17,000 and 18,000 victims may have installed the malware. And so they dug into this and said, well, where did this come from? And they looked at the source code for the Orion product, and there was nothing there. The nasty was not there. So how did it get into the product? You know, it starts people thinking about this C compiler nonsense here. It turned out that it was inserted into the product as part of the build process.

[Nate] And just to clarify, what is a build process?

[Andrew] It’s the process of turning source code into product. Back in the old days with, you know, a C compiler, if you wanted to change source code into product, you compiled it. Any questions? Nowadays, well, you compile it, but there’s libraries. Your product might be big enough that you actually have 17 executables and nine libraries as part of your one product. It’s not surprising that the bad guys were able to silently insert a binary, the malware, into the build process, because these things are so complicated. Nobody understands it. Nobody’s watching these things.

[Nate] Basically, then, SolarWinds is kind of like an abstraction of the kind of thing that Ken Thompson was warning about, right? That specific kind of problem may manifest differently now, but it’s still a problem nonetheless.

[Andrew] You know, the fundamental problem is that a lot of people in the supply chain space are out there busily auditing their suppliers, asking, are you a trustworthy supplier? Yeah, well, I’m sorry, but SolarWinds was a trustworthy supplier. They produce a lot of product. They’ve got a lot of security systems, and somebody really good snuck in there silently and, you know, did this to them. So, you know, to me, what I’m seeing is that people are asking the wrong question. They’re asking, is my supplier trustworthy? They should be asking, should I trust the supplier’s website? Should I trust these very complex artifacts being produced by a trustworthy supplier? How do I know if anything nasty is buried in there? It’s really hard.

[Nate] I’m with you there, but post SolarWinds, right? It’s been over a year now. All these cybersecurity companies out there, highly capable, have been building all kinds of new platforms, new tools to address this very problem because it’s so hot.

[Andrew] None of it’s enough. I don’t think any of that is enough to prevent another breach like the kind of breach that hit SolarWinds. And so, you know, the question becomes, what do we do? What do we do about this risk? So, you know, what people are doing depends on how, in a sense, how at risk they are. You know, at home, on my PC, what are the consequences of me getting compromised? Well, I keep backups. But businesses sort of have a different risk profile. If somebody steals all of the customer names from a business, if somebody steals from a government, very bad things can happen. And so businesses take more steps. You know, they have a whole security program. They don’t just do backups. They are monitoring for intrusions.
You know, a lot of the discussion around the SolarWinds breach said, why did it take nine months to discover this malware? Because the bad guys were actively using the malware in something like 200 victim organizations. And finally, the FireEye organization, they’re sort of gurus for catching this stuff, caught it and said, what’s going on here? Nine months. And there’s a lot of talk about how do we shorten that? How do we make it so that the bad guys don’t, the next time, don’t have as much opportunity to steal information from us?

[Nate] And we could talk about reducing nine months to three months or one month, one week, whatever it is. But Andrew, I’m talking to you, you work in critical industries, the kinds of industries that have no room for error, electrical grids, water treatment, because they support the life that we live. How do these kinds of industries approach this kind of issue, keeping in mind that they have no room for flexibility?

[Andrew] You know, it’s not a question of, are you going to steal the settings from my power plant? It’s a question of, are you going to turn the lights out? Are you going to damage the equipment in the power plant so badly that the lights can’t come on again? And, you know, these very cautious organizations, yes, they do backups. Yes, they do monitoring. Here’s one thing they do. They do not apply security updates instantaneously for two reasons. One is updates are complicated and the cure might be as bad as the disease. The update might crash the plant, you know, just like the bad guys might crash the plant. So they test the updates for a long time. Here’s something they’re doing nowadays. In addition to testing the update, they’re watching other people who’ve installed these updates out in the world to see if any of them leap up in dismay saying, oh, no, I’ve been hacked. There’s malware here. And so they just take a longer time before they trust anything. It’s all about trust. Now, what this means is that they’re more exposed.
Now, they’ve always been more exposed, because it takes time to install these updates. And they’ve always known that the bad guys know how to get through firewalls, that they know how to avoid a lot of security. And so these critical infrastructures put a lot of extra security in place. This is the business I’m in. I help these people. Yes, they have firewalls. They have unidirectional gateways, which are even stronger than firewalls. They have application control, which is like antivirus on steroids. They do a lot of stuff.

[Nate] And is all that enough?

[Andrew] Well, one of the truisms of cybersecurity is that nothing is ever enough to defeat everything. You’re never completely secure. Is it secure enough is an open question. It’s the topic of my next book. How can you tell when you’ve got enough? Because that’s a question of risk management. So, yeah, there’s solutions out there and the less acceptable your consequences, the more serious the consequences, the more you have to do.