Gene Spafford on the Morris Worm & Cyber-security in the 1980's

Eugene Spafford (aka Spaf), a professor of computer science at Purdue University, was the first researcher to publish a detailed analysis of the infamous Morris Worm. Gene talks to Ran about this incident, as well as how security was different in the 1980s.

Hosted By

Ran Levi

Born in Israel in 1975, Ran studied Electrical Engineering at the Technion – Israel Institute of Technology, and worked as an electronics engineer and programmer for several High Tech companies in Israel.
In 2007, he created the popular Israeli podcast, Making History, with over 12 million downloads as of Oct. 2018.
Author of 3 books (all in Hebrew): Perpetuum Mobile: About the history of Perpetual Motion Machines; The Little University of Science: A book about all of Science (well, the important bits, anyway) in bite-sized chunks; Battle of Minds: About the history of computer malware.

Special Guest

Gene (Spaf) Spafford

Professor of computer science at Purdue University

Leader in information security research and policy. One of the most senior people in academia in infosec, with over 30 years of experience.

Founder of the first multidisciplinary infosec graduate degree program (at Purdue), and of the largest academic center in this arena.

Consultant to national agencies and leaders, major corporations, and professional organizations.

Author, lecturer and teacher.

Episode transcript:

Ran Levi: Hello, Mr. Spafford. Hi. Please introduce yourself to the audience.

Eugene Spafford: My name is Eugene Spafford. I’m a Professor of Computer Sciences at Purdue University.

Ran Levi: And tell us a bit about your background in computer security. When did you become involved in that field?

Eugene Spafford: Well, really my first exposure to computer security was like a lot of other people’s: I would be playing with a system to try to find ways around it, and I was involved then as a systems administrator, where I had to find ways to protect it. Both of those occurred when I was an undergraduate.

Ran Levi: What time period are we talking about?

Eugene Spafford: This would have been in the late 1970s. So by my reckoning, I have been involved in the field for almost 40 years now.

Ran Levi: When you were studying in, say, the late 70’s – was the early concept of a virus already familiar to you, or was it unknown back then?

Eugene Spafford: No, there was really no such thing as a computer virus. That term didn’t come into use for another 10 years. I had actually read The Shockwave Rider by John Brunner, possibly shortly after it came out, I think. I don’t recall the exact year. So I was familiar with that concept, and some of the things that I worked with in the computer system involved self-replicating code.

So I knew about that, but nothing on the order of real computer viruses.

Ran Levi: And we are still talking about mainframe computers, probably, in that time.

Eugene Spafford: Mainframes and minicomputers. We were just beginning to see some of the smaller systems. The systems at the end of the 70’s that I was using were HP 3000-class systems and a series of computers made by Prime Computer Corporation.

Ran Levi: So you say that you started working on computers back then. When did you become interested in information security per se?

Eugene Spafford: Well, the work in security was actually in the late 70’s. My first exposure to computing was learning FORTRAN on an old Burroughs machine, and I actually learned how to program with plug boards in the early 70’s from my uncle, who taught data processing at a high school in the city where I grew up.

But I really got interested in security kind of as a hobbyist in the 70’s. At that time, I hadn’t really planned on going on for a PhD. I had sort of indistinct goals, as many people do. But I did really well as an undergraduate, and in fact Prime Computer, the company I was interested in working for at the time, counseled me that I would have a better opportunity if I went on and got a master’s.

So I applied to several schools and got a fellowship to Georgia Tech, where they also had Prime computers and a number of interesting research projects. So I ended up going there and discovered that I really liked some of the research. I was good at it and I really enjoyed the teaching. So I ended up deciding to stay for my PhD and continued to work in security part-time. I supported myself with a research assistantship, doing some systems administration work and doing consulting out in the community.

I did some security work and programming for a couple of local hospitals, a credit union, a few things like that.

Ran Levi: Were you aware of the work done by Fred Cohen back in the early 80s about the first antivirus technologies?

Eugene Spafford: I was aware of viruses from some of the early postings that were being made on some of the newsgroups in ’84, ’85. The Elk Cloner virus, which Rich Skrenta had written, was one that I actually saw an early report of, and there was an early report of the Brain virus. I was an early follower of those reports and was interested in the material there. When Fred’s work came out and was first published in the journal Computers & Security, I read it and thought it was really interesting, because up until that point, almost everything that I had done in security was more applied work, and I had been told it was not of academic interest. I didn’t do my PhD in security.

At that time, security was really dominated by cryptography and formal proofs of correctness. Those were really the two things that were being done, by and large. Applied security, and particularly the whole idea of smaller systems needing security, hadn’t really been looked at much out in the general world, commercial or otherwise.

Ran Levi: Why is that? Why wasn’t security considered a field that should get the respect that we have for it today?

Eugene Spafford: Well, there were a number of reasons. Part of it was that in the 60’s, 70’s and 80’s, the computing landscape was largely dominated by mainframe computing, and it was only large organizations that could afford those.

So they had their own internal mechanisms, their own auditors, their own sets of concerns, and really, up until the late 70’s, one of those concerns was correctness. So you had EDP auditors who would actually manually do many of the same calculations the computer did, to make sure the computer was doing them correctly. They didn’t trust it entirely.

Ran Levi: It’s like a boring job.

Eugene Spafford: There also was very little in the way of networks that people saw in the same way. The government agencies that used computers were primarily concerned with leakage across different classification levels on a system, not on a network. They had access control because they vetted everyone who came in the buildings.

So it was a very different environment. During the 70’s, there had been work done on security architecture and security testing. But formal proofs had shown that testing could never find all security vulnerabilities. So the government at the time had the attitude of “we can allow zero opportunity for misbehavior of our systems,” effectively abandoned all funding and attention in that area, and focused everything on formal methods.

That didn’t result in anything immediate for security. So the environment was one where there were some gaps, as we began to see more portable deskside systems and then the personal computers arrive on the scene.

Ran Levi: And let’s say that during the 1980s, viruses for personal computers – such as the ones you mentioned already, the Brain virus, the Elk Cloner – got more and more sophisticated and more widespread. But up to the Morris Worm incident in 1988, was there an antivirus industry in a sense? Were people thinking about it on the commercial side, if not, as we said, on the academic side? Was there any feeling that there should be security products?

Eugene Spafford: Not for antivirus. The first two viruses that were really out in the wild that people had any concern about appeared in ’85, and those were for MS DOS; they largely showed up through file-sharing kinds of sites. There were two of them. There were five new ones in ’86, five in ’87; ’88 saw 12, and then there were about 25 in ’89.

Ran Levi: So it’s still small numbers.

Eugene Spafford: Very small numbers. Antivirus then was mostly from hobbyists, people in the field who came up with things. Ross Greenberg’s Flushot was one of the earliest ones designed to stop several kinds of viruses, and people seldom hear about it anymore because it didn’t really turn into something in the industry. But there were several small homebrewed solutions like that.

It wasn’t really until we began to see more networking, more downloading, and PCs becoming something in the business environment that commercial antivirus came along.

Ran Levi: Let’s talk about the networking environment into which the Morris Worm appeared. Back then, there was the internet, or what was later to become the internet as we know it today. But it was much more restricted to government organizations and academia than it is today, right?

Eugene Spafford: Effectively. Also some large companies. No commercial use was allowed. There were really two sets of networks out there. There was a defense-related set of networks that were used by government and for government purposes, and then there was a research-oriented aspect of networking that was in use at, as you said, universities and many companies like Digital Equipment Corporation, IBM and so on.

There were actually several networks using different protocols. The ones that I was on in the late 80s were the early version of the NSFNET, which then became the internet. But there were others. NASA had something called the SPAN network that ran on IBM-type or – excuse me, Digital-type computers. IBM had BITNET. Each of those used different protocols; they weren’t really connected together and had different user sets.

Ran Levi: So when we are talking about the Morris Worm, which network are we talking about? That was the research- and academia-oriented network?

Eugene Spafford: Yes, it was the one that was effectively the Internet Protocol Network.

Ran Levi: How many computers were probably – it’s always impossible to really assess. But in a sense, how many computers are we talking about? Hundreds, thousands, millions?

Eugene Spafford: It was in the tens of thousands; it was probably under 100,000. One estimate that was widely quoted at the time said 60,000 machines were connected together. There were long-haul networks – some were temporary dial-up modem links, IP over a modem, some dedicated – and then within the institutions, they generally had Ethernet set up among the individual systems. Usually a three-megabit-per-second Ethernet, if that’s going back a ways.

Ran Levi: OK. So let’s talk about Robert Morris himself. Tell me about the incident. What was he trying to do and how did the worm begin?

Eugene Spafford: Well, I’ve never had a conversation with him directly to hear this. So the reports I’ve gotten secondhand – from others who talked to him, and some of his testimony – are that he was alternately either trying to show what could be done with computers, or demonstrate a vulnerability, or other motives that are less certain. It’s known that the code had characteristics in it to try to hide itself and to try to keep itself established, even if it was eradicated from some machines.

So it was intended to be stealthy and to maintain persistence. So if he was trying to demonstrate something, it’s not clear what he was trying to demonstrate with that, in terms of motive. But he wrote this program over a period of time. He had accumulated several vulnerabilities that were in the software at the time, which he programmed into the code, and then one day, he released it to the network at large.

Ran Levi: Do we know what he was trying to achieve with his worm?

Eugene Spafford: Again, I don’t know. From what people have said, it was possibly to see how far it would spread or how many machines it would get to. The problems with the code that made it particularly observable and problematic were bugs in how it recognized whether or not a machine was infected and whether or not it would re-infect – re-instantiate itself. Infection may not be the best word, but it would recreate itself on some machines, and so the load skyrocketed on some systems, causing some to crash and others to become unresponsive. It generated a lot of network traffic that saturated the networks at the time. So it became very observable within short order.

Ran Levi: So let’s talk a bit about this mechanism of trying not to re-infect a machine it connected to that was already infected. We are talking about the worm attempting to re-infect a machine that was already infected, right?

Eugene Spafford: Yes.

Ran Levi: With the worm. Why was that happening? You can go deep into the technical details of this if you want to.

Eugene Spafford: Well, again, the motivation is not entirely certain. The code was set up so that it had an answer-back – and this is something still used in a lot of malware today – a sign, a tell, when it created a connection to a remote machine that was a potential vector to infect. If the machine was already infected, it would usually not propagate itself. But as I recall, I think there was a one-in-seven chance it would ignore that and continue on.

It’s very possible that this was built in because he may not have been confident that his signal wasn’t naturally occurring on other machines, or maybe he was thinking ahead, that people might try to set that signal to keep it from propagating.

Ran Levi: It’s kind of a way of evading the infection.

Eugene Spafford: Yes.

Ran Levi: Pretending to be infected already.

Eugene Spafford: Yes, which turned out to be one of the defensive measures. But what happened is that, because this executed fairly fast and many machines eventually became involved, vulnerable machines were being contacted repeatedly, and that one in seven effectively turned into every other one, because it just constantly hammered machines repeatedly, creating copies of itself.
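To make the mechanism concrete, here is a minimal sketch in Python of the answer-back logic Spafford describes. The real worm was written in C, and every name below is invented for illustration; only the roughly one-in-seven override comes from the interview.

    import random

    REINFECT_ODDS = 7  # the roughly one-in-seven override Spafford recalls

    def already_infected(host):
        # Ask the target for the worm's answer-back signal (the "tell").
        # In the real worm this was a small exchange over a port; here it
        # is stubbed out as a dictionary lookup for illustration.
        return host.get("tell_present", False)

    def should_install(host):
        if already_infected(host):
            # Usually back off, but about one time in seven install anyway,
            # defeating anyone who fakes the tell to "immunize" a machine.
            return random.randrange(REINFECT_ODDS) == 0
        return True

    # With thousands of copies probing the same well-connected hosts, even
    # a one-in-seven override piles up fresh copies and crushes the load.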

Ran Levi: That was also the cause of the heavy loads on the computers, right?

Eugene Spafford: Yes, yes.

Ran Levi: When was the first time you heard about the worm, and maybe saw its effects personally?

Eugene Spafford: Well, this was released on the evening of November 2nd, and it turned out that November 2nd was my wedding anniversary at the time. So I wasn’t online that evening; we had gone out to dinner and had a nice evening. The next morning, I got up early, had some coffee and was trying to log in to read my email, and one of the machines that I normally used was unresponsive. It was still there. It was still up. But the load had soared into the hundreds, way beyond what was normal.

So I realized something was off. I called the staff on the phone and asked them to reboot the machine, which they did, and very shortly thereafter, it began to slow down and slow down again. I managed to do a process snapshot and saw a lot of processes running that shouldn’t have been there, that were unfamiliar.

So I went into the office, and I think I spent about the next 18 hours straight there doing decompilation, communication and writing up results.

Ran Levi: And what kind of communication was there between researchers and users on the internet trying to figure out what was going wrong? Was there any kind of communication between the people?

Eugene Spafford: There was some communication. There was some email attempted back and forth. But the program preferentially got to the big servers, because those were the best connected, and those were the ones that slowed down the most. So people weren’t able to get email through, or, if the administrators of those machines recognized what was going on, they took them offline.

So some email was badly delayed and didn’t work very well. In the community at the time, some of us knew each other outside of online email or the Usenet newsgroups, which were another mechanism of communication. But we didn’t really have phone numbers or fax numbers or other ways of communicating. So that was actually one of the lessons learned that came out of the incident, and it led in part to the creation of the CERT/CC at the Software Engineering Institute: we needed other means of communication, other trusted sources than just the network itself.

Ran Levi: If I remember correctly, there was a convention going on in Berkeley right at the same time, I think, that was involved in trying to mitigate the effects of the worm, or maybe trying to fight it. Were you aware of that activity back then?

Eugene Spafford: Well, what was going on at the time was a workshop on Berkeley Unix that was being held with systems admins from Berkeley, from MIT and a few other places. They were actually having a meeting about, I believe, the next release of Berkeley Unix at that point.

So when they noticed the problem, they were all there together and a group of them were able to then sort of sequester themselves and start working on this.

During the day, during the afternoon and the next day – because I knew many of those people, although I wasn’t at the meeting – we had communications back and forth about what we had found and methods that could be used to slow down or stop the software.

Ran Levi: What was the reaction from official government contacts or the media? Were they aware of anything going on in the network?

Eugene Spafford: Well, the news got out certainly, and it became very newsworthy. So one of the things I had to cope with at a certain point was calls from the news media, and the university encouraged me to take the calls and fill them in on information.

So that’s one of the things I did. Most of the calls were pretty uninformed. So I ended up putting together a fact sheet with background that I could fax to them. One of the calls, for instance, was asking whether this virus would jump to the user population, which was a fascinating question.

Ran Levi: Ah, you mean kind of turning into a biological virus?

Eugene Spafford: Yes. Yes. They really didn’t have a clue as to how things worked.

Ran Levi: Really?

Eugene Spafford: On the government side, there was a lot of concern, because they didn’t know what this was or who it was from. And one of the things that I raised at one point with some of them, which they hadn’t thought about, was that there were thousands of copies of this, but they didn’t know that all those thousands of copies were the same – and it’s hard to tell, until you take them apart or compare them somehow.

So it might have been used as camouflage for a more targeted attack. They didn’t know whether this was exploratory or whether it was something real. At that time, the military side of the early internet was still connected. I found out later that one of my friends was the duty officer at the Pentagon, in the room where the bridge was between the internet and the MILNET, and he had been given orders that if anything happened, he was supposed to turn a key, open a box and hit a red switch, which would actually cause an explosion in the chassis and physically separate the networks.

Ran Levi: It’s kind of a Hollywood style explosion.

Eugene Spafford: Oh, yeah, very, very. Well, that was military thinking at the time: positive disconnection. So he was working late at night and the call came in to blow the net. Well, he was in a small room, and he knew that if the explosives went off, not only would it damage his hearing, but it would be weeks before they got everything replaced. So he just went over and pulled the plug from the wall.

Ran Levi: Smart guy.

Eugene Spafford: He was never disciplined for that. Yes, he was a computer scientist. So he understood how it worked. But –

Ran Levi: But did it really interrupt military operations anyway? No? It was purely on the academic and commercial side of the internet?

Eugene Spafford: Yes, pretty much. Taking that offline just cut off some of the communication for email. But it wasn’t widely used at the time, and within the military system, they could still use their connectivity. So it was not an issue.

Ran Levi: And then you took apart the worm itself. You analyzed the code. What did you find there? What did you think about the code once you analyzed it?

Eugene Spafford: So the first effort on analysis was to try to find ways to stop it. So there were several of us at Purdue who came up with various ways of stopping the worm by looking at files it created and signals it sent.

Then I started taking it apart further to see how it worked, how it spread, what the algorithms were. I wrote a long report on this that turned out to probably be the most-read technical report ever out of Purdue – and it’s still available if anybody wants to find it.

Ran Levi: I read it. It’s a fascinating analysis.

Eugene Spafford: It was an interesting experience for me to go through and look at the code. There were some things I found as I went through. I found coding errors – issues such as the binaries that were included being intended for Berkeley Unix and only for what were then Sun 3 architectures. We had a Sun 4 in the lab that hadn’t been announced yet, and whoever had written this hadn’t generated binaries for that, or for the Digital version of Unix, as I recall.

Some of the algorithms terminated incorrectly; there were off-by-one errors, so it was sloppy programming. A search on a table of network IDs that had already been tried was a linear search over a large table, and that showed me as well that this was someone who hadn’t really worked with large data sets, or didn’t understand them. It turns out, as I found out later, that Mr. Morris, now Dr. Morris, had done his undergraduate work at Harvard, where they taught introductory courses in LISP, which is a list-processing language. So the idea of using a binary search or a hash kind of search was not something that –

Ran Levi: That he was exposed to, yeah.

Eugene Spafford: Yeah.
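For readers who want to see why that matters: a linear scan costs time proportional to the table size on every lookup, while a hash-based set answers in roughly constant time. A toy Python comparison (illustrative only; the worm itself was written in C):

    # Toy comparison of the two lookup strategies on 50,000 entries.
    addresses = ["10.0.%d.%d" % (i // 256, i % 256) for i in range(50000)]

    def seen_linear(addr, table):
        # O(n): on a miss, this walks the entire list, once per lookup.
        for entry in table:
            if entry == addr:
                return True
        return False

    seen_set = set(addresses)  # O(1) average time per membership test

    target = "10.0.123.45"
    assert seen_linear(target, addresses) == (target in seen_set)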

Ran Levi: If I recall correctly from reading that analysis, your article, it kind of gave me a sense of you – or maybe other people – taking the worm on a more personal level, as if you were a bit angry: how could something so silly and stupid, and obviously created not by a seasoned professional, take down so many computers? Is that emotion really there, or was it just my imagination?

Eugene Spafford: Oh, I think some people were outraged. I think I was to some extent, in part because the network community using those systems up until that point was a relatively closed, like-minded community. It wasn’t open to the public at large. There was no commercial data on it. It wasn’t classified. There were no trade secrets. There was nothing like an Amazon or eBay or anything like that.

It was all research-oriented – mailing lists and exchanging code – and those of us who were maintaining systems freely exchanged information about how to protect systems and what to look for. So this code that had been inserted, that took down systems and took advantage of flaws, was almost taken personally, because it was a small community and somebody had violated the trust. Somebody had violated our systems.

It’s interesting to look back because fairly quickly thereafter, that attitude wore away. It changed.

Ran Levi: That’s exactly what my next question is about. What changed really after the Morris Worm in that respect, in the way people started thinking about security and about the connected environment? Did it change something fundamentally?

Eugene Spafford: Oh, it did change. We saw more instances of network threats of various kinds, various worms that were released. We saw a big uptick in virus writing, as more people took that on.

Ran Levi: But did something change in the mindset of the professionals who were maintaining systems? Did they start thinking about security differently than before the Morris Worm?

Eugene Spafford: Well, I think I saw two things come about that were different. One is that those of us who were dealing with security started forming closed, vetted lists to exchange information, and that turned out to be difficult, because we didn’t really have that much contact with all of the people out on the network, and knowing whom to trust and whom not to was a real problem.

So that was one of the things that changed. A second thing that changed was awareness. Very shortly thereafter, I was contacted by a major commercial organization, ADAPSO, and they asked me to write a book on viruses for their members to explain good hygiene for systems, how viruses worked and the like.

That ended up being the first English language book on computer viruses and was an interesting experience for me to put together.

Ran Levi: It was intended not for professionals in security, I understand, but for a wider population working with computers, right?

Eugene Spafford: Yes, IT professionals and somewhat computer-literate executives. There was tutorial information at different levels in the book along with pointers to resources.

At the time that was written and came out – I think it was in late ’89, early ’90 – there were 50 or 60 viruses out there, not just for MS DOS, but for Atari, Apple and Amiga systems.

So that was a change: people were beginning to become more aware of this. I also found that some of the people in government were now beginning to get interested, and I was actually offered research funding for some of the things that, three or four years before, I had been told had no academic interest and no government interest – things I had been doing as a hobby.

Ran Levi: Now there was money.

Eugene Spafford: Now there was money, yes, and I was able to kick off a couple of my first projects with my students. Two are very well-known, and the people involved have gone on into the industry: one was the COPS scanner with Dan Farmer, and the other was Tripwire with Gene Kim. I also had a third project in intrusion detection with Sandeep Kumar, who got his PhD with me, and it started some of the work that was going on in intrusion detection.

So it was a big change for not only the community but for me as well, because it allowed me to take some of the things that I’ve been thinking about and working on and turn them into research projects where I could actually publish papers, get students to build things and build up a program.

Ran Levi: And I guess you weren’t the only security professional now having access to budgets, et cetera. It probably changed something in the industry as a whole.

Eugene Spafford: I wasn’t the only one. But I was one of the very, very few who were looking at applied security in this way. There were still people doing more formal methods, and doing some other kinds of research of that sort.

Another person who was working in this area, in more applied security – a bit different from what I was doing – was Matt Bishop, who is still on the faculty out at UC Davis and is a Purdue grad. He had graduated before I got there as a faculty member.

But I pulled together people into a laboratory setting – a couple of other faculty who had interest in this area. We had the funding, and we had about a dozen students working with us, and that started in 1992. So we just celebrated the 25th anniversary of that.

Ran Levi: Congratulations.

Eugene Spafford: And then in 1997, I was asked to testify before a Congressional committee, and I did a survey. I looked at other academic institutions. I worked with NSF. I worked with some people in the intelligence community to identify all the schools where anybody was teaching advanced information security, and discovered there were only four places in the country that had more than two faculty.

Ran Levi: In 1997.

Eugene Spafford: In 1997. We were graduating on average three PhDs a year, and of those three, one went into the industry, one returned to his or her home country because they were not a US citizen, and one went into academia. So this was not meeting the need by any –

Ran Levi: It’s amazing to think that it was only 20 years ago.

Eugene Spafford: Yes.

Ran Levi: Not so long ago.

Eugene Spafford: Then in 1998, my laboratory, COAST, transitioned into a university institute, CERIAS. So next year, we’re celebrating the 20th anniversary of that, and in those 20 years, we’ve produced a little over 250 PhDs and a little over 400 master’s degrees – plus a lot of undergrads that we don’t have a good count of. We focused primarily on graduate education for a while, and up until a few years ago we could actually say it was more than the next three schools combined.

So it gave us a really great advantage, and 20 years is a major milestone in the field. It isn’t all traceable back to that one incident, but that certainly was one of several things that helped make a difference in moving forward.

Ran Levi: It’s interesting to think about this specific incident. You said already that there were viruses prior to that; it was not the first worm, not the first virus. Yet here we are, sitting and talking about the Morris Worm, because it somehow got publicized in a certain way. What’s different about the Morris Worm, that specific incident, that it got so much attention?

Eugene Spafford: I think the previous worms that were out were either all within a very small research network or – I’m trying to remember – I think the Christmas Tree worm propagated before. That was on BITNET; that was IBM mail.

So this one was really the first one that spread nationally and hit institutions that were recognizable to many members of the public.

Ran Levi: Not obscure institutions, but large universities.

Eugene Spafford: The local universities.

Ran Levi: Yeah.

Eugene Spafford: Yeah, or the local company. And as a result, that made it newsworthy. It was also something that people had never heard of before. It was sort of science fiction-like. So that drew a lot of attention. It was a mystery: no one knew who had done this or why. That added to the overall aspect of this.

Then there was a lot of follow-on. There was the first criminal prosecution; that drew some interest. Dr. Morris’ father was Chief Scientist at the National Security Agency; that opened up all kinds of speculation about various interesting conspiracy theories and other kinds of things.

Ran Levi: What conspiracy theories for example?

Eugene Spafford: Well, conspiracy theories are always going around, and people will find connections even if there aren’t any there. The one that I heard that was most widely and frequently repeated is that this was actually something that had been developed internally at the agency, and that the younger Morris had gotten a copy of it and set it loose on the wide world, and they were trying to set him up as the fall guy.

Ran Levi: That’s an interesting idea. I can see the attraction to these kinds of theories. Yeah.

[Crosstalk]

Eugene Spafford: So if you don’t trust the accounts you see or you have the wrong idea about what some of these agencies do, then you can easily fall into believing that kind of story.

So there were a number of things there that all kind of came together. It was also a time when personal computers were really becoming more widespread. We had the PC type of computers. The Mac had just been introduced. We had Ataris, Amigas, Commodore PETs – a whole lot of things that had previously not been generally accessible because of their price, but a lot of people were buying them, and a lot of people in corporate environments were beginning to see document-processing systems and workstations.

So there were Sun Microsystems, HP workstations, Xerox Star systems. More people now had some access to those computers and were suddenly worried about what might be happening. So it was a time of change, and it came just at the right time.

Ran Levi: Just the right time. Yeah. Then Robert Morris turned himself in and was prosecuted and punished. What did you think about his punishment? Do you see that as a fitting punishment?

Eugene Spafford: At the time, I was not 100 percent sure that he should have been charged with the felony version, although what he had done was clearly a very bad lapse of judgment and clearly did harm – although the damage figures that I think the government came up with were inflated.

His punishment included no jail time, and I thought that was appropriate. As time has gone on, we’ve seen this explosion in extremely bad behavior by others – wiper kinds of viruses, ransomware.

Ran Levi: Really nasty viruses.

Eugene Spafford: Very, very bad kinds of things that people are doing. And reflect upon the fact that Dr. Morris has really been an outstanding member of society: he went on to complete his doctoral degree, he started a company or two, and he joined the faculty at a university. He has written papers, but not in security. He has never used his notoriety for any kind of fame or money that I know of, or that anybody I’ve spoken to has ever heard of. In fact, he just won’t talk about it.

Ran Levi: Yes, he would never return any of my emails. He never wanted to be interviewed about this.

Eugene Spafford: So I think, in what he has done there, he really was contrite. He really did regret what happened, and he has moved on and gone on to do other things. So in retrospect, the punishment was indeed probably too harsh. But at the time, I think many people didn’t realize just how big a problem this was going to be, and in part, they wanted to send a message. They wanted to set an example, so that others wouldn’t do the same kind of thing.

Ran Levi: And what have you done since that incident 20 years ago? I mean you are still active in information security.

Eugene Spafford: Yes.

Ran Levi: What are you working on today?

Eugene Spafford: That’s almost 30 years ago actually.

Ran Levi: Yeah, actually 30, yeah.

Eugene Spafford: Yeah. Well, I’ve done a lot of things in the meantime. As I said, I did some very early work in intrusion detection. I was the one who did the first two-stage firewall; I invented that. I did some formal-methods firewall work, and I started work in the field of software forensics. A number of other kinds of things went along in there, and I have continued to do research and to build out what we did at the institute, advocating for a multidisciplinary approach to security: it’s not just the computer. It is the people. It’s the economy. It’s issues of psychology and politics and law and a lot of other things.

Ran Levi: I think that’s a mindset that people are really coming to understand nowadays.

Eugene Spafford: Yes.

Ran Levi: The importance of this mindset.

Eugene Spafford: Yeah. So we started that 20 years ago, and it has taken a while for people to really understand that just giving someone a strong technical background is not going to be enough for them to secure an enterprise and deal with some of the problems involved.

So we’ve continued to try to move in that direction. We offer about five degrees in information security through the university as well as interdisciplinary work that people can design.

So I’ve been doing that. One of my former PhD students is actually here at this meeting with me, and what we worked on for her degree was looking at how to assess security for organizations that deal with people at risk and that don’t have resources.

So imagine shelters for human trafficking victims and victims of relationship abuse. Their clients are at risk if they are discovered. That’s a real problem. There is a physical threat to some of those people.

A few of those organizations have IT professionals. They’re required to keep records because of government programs and assistance, and for legal reasons. But they don’t have the kinds of protections they need. So we were looking at that problem set: what could be done to assess them and provide them assistance.

I’m currently looking at whether there’s a way I can get involved in applying some of what we do to help cut down on human trafficking.

The second area that I’ve been working on is something that I was actually doing back in 1990, 1991 and have returned to, which is how to use deception as a security measure. In summary, I guess, part of what we did when we built systems, to get them to work, was to provide lots of information when things didn’t work.

So for wrong connections, protocol errors or bad logins, you would get information back that would actually be advantageous to an attacker.

Ran Levi: Even if you didn’t manage to penetrate a system.

Eugene Spafford: Yes, exactly so. It would come back and say “wrong protocol,” or give an error code that, for instance, would give information about how an attacker might reformulate their attacks.
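A familiar modern application of this idea, sketched below in Python with assumed names, is to collapse every login-failure cause into one uninformative answer, so that a probe cannot learn whether a username even exists:

    import hashlib, hmac

    def verify_login(users, username, password):
        # One generic failure for every cause: never reveal whether the
        # username exists or only the password was wrong.
        record = users.get(username)
        # Hash something even for unknown users so response timing is similar.
        stored = record["pw_hash"] if record else b"\x00" * 32
        # sha256 is for illustration only; a real system would use a slow
        # key-derivation function such as scrypt, bcrypt or argon2.
        supplied = hashlib.sha256(password.encode()).digest()
        if record is not None and hmac.compare_digest(stored, supplied):
            return "OK"
        return "login incorrect"  # not "no such user", not "bad password"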

Another reason is that we have had many people working in computing who have heard of Kerckhoffs’s principle but misapplied it. Kerckhoffs’s principle was about encryption systems; it said that the security of an encryption system should depend only on the secrecy of the key – otherwise, the algorithm should be assumed to be known to any attacker.

People have generalized that to computer systems and say, well, we shouldn’t depend on any kind of secrecy or lying or anything else for security; it should only be in the mechanism. Well, that isn’t completely right, because things that we don’t tell others actually do add to the security.

We shouldn’t depend on them as our primary means, but they do add to security. So I’ve had three PhD students who have done work on deception, using it as a means of misdirecting attackers, slowing attackers, or getting them to reveal information about their motives, all the while enhancing the protection of local systems.

Ran Levi: This is fortunate for me, because I have an episode of Malicious Life lined up about deception in information security. If you have 10 more minutes for me, we will talk about that. I didn’t plan to, but this is a fantastic opportunity. Let me just find the questions that I already have for other interviewees. Well, that’s an amazing opportunity for me.

Eugene Spafford: Sure. I’m currently an adviser for Acalvio, which is a company that’s in this space.

Ran Levi: OK. So let’s talk about deception. First things first. When an attacker has already penetrated a system, what kind of problems – I will rephrase the question. Usually, when we’re talking about penetration attacks, about penetrating a system, we are looking at the problem from the perspective of the organization being attacked. Now I want to switch that way of looking at the problem and look at it from the side of the attacker.

You’ve already entered the system. You’re already inside. What is your challenge right now? What are the problems that you encounter?

Eugene Spafford: A lot of that depends upon the motive and experience of the attacker. If you are a target that has been chosen specifically by the attacker, then it’s entirely possible that they have a specific goal in mind, and they want to ensure that, first of all, they can continue to access the system, not be discovered, and carry out their objective, whatever that is.

Other kinds of attacks are simply random. They don’t even necessarily know what system they’re in and they want to find where they are.

Ran Levi: Kind of learn the environment.

Eugene Spafford: Learn the environment. What other systems are there? Can we hop further? They may or may not be interested in concealing themselves, or have the expertise to conceal themselves. This was more of a problem decades ago, when people would break into supercomputer centers and be typing DOS commands because they had no idea where they were or what kind of system they were on.

Now we have much more sophisticated kinds of attacks. But if the goal of the attacker is to stay stealthy, then one of the things you want to have in place in your security mechanism is not making them aware that they’ve actually been discovered.

So you may not want to immediately kick them off or you may not immediately want to cut a network connection or if you do, you may want it to look like it’s part of a natural behavior of the system going down for maintenance or some other kind of thing.

If they are trying to do a survey, find out what software is running or where they are, you may want to give them a false picture of the system so that they get the wrong information, the wrong things to try and you can get some clue of what they’re looking for or you can keep them connected longer to do trace-backs.

If they are looking for something specific, you may be able to give them a false version of what they’re looking for, so that what they take confounds them. It doesn’t do what they wanted to do. So there are a number of different kinds of deception that you can employ, depending on the motive and nature of the attacker.
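As one illustration of giving a false picture, here is a minimal Python sketch – with invented names and an arbitrary banner – of a decoy service that answers probes with bogus version information and drips it out slowly to waste a scanner’s time:

    import socket, time

    # A plausible but false banner; the version string is pure fiction.
    FAKE_BANNER = b"220 mail.example.com ESMTP Sendmail 5.61 ready\r\n"

    def serve_decoy(port=2525):
        with socket.create_server(("", port)) as srv:
            while True:
                conn, peer = srv.accept()
                print("probe from", peer)      # feed this to monitoring
                with conn:
                    for i in range(len(FAKE_BANNER)):
                        conn.sendall(FAKE_BANNER[i:i + 1])
                        time.sleep(0.5)        # a tarpit: one byte per half second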

Ran Levi: Honeypots are kind of a well-known technique for deception, as a type of defense. Let’s talk about honeypots. What is the basic idea behind a honeypot?

Eugene Spafford: Well, this is interesting. I actually helped a unit at the Air Force build one of the first honeypots back in 1992, before they became more generally known. The goal that we had at that time – and I think it’s still true for a lot of others – was to engage an attacker in wasting resources, and possibly to capture some of their tools, to find out what they were using to get in and to move from system to system, and to observe their behaviors, so that you could do something in defense, possibly leverage that information to defend other systems, or attack back – which, in the case of the Air Force, is what they were interested in doing.

Ran Levi: Attack back the attacker.

Eugene Spafford: Yes.

Ran Levi: How can you use a honeypot to attack somebody back – to bring the ball back to the attacker’s side of the court?

Eugene Spafford: Well, one of the things you can do is leave executables or files that may have something embedded in them that involves signaling or booby traps, for an incautious attacker who goes, “Oh, good. This is what I was looking for. I will run this to see what happens.”

Ran Levi: Now he’s getting a taste of his own medicine.

Eugene Spafford: Yes, yes – because they’re still connected back. There are some interesting legal questions that have since come up about that kind of work, and I don’t know that it’s being used. But at the time, it was a novel idea that was discussed.

Ran Levi: What are the main problems with this kind of approach?

Eugene Spafford: Well, if the attacker is using a third-party system to come into yours and you attack back, you may be damaging an innocent party – possibly one that you could legally be in trouble for having damaged. That’s one of the biggest problems, and it’s true whether it’s from a civil or law-enforcement standpoint as individuals, or in the case of a military service engaging in that kind of activity. There are different laws and different kinds of sensibilities. I’m not an expert in how that’s being interpreted and falling out now, but I’m told that both of those are big concerns for those who are interested in following the law.

The biggest use, though, was to create a whole lot of systems that looked real and would engage an attacker, so that they would expend tools, they would have to use vulnerabilities, they would have to be visible and create a pattern of activity that could be used to track them – all without them actually attacking or damaging your real systems.

Ran Levi: Setting off alarms, if I understand correctly, by trying to attack virtual targets?

Eugene Spafford: Not only setting off alarms, but actually having them go through the motions, allowing you to track them in an environment where, if they damaged files or did things, it didn’t really hurt. You could still look at what they were doing and use that information to protect other systems or to understand what they were after.
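A toy Python illustration of that observation role – assuming nothing about how real honeypot products work – is a fake shell that accepts any command, does nothing real, and records everything for later analysis:

    import datetime, socket

    def fake_shell(port=2323, log_path="honeypot.log"):
        # Present a shell prompt, accept anything, execute nothing,
        # and log every command an intruder types.
        with socket.create_server(("", port)) as srv:
            conn, peer = srv.accept()
            with conn, open(log_path, "a") as log:
                conn.sendall(b"$ ")
                for line in conn.makefile("rb"):
                    stamp = datetime.datetime.now().isoformat()
                    log.write("%s %s %r\n" % (stamp, peer[0], line))
                    log.flush()
                    conn.sendall(b"$ ")  # pretend it worked; reveal nothing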

Ran Levi: Any other deception techniques, besides honeypots, that you can tell us about?

Eugene Spafford: Sure. Well, my most recent graduate student, who has now gone to work for a company in the DC area, looked at what’s involved in putting out fake software patches. Right now, when a company puts out a security-related patch, it’s usually a matter of a day or two before that patch is reverse-engineered and turned into an attack method, particularly if it’s for a potentially high-value system.

So there’s a danger to patches being put out, and what a lot of people don’t realize is that many of the systems out there that are certified or tested or safety-critical can’t apply the patches quickly. There’s no way. For medical devices, for instance, when a patch is released, it may be months before it can really be applied, because just putting a patch in place may break something that the medical device depends on.

Ran Levi: You don’t want to change something radically.

Eugene Spafford: No, not at all, because it could affect somebody’s life – or a chemical plant’s controls, or power systems monitoring, or air traffic control, or a lot of other kinds of places. They can’t immediately apply the patches. So this creates a vulnerability.

What we wanted to explore was: well, what happens if we issue patches for things that aren’t broken? If we obfuscate the patches in ways that make them harder to reverse-engineer, or flood a system with patches, so that there are so many that it further delays an attacker?

The goal was to significantly slow someone down, not necessarily to prevent them from reverse-engineering because that requires a lot more effort.

Ran Levi: So if I understand correctly, you’re saying that maybe when a company creates a patch for one of its products, it could create several patches, just to confuse an attacker into trying to reverse-engineer each one of them.

Eugene Spafford: Yes.

Ran Levi: Trying to delay the attacker right at the point where you’re actually putting out the patch, right? Not at the victim, the subject of the attack.

Eugene Spafford: Right.

Ran Levi: Patching up the system.

Eugene Spafford: Right.

Ran Levi: Is it possible to maybe emulate a patch being applied to a system? To take a patch and simulate it – make it look as if it were already applied to the vulnerable system, without it actually being applied and changing the state of the system, as if it was –

Eugene Spafford: We didn’t explore that. We came up with two strategies that sort of met the requirements. The experiments we did indicated the first was generally not going to work well, because it not only took a lot of effort on the part of the issuer of the patch, the vendor, but it also would make it more difficult to debug systems that had multiple benign things changed on a frequent basis.

So of the two things that we came up with, one was that you increase the frequency and size of the patches and randomly put the real fixes into those patches.

So imagine issuing a large, multi-megabyte patch set every day, into which you insert some of the real patches, but you don’t let the attackers know which set they are in.

That has some benefit. But again, it turns out to be a tremendous amount of overhead for both the end user and the vendor. The mechanism that actually looks the most reasonable right now is, when you have to release a set of patches, to recompile the whole system with some random values, so that the whole system is varied.

Ran Levi: Not only the patch but the whole system.

Eugene Spafford: The whole system – and you reissue the whole system to reinstall, because that way, the differential patch can’t be easily analyzed. Patches were originally developed in an environment where bandwidth was minimal and people had to download patches over a period of time.

Currently, with network speeds the way they are and storage the way it is, there’s really no reason to do incremental patches anymore, when you can send out the whole thing. And if you’re able to vary where things are placed in memory, and the load order, for each release, then that’s going to provide a better mechanism to confuse an attacker than issuing any variation of the patches.

So that was one of the research projects that we came up with recently.
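A crude Python sketch of why rebuilding with a varied layout defeats differential analysis – a hypothetical model, not the actual experiment: shuffle the order of a program’s pieces on each release, and the byte-level diff between old and new stops pointing at the fix.

    import random

    # One hundred stand-in "modules"; module 42 will get the real fix.
    modules = [("func_%d: ...code..." % i).encode() for i in range(100)]

    def build(mods, seed):
        # "Link" the same modules in a randomized order (layout diversity).
        order = list(mods)
        random.Random(seed).shuffle(order)
        return b"\n".join(order)

    old_build = build(modules, seed=1)
    modules[42] = b"func_42: ...patched..."
    same_layout = build(modules, seed=1)  # diff pinpoints the fix
    new_layout = build(modules, seed=2)   # diff touches nearly everything

    # With a fixed layout, diffing old vs. new localizes the changed bytes
    # for an attacker; re-randomizing moves nearly every offset, so a
    # byte-level diff no longer points at the vulnerability.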

Ran Levi: Interesting technique. Fantastic. Very interesting. Anything else that we didn’t cover about deception that is worth telling about – maybe a technique, or something from the side of the attacker?

Eugene Spafford: Well, the next thing that I’m going to look at, if I can get some support for it, is really more psychological: deception works better if you understand the biases of the person you’re trying to deceive. We all carry biases of one kind or another.

Some are social biases and some are just personality-based. As an example of a social bias: people brought up in some countries will be more likely to believe figures of authority saying something than individuals farther down in the chain.

So if I wanted to give a false reason for taking the system down: as a system administrator, if I were to send a message saying, “I need to take the system down to fix something,” that’s going to trigger more suspicion, and be less likely to be believed by people who have this bias, than saying, “It’s the scheduled downtime for enterprise-wide updates,” or something to that effect, because those are the kinds of ideas that are issued by authority – it’s collective identity. But there are hundreds of these biases, and it’s not clear which ones can be exploited in deceptive computing. That’s one of the things I want to look at as a next topic.

I’m also now looking at a project in the whole area of how ethics and choice are embedded in systems, and how this can impact safety and security. We have many people who are programming autonomous systems – autonomous vehicles, the internet of things – and they’re building their perceptions of right and wrong, and of what should be done in exceptional circumstances, into the software. That may not match how the user wants it done. So –

Ran Levi: What, for example, would be this kind of bias?

Eugene Spafford: Well, there’s a classic problem in ethics called the “trolley problem.” Imagine an autonomous vehicle speeding along a mountain road; you turn a corner, and there’s another car stalled in the road, with a family and some kids. A choice needs to be made at that point, because the car can’t stop in time. Do you run into the stalled car – where the airbags and the like protect you, but you maybe kill the people in that car – or do you steer your car over the cliff, saving the lives of the family, but very possibly killing whoever is in your own car?

This is a variation of that problem; there was a recent article in Wired that talked about it. But that choice is likely to be programmed in by a programmer, or possibly even generated by an AI system, and it won’t necessarily reflect the choice the driver would make.

I think that’s something that we need to examine. There are many such choices. Another one: if you have a smart home with a lot of appliances, you may choose, as a matter of social conscience, to have your thermostat set at 78 degrees in the summer, to have less air conditioning and less carbon load. But somebody else may not care.

If there is no legal restriction, well, you should get to choose what to do. But if the home system sets that for you, rather than giving you the choice, then that’s a problem.

I think we have a lot of people building systems now who aren’t thinking about those issues. So I’m doing a little bit of investigation there. I’ve got a student working with me on that.

Ran Levi: That’s an interesting question, specifically the autonomous vehicles one. It’s probably interesting to the general public, because I guess that governments will have to step into that question and impose some sort of regulation on those kinds of decisions, not leave them in the hands of engineers who might like to do one thing or another.

Eugene Spafford: It’s very possible that if we take humans out of the choice equations here, we may not like the choices that are made. Some people will use that as an excuse not to make hard choices: “the computer made me do it.” Well, that’s never an excuse that we should allow. We don’t really want to take people out of situations where choices are made that impact others.

This is a safety issue. It’s a security issue. And it is part of what I was saying: this arena of dependable computing, of trustable computing, involves more than simply understanding the computers. It involves understanding other things – about psychology and law and, in this case, philosophy.

Ran Levi: That makes the problem very interesting.

Eugene Spafford: Yes.

Ran Levi: The crossover between technology and psychology. Eugene Spafford, thank you very much for this interview. It has been a pleasure.

Eugene Spafford: You’re welcome.

Ran Levi: Thank you.