Does Code === Free Speech?

When the FBI asked Apple, following the 2015 mass shooting in San Bernardino, to write code that would give the FBI access to a suspect's iPhone, Apple refused, arguing that forcing it to write code violates the First Amendment. It wasn't the first time this highly controversial argument was invoked in judicial proceedings…

Hosted By

Ran Levi

Exec. Editor at PI Media

Born in Israel in 1975, Ran studied Electrical Engineering at the Technion Institute of Technology, and worked as an electronics engineer and programmer for several High Tech companies in Israel.
In 2007, he created the popular Israeli podcast, Making History, with over 14 million downloads as of Oct. 2019.
Author of 3 books (all in Hebrew): Perpetuum Mobile: About the history of Perpetual Motion Machines; The Little University of Science: A book about all of Science (well, the important bits, anyway) in bite-sized chunks; Battle of Minds: About the history of computer malware.

Does Code === Free Speech?

December 2nd, 2015 was not a good day in California. 

Syed Rizwan Farook and Tashfeen Malik, a married couple who lived in the town of Redlands, went on a killing spree: they targeted a training event and Christmas party of the San Bernardino County Department of Public Health, attended by about 80 employees in a rented ballroom. The mass shooting cost the lives of 14 innocent people, and another 22 were seriously injured. Both perpetrators were killed as well.

Shortly after the attack, police used robots to search the terrorists’ home. It turned out that the couple had built themselves a bona fide weapons depot: the police uncovered large amounts of ammunition, and various tools for building improvised explosive devices.

On December 3rd, a day after the attack, the FBI took charge of the case, which was officially designated as an anti-terrorism investigation. FBI agents conducted 400 interviews and collected some 320 testimonies. In particular, the investigation focused on a small time window shortly after the shooting: 18 minutes, between 12:59pm and 1:17pm. The investigators suspected that during that time, the two terrorists drove around San Bernardino, trying to remotely activate the explosives they had left behind – or perhaps were in contact with a co-conspirator.

Searching for clues, the investigators discovered one crucial piece of evidence: a mobile phone – an iPhone 5C belonging to Farook – found at their home. The phone had the potential to shed some light on the terrorists’ actions during these missing 18 minutes – but there was a problem. The device was locked with a four-digit passcode, and as with all iPhones, after ten failed attempts to get in, the device was programmed to delete all the information it carried. A nightmare for the FBI.
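To see why that policy is such a nightmare for investigators, here is a toy sketch of the auto-wipe behavior described above – an assumption-laden illustration, not Apple's actual implementation: a four-digit passcode, with the device erasing its data after ten consecutive failures.

```python
# Toy model (NOT Apple's real code) of a wipe-after-ten-failures policy.

MAX_ATTEMPTS = 10

class ToyDevice:
    def __init__(self, passcode: str):
        self._passcode = passcode          # four-digit passcode, e.g. "7295"
        self._failed_attempts = 0
        self.data = {"notes": "secret"}    # stand-in for the user's data
        self.wiped = False

    def unlock(self, guess: str) -> bool:
        if self.wiped:
            return False
        if guess == self._passcode:
            self._failed_attempts = 0      # a success resets the counter
            return True
        self._failed_attempts += 1
        if self._failed_attempts >= MAX_ATTEMPTS:
            self.data = {}                 # the nightmare: everything is erased
            self.wiped = True
        return False

# Trying all 10,000 four-digit codes is trivial in principle,
# but the wipe policy kills the attack after ten wrong guesses:
device = ToyDevice("7295")
for code in range(10000):
    if device.unlock(f"{code:04d}"):
        break
print(device.wiped)  # -> True: the phone wiped itself long before "7295" came up
```

The point of the sketch: without the counter, a four-digit passcode falls to brute force in at most 10,000 guesses; with it, an attacker gets ten tries and then nothing.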

After some unsuccessful attempts to open the locked device, the FBI officially gave up and announced that it had failed to break into Farook’s iPhone. They tried getting help from the NSA, but to no avail; senior US security experts also failed to assist the investigation.

A Last Resort

As a last resort, the FBI turned to the manufacturer: Apple Inc. They asked Apple to create – just for them – a new version of iOS, the iPhone operating system, that could be loaded and run in the device’s random access memory, disabling certain security features – probably those relating to the lock screen’s passcode. Faced with such a request from the FBI, some, perhaps even most, companies would probably comply. But Apple, which is notoriously protective of its customers’ privacy – refused.

In response, the FBI applied to a United States magistrate judge, Sheri Pym, asking the court to issue an order requiring Apple to create and provide the requested software. Apple kept insisting it would not do it.

“If Apple can be forced to write code in this case to bypass security features and create new accessibility,” wrote Apple’s attorney, “what is to stop the government from demanding that Apple write code to turn on the microphone in aid of government surveillance, activate the video camera, surreptitiously record conversations, or turn on location services to track the phone’s user? Nothing.”

But Apple had an even stronger, and perhaps more unusual, claim: it invoked the First Amendment. The First Amendment, for you non-American listeners, is the part of the US Constitution that guarantees freedom of speech. Naturally, it is considered by most people to be a sacred right.

But what does the First Amendment have to do with cracking an iPhone, you ask? Well, let’s say, just for the sake of argument, that software is speech. Now, you can’t force a person to say things they don’t want to say, can you? They are protected by the First Amendment. In the same way, if code is speech, then you can’t force Apple to write software that it doesn’t want to write. That was the argument presented by Apple’s lawyers: code IS speech, they claimed – and therefore, writing it is protected by the First Amendment. FBI or not, no one will make them write even a single line of code.

“The government asks this Court to order Apple to write software that will neutralize safety features that Apple has built into the iPhone in response to consumer privacy concerns”, wrote Apple’s attorney. “This amounts to compelled speech and viewpoint discrimination in violation of the First Amendment.”

Very creative. However, creativity is not enough when you stand in court. Apple’s lawyers had a very good reason to take the First Amendment line of defense in a technological case: it wasn’t the first time this argument had been invoked in judicial proceedings. That earlier time, it wasn’t a big-tech corporation, but a single programmer, who tried to protect his own code from the government.

Daniel J. Bernstein

Daniel J. Bernstein was a prodigy. Born in New York, he graduated high school at the age of 15, and went on to win awards in several mathematical competitions. He received a Bachelor’s degree in mathematics from New York University, and later on a PhD from the University of California, Berkeley.

As a mathematician, Bernstein was almost obsessed with cryptography, and in 1990, he developed encryption software he called “Snuffle”. Snuffle allowed the sending and receiving of texts between two people who had previously exchanged private cryptographic keys. It worked by encrypting and sending each character as it was typed, allowing for real-time conversations – the kind we’re used to conducting every day through apps such as WhatsApp and Telegram.
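The idea of encrypting each character as it is typed, using a previously shared key, can be sketched in a few lines. To be clear, this is a toy illustration, not Bernstein's actual construction – Snuffle built its keystream out of a one-way hash function in a specific way – but it shows the general shape: each keystroke becomes one ciphertext byte that can be sent immediately.

```python
# Toy per-character encryption with a pre-shared key -- an illustration of
# the Snuffle idea, NOT Bernstein's actual algorithm.
import hashlib

def keystream_byte(key: bytes, counter: int) -> int:
    """Derive one keystream byte from the shared key and a position counter."""
    digest = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
    return digest[0]

def encrypt_char(key: bytes, counter: int, ch: str) -> int:
    """Encrypt a single typed character; the resulting byte is sent at once."""
    return ord(ch) ^ keystream_byte(key, counter)

def decrypt_char(key: bytes, counter: int, byte: int) -> str:
    """XOR is its own inverse, so decryption mirrors encryption exactly."""
    return chr(byte ^ keystream_byte(key, counter))

# Two parties who previously exchanged `key` can converse in real time,
# one character per message:
key = b"previously-exchanged-secret"
sent = [encrypt_char(key, i, c) for i, c in enumerate("hello")]
received = "".join(decrypt_char(key, i, b) for i, b in enumerate(sent))
print(received)  # -> hello
```

Because each character is encrypted and transmitted independently, neither side has to wait for a complete message – which is exactly what makes the scheme suitable for interactive, chat-style conversation.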

Bernstein was obviously proud of his software. It solved a problem that hadn’t been addressed in quite the same way before, and it seemed to have many potential uses. He wanted to publish the process he developed, to get some academic feedback, maybe even to gain some fame. He initially thought of writing it up and posting it to sci.crypt, an international Usenet discussion group devoted to cryptography. But it turned out that that wasn’t so simple.

In order to publish this paper, he discovered, he needed special approval from the US State Department – even though the software he wrote, and the algorithm behind it, were developed solely by him, without any government support, or funding.

Why the need for approval? It was due to old Cold War regulations regarding the export of cryptographic systems. Ironically, the mathematical algorithm invented by Bernstein, the basis for Snuffle, could be exported without any legal problem. But the actual code – the program that uses the algorithm to demonstrate how it actually works – fell under the category of materials that were prohibited by law from being exported from the United States. Without approval, Bernstein could not share Snuffle with the international cryptography community he dreamt of reaching.

Why was the law so tough on Daniel’s homemade software? A little bit of history. During World War II, cryptography was used extensively by all sides. Cracking codes played a critical role in bringing the war to an end and defeating the Axis Powers: the infamous Enigma cipher used by Nazi Germany was broken in England using primitive computers and the genius of Alan Turing, along with many other scientists. Ever since, cryptography has been considered dangerous technology.

The US administration, like those of other NATO countries, had guessed correctly: cryptography would play a major role in future international conflicts, and would become more prevalent as technology advanced. That’s why, in the post-war era, two types of technology were protected by export control regulations – meaning it was forbidden, without special permission, to take them out of the United States. First category: technology related solely to weapons of war (“munitions”). Second category: dual-use technology, that is, technology that can be used for both peaceful and military purposes.

A cryptographic system like Snuffle falls, very naturally, into the second category. A person or company that wants to publish it internationally – or, in other words, to export this type of system – must first obtain a license from the State Department. The license states that anyone exposed to the publication must be monitored and reported to the government, in order to ensure that no potentially hostile person gets to see the sensitive information. Violation of these export regulations can result in a fine of one million dollars, and ten years in prison.

Daniel Bernstein had good reasons to be worried. To avoid getting into serious trouble, he decided to be proactive. He wrote a letter to the State Department requesting permission to export Snuffle 5.0 (the latest version, at the time) as well as permission to publish the documentation for the application. Here is a part of the request, in Daniel Bernstein’s own words:

“In effect what I want to export is a description of a way to use existing technology in a more effective manner. I do not foresee military or commercial use of Snuffle by anyone who does not already have access to the cryptographic technology contained in it. I do foresee practical use of Snuffle by those who do have such access, in particular for the purpose of interactively exchanging encrypted text.”

In 1992, less than two months later, he got a reply from the Director of the Office of Defense Trade Controls:

“This commodity is a stand-alone cryptographic algorithm which is not incorporated into a finished software product. As such, it is designated as a defense article under U.S. Munitions List […]. Licenses issued by this office are required prior to export.”

In other words, the director said “no”.

Bernstein Sues

But the Director knew nothing about his young opponent. Daniel Bernstein wasn’t the kind of person to give up after one refusal. He went on to submit FIVE more requests, asking to publish the source code for his system, along with explanations written in English. The State Department denied all five.

The next step was inevitable. In February 1995, Bernstein sued the government in federal court: the District Court for the Northern District of California. In practice, he challenged the constitutionality of ITAR: the International Traffic in Arms Regulations. In plain English: he wanted to show that the Cold War-era rules governing the publication of computer code were unconstitutional.

Bernstein was represented in court by some of the best lawyers in the field – provided for him by the Electronic Frontier Foundation: a non-profit organization dedicated to protecting civil liberties in the digital world. Since its inception in 1990, the EFF has enlisted the help of leading technologists, activists and lawyers to protect freedom of expression on the Internet, and to support technologies that promote freedom.

It’s likely that the EFF decided to step in and help Bernstein because by that time, the early 1990s, software was already the focus of some very important debates with major implications for the tech industry in general – such as the debate over the patentability of code: that is, whether software is protected by the same patent laws that protect other results of human creative endeavors. That question had strong ties to the question of whether software is protected as free speech – as evidenced by a rather famous paper – well, famous at least among law/computer nerds. It was written in 1991 by an American programmer named Phil Salin, and was titled “Freedom of Speech in Software”. There’s no doubt this paper had a big influence on Daniel Bernstein when he sued the government a few years later.

Salin’s paper is mainly focused on why computer programs shouldn’t be subjected to patent law. And it has some fascinating points in it. Quote:

“Although a program has to be run to be used, before it can be run it has to be written. There are now millions of individuals in the U.S.A. who know how to write a computer program. It is an absurdity to expect those millions of individuals to perform patent searches or any other kind of search prior to the act of writing a program to solve a specific problem. If others wish to purchase a program, as with the sale of written prose and written music, absolutely no patent restrictions should be placed on the ability of authors to sell or publish their own writings.”

Funny how Salin thought that searching for anything before writing a program was absurd – but then again, Google and Stack Overflow were still years in the future… And while Salin’s paper is mainly focused on why computer programs shouldn’t be subject to patent laws, it follows from his argument that any kind of regulation of programming is basically unacceptable. If it’s legal to publish all sorts of problematic text, because text is protected by the First Amendment, then the same goes for writing code. All kinds of code.

But does this perspective on writing code really make sense? Not everyone thinks so.

Is Code = Speech a Fallacy?

Neil Richards, a law professor at Washington University in St. Louis, has a different point of view. In an article published by the MIT Technology Review in 2016, right as the FBI vs. Apple case was taking place, Richards attacked Apple’s line of defense of equating code with speech, following Daniel Bernstein and Phil Salin. In the article he agrees that the idea of code being equal to speech has a lot of appeal, but claims that in the actual world it could be dangerous.

“Code = Speech is a fallacy because it would needlessly treat writing the code for a malicious virus as equivalent to writing an editorial in the New York Times. Similarly, if companies use algorithms to discriminate on the basis of race or sex, wrapping those algorithms with the same constitutional protection we give to political novels, would needlessly complicate civil rights law in the digital age. It’s easy to argue that Code = Speech, but accepting that argument would create a mess, and an avoidable one at that.”

It’s a valid point. If all computer code were protected by the First Amendment, then we could do nothing about someone who uses code to write malware. If software development were protected by the First Amendment, then selling ransomware would be absolutely legitimate. The thing is, there’s a difference – a subtle difference – between words used in everyday life, and words in a computer program. Words in computer programs sometimes look like real words, at least if you’re coding in Python, but they do things that normal words, in regular speech, don’t do.

For instance, words we say can hurt someone’s feelings, especially if they’re written in a newspaper, or put up on a billboard. They can try to convince people to act violently. Words are a tricky thing. But words in a computer-code context don’t just suggest ideas. They actually DO things. They can steal money from a bank account, copy private information for identity theft, or stop critical machinery from working unless someone pays in cryptocurrency. Software can obviously do all that. And we definitely don’t want these kinds of words, this kind of writing, to be protected under any amendment. Sometimes we just want such activities to be… what’s the word? Illegal.

“Computer language is just that, language.”

Yes, I know I’m over-simplifying a very complex and sensitive topic. There are plenty of cases where the distinction between malicious and non-malicious code is not easy to make: for example, remote-access software can be perfectly useful in all kinds of business settings – but very malicious in others. But at least in Bernstein’s case, Snuffle was obviously non-malicious, and so, although it took the court four years, in the end it ruled that software source code was, in fact, speech – and therefore protected by the First Amendment. The government’s regulations preventing Snuffle’s publication outside of the USA were unconstitutional. No one could tell Daniel Bernstein not to publish his code. Not even the Director of the Office of Defense Trade Controls.

“The distinguishing feature of source code”, stated the court, “is that it is meant to be read and understood by humans, and that it can be used to express an idea or a method. By utilizing source code, a cryptographer can express algorithmic ideas with precision and methodological rigor that is otherwise difficult to achieve”.

And pay attention to this: “This court can find no meaningful difference between computer language, particularly high-level languages as defined above, and German or French….Like music and mathematical equations, computer language is just that, language, and it communicates information either to a computer or to those who can read it. In light of these considerations, we conclude that encryption software, in its source code form and as employed by those in the field of cryptography, must be viewed as expressive for First Amendment purposes, and thus is entitled to the protections of the prior restraint doctrine.”

It was a dramatic decision that led to some significant regulatory changes. The Cold War days were finally over, and from the late 1990s on, the rules concerning the export of cryptography were gradually relaxed. In 1996, regulation of commercial encryption was moved from the military’s munitions list to the hands of the Department of Commerce, even though some restrictions on the export of cryptographic software still apply – such as when it comes to terrorist organizations and countries that support them.

An Unfinished Story

Now back to the FBI and Apple. As a reminder – Apple refused to write the software that would crack open Farook’s iPhone. Their lawyers used, among other arguments, the Bernstein precedent: if code is speech, then writing code is protected by the First Amendment. Meaning, the FBI can’t force Apple to write software that Apple doesn’t want to write.

Could such a claim actually have won the case for Apple, with an investigation into a terrorist attack at stake?… We will never know. The next crucial hearing was set for March 22, 2016, but a day before the hearing, the government announced that a third party – an Israeli company named Cellebrite, according to various media reports – had been found that could help the FBI crack the iPhone.

On March 28, less than a week later, it was declared that Farook’s phone had been hacked. Apple’s services were no longer needed. Two years later, the Los Angeles Times reported that

“the FBI eventually found that Farook’s phone had information only about work and revealed nothing about the plot.”

You could say that the FBI’s decision to pull out of the Apple case was a sort of anti-climax, but in fact – it isn’t. It just means that the story isn’t over yet. In all probability, at some point in the not-so-far future, someone will do something illegal, and the government will be in desperate need of cracking open his or her electronic device. When that happens, the court will find itself at the exact same spot where the FBI and Apple left off – and only then will we discover whether writing code REALLY falls under free speech. I bet there are quite a few people in Cupertino not looking forward to that day.