Deeplinks
Smartphone users in California take notice: a new CA State Assembly bill would ban default encryption features on all smartphones. Assembly Bill 1681, introduced in January by Assemblymember Jim Cooper, would require any smartphone sold in California “to be capable of being decrypted and unlocked by its manufacturer or its operating system provider.” This is perhaps even more drastic than the legal precedent at stake in Apple’s ongoing showdown with the Justice Department, in which the government is trying to force a private company to write code undermining key security features in specific cases.
Both Apple and Google currently encrypt smartphones running their iOS and Android operating systems by default. A.B. 1681 would undo this default, penalizing manufacturers and providers of operating systems $2,500 per device that cannot be decrypted at the time of sale.
Similar proposals have been made by Manhattan district attorney Cyrus Vance Jr., who published a white paper [pdf] in November 2015 arguing that law enforcement needs to access the contents of smartphones to solve a range of crimes. A nearly identical bill is also pending in the New York State Assembly.
EFF opposes A.B. 1681 and all other state proposals to regulate smartphone encryption because they are terrible policy. If passed, A.B. 1681 would leave law-abiding Californians at risk for identity theft, data breach, stalking, and other invasions of privacy, with little benefit to law enforcement. It would be both ineffective and impossible to enforce. And, if that weren’t enough, it suffers from serious constitutional infirmities.
Meanwhile, in the U.S. Congress, Representative Ted Lieu has introduced H.R. 4528, the ENCRYPT Act, which would definitively preempt state bills like A.B. 1681. EFF agrees this is the right approach to state legislation in this area, although we’d like H.R. 4528 to go further and also prevent Congress and the rest of the federal government from undermining encryption.
The Benefits of Smartphone Encryption
Smartphones carry an astounding amount of personal information; it’s what makes them so useful. As the Supreme Court recognized in 2014, they hold nothing less than “the sum of an individual’s private life.” This makes smartphones ripe for theft, hacking, and other unwanted access to personal data. Anyone following the Office of Personnel Management hack knows breaches are a problem. Theft is also a serious concern: according to Consumer Reports, more than 3 million smartphones were stolen in the United States in 2013, and a 2014 survey found that fully 10 percent of individuals whose phones were stolen then became victims of further identity or data theft, while 12 percent had fraudulent charges on banking or credit card accounts.
Additionally, some smartphone users’ physical safety is at risk when others get access to their personal data. Domestic violence victims and political activists both domestically and in authoritarian regimes abroad all depend on data security to protect themselves.
The best way to secure phones against these dangers is to encrypt all of their contents, so-called full disk encryption (FDE), using a key held solely by the user. Apple moved to FDE by default in 2014, followed by Google. On iPhones, this key is generated by combining a user-selected passcode with a unique identifier associated with the phone and unknown to Apple. Unless the user unlocks the phone, no one, not hackers, thieves, or abusive exes, not even Apple or the police, can access its contents. (That’s putting aside the kind of serious reengineering the FBI wants in the San Bernardino case, of course.) Experts in cryptography and computer science are unanimous that this is the only feasible way to keep data on phones secure. That’s because “key escrow” and other “backdoor” encryption schemes, in which third parties like Apple hold a copy of the key, introduce profound vulnerabilities into the system. In other words, if you create a way for someone else to access the data, malicious hackers or others can discover and abuse that access as well. So Apple’s inability to unlock a phone, even pursuant to a warrant, is a necessary side effect of FDE’s security.
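To make that concrete, here is a minimal Python sketch of the general technique of entangling a passcode with a per-device secret. To be clear, this is not Apple’s actual implementation (the real key hierarchy runs inside dedicated hardware, and the algorithm and iteration count below are placeholder assumptions); it only illustrates why passcode guessing has to happen on the device itself.

```python
import hashlib
import os

def derive_fde_key(passcode: str, device_uid: bytes) -> bytes:
    """Derive a disk-encryption key from a user passcode and a
    per-device secret, so that neither alone can unlock the data."""
    # The device UID acts as a salt that never leaves the hardware,
    # so brute-force guessing can't be moved to a fast offline rig.
    return hashlib.pbkdf2_hmac(
        "sha256",
        passcode.encode("utf-8"),
        device_uid,
        iterations=100_000,  # deliberately slow, to rate-limit guessing
        dklen=32,            # 256-bit key
    )

# Hypothetical usage: on a real phone the UID is fused into the chip and
# unknown to the manufacturer; we simulate it with random bytes here.
uid = os.urandom(32)
key = derive_fde_key("123456", uid)
print(key.hex())
```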
What’s Wrong with A.B. 1681 and Why States Should Stay Out of Encryption
The California bill does not specify how a phone’s data would be decrypted, whether through backdoors or by simply turning FDE off by default.
In either case, the bill could not possibly achieve its goals.
First, and most obvious, it would stop at the California border. Apple could still sell encrypted phones in the rest of the country. California buyers could simply cross into the next state.
Second, even if Apple and Google removed FDE, numerous third-party applications provide the same functionality. These applications are also beyond California’s reach. In fact, over two-thirds of encryption software projects are created at least partially outside the U.S., and many are open-source, meaning they are not controlled by any single entity. Sophisticated criminals would certainly avail themselves of these options.
And finally, if compliance with A.B. 1681 required only turning off default FDE, evading the police would be as simple as flipping the switch after buying a new phone. The boost to law enforcement would be minimal at best.
The costs of this woefully ineffective law, however, would be unacceptably high. Depriving the rest of us of easy-to-use FDE puts the sensitive information we all carry in our pockets at serious risk.
Further, there is very good reason to think A.B. 1681 and similar state bills would be unconstitutional. The Supreme Court has explained that states cannot enact laws that burden interstate commerce when “the burden imposed on such commerce is clearly excessive in relation to the putative local benefits.” In light of the bill’s lopsided cost-benefit tradeoff, it seems unlikely to survive this analysis. Also, to the extent developers would be prohibited from offering FDE as part of their code, the law raises First Amendment concerns. Under the Bernstein case and its progeny, computer code is protected speech, and a government ban on this speech based on its content is subject to First Amendment scrutiny. Once again, it’s hard to see such a law surviving this test.
In sum, while it’s imperative that law enforcement investigates serious crimes, A.B. 1681 is hopelessly flawed. Take action and tell lawmakers not to support this misguided bill.
A version of this post first appeared as an editorial in the Daily Journal newspaper.
EFF recently received records in response to our Freedom of Information Act lawsuit against the Department of Justice for information on how the US Marshals—and perhaps other agencies—have been flying small, fixed-wing Cessna planes equipped with “dirtboxes”: IMSI catchers that imitate cell towers and are able to capture the locational data of tens of thousands of cell phones during a single flight. The records we received confirm the agencies were using these invasive surveillance tools with little oversight or legal guidance.
The Wall Street Journal revealed that the Marshals have been flying planes with this Stingray-like technology, built by Digital Receiver Technology (DRT) and hence nicknamed the “dirtbox,” since 2007. The planes reportedly were based out of five metropolitan airports and shared by multiple agencies within the DOJ, even as sources within the agency questioned the legality of the program. A follow-up article reported that the CIA provided cell phone tracking equipment to the Marshals and then spent years helping them develop and test this capability for use in a law enforcement capacity within the United States.
After months of stalling, the government finally produced records from agencies including the Marshals, the FBI, and the DOJ’s Criminal Division, which oversees federal criminal prosecutions. The documents we’ve received—many with extensive redactions—are all available here.
The FBI produced the majority of the records—hundreds of pages of heavily redacted material. The documents are mostly internal emails and presentations going as far back as 2009, including discussions between FBI lawyers and the Operational Technology Division (OTD), which develops and oversees the FBI’s surveillance techniques. The documents paint a picture that is similar to the one that has emerged around stingrays and IMSI catchers more generally: the FBI began testing and then using dirtboxes on planes without any overarching policy or legal guidance on their place in investigations.
This is best seen in a series of emails from June 2014, showing that FBI lawyers really had no idea what agents at the Bureau were doing with this surveillance equipment. FBI lawyers prepared a briefing for senators who demanded [pdf] more information following an Associated Press report on the FBI’s use of aircraft fitted with a wide range of surveillance equipment, including IMSI catchers. Lacking any comprehensive information, the OTD reached out to other branches of the Bureau to document “CSS (cell site simulator) aerial missions.” Ultimately, an OTD Special Supervisory Agent reported only five such missions, though that count came with some major caveats [pdf].
Notably, these missions were carried out using equipment owned by the FBI [pdf], not the Marshals.
Although the FBI’s “first successful airborne geolocation mission involving cellular technology” apparently occurred sometime in 2009 [pdf], even as late as April 2014 lawyers from the FBI’s Office of General Counsel were discussing the need to develop a “coordinated policy” and “determine any legal concerns” [pdf].
As we’ve written about extensively, the government long took the position that using IMSI catchers did not require a warrant, relying instead on a lesser legal standard. Last fall, the DOJ voluntarily changed its position and began requiring a warrant for the use of cell site simulators, although the policy leaves some major loopholes and could be undone by the next administration. Thus, it’s not surprising that the FBI seems to have put the cart before the horse, using its nifty flying Stingrays without deliberating too deeply about the wisdom of doing so.
Given the level of detail in the original news reports about the Marshals’ use of dirtboxes, we’d expected to receive lots of FOIA documents from the agency. Not so. Instead, we got a single policy document from the Marshals’ Technical Operations Group (TOG) that discusses the TOG’s organization and procedures. There are scattered references to aerial surveillance and the use of cell site simulators, but nothing that documents the Marshals’ years of dirtbox use. While it certainly wouldn’t be the first time law enforcement agencies operated surveillance equipment without much oversight or documentation, it’s very hard to believe the operations described in the press would not have generated a bigger paper trail: contracts, purchase orders, legal memoranda, and so on. We asked for these documents in our FOIA request, and we’ll be arguing that the Marshals Service didn’t follow the law when it responded with this single document.
We’re sure there’s more to be gleaned from these documents, so we encourage you to look through them and see what you find. And, of course, we’ll be hoping to force more transparency when we challenge the government’s redactions in court later this spring.
Yesterday, the Let's Encrypt CA issued its millionth certificate. This is a perfect occasion for us to talk about some plans for the CA and client software through the rest of 2016.
In April of this year, the Let's Encrypt clients will be renamed to be clearly distinct from the CA service offered by ISRG. The Let's Encrypt Python client has primarily been an EFF project, so we'll start hosting it ourselves to make that clear. [1]
The Python client is currently the most popular (though by no means the only) way to get certs from Let's Encrypt, and we expect it will continue to be.
All ACME clients are designed to obtain certificates from Let's Encrypt (or from other CAs that choose to support ACME). Our client goes a little further, with options to install certificates in a wide range of web server software and to help admins get their systems' security settings right. In the short run, the 0.5.0 and 0.6.0 releases will prioritize elegant hooks that let OS packages offer fully automated renewal, along with shipping the first version of our Nginx integration plugin.
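To give a sense of what these clients automate, here is a minimal Python sketch of the HTTP-01 challenge at the heart of ACME's domain validation: the CA gives the client a token, and the client proves control of the domain by serving a key authorization at a well-known URL. The token and thumbprint values below are hypothetical stand-ins, and a real client also handles account registration, nonces, and certificate issuance on top of this step.

```python
import http.server

TOKEN = "example-token"  # in reality, issued by the CA per challenge
KEY_AUTH = "example-token.account-key-thumbprint"  # token + key thumbprint

class ChallengeHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # The CA fetches /.well-known/acme-challenge/<token> on the
        # domain being validated and checks the response body.
        if self.path == "/.well-known/acme-challenge/" + TOKEN:
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.end_headers()
            self.wfile.write(KEY_AUTH.encode())
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Real validation targets port 80; we use 8080 to avoid needing root.
    http.server.HTTPServer(("", 8080), ChallengeHandler).serve_forever()
```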
Later in 2016, we'll be working to help web developers with a number of the other tasks that currently make correct TLS deployment very difficult, including:

- detection and mitigation of mixed content problems;
- detecting when sites are ready for an HSTS header and offering to deploy one gradually (see the sketch below);
- offering real-time mitigation against TLS vulnerabilities like Heartbleed, BEAST, CRIME, Logjam, and DROWN (at the moment, the client enables good TLS settings when a cert is first installed in Apache, but doesn't support changing them when best practices change); and
- expanding support to install certificates and offer security enhancements to popular email server software.
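As a rough illustration of the first half of that HSTS task, the sketch below (our own toy example, not the client's actual logic) checks whether a site already sends a Strict-Transport-Security header; deciding that a site is genuinely ready for one takes considerably more care.

```python
from typing import Optional
import http.client

def hsts_header(host: str) -> Optional[str]:
    """Return the site's Strict-Transport-Security header, if any."""
    conn = http.client.HTTPSConnection(host, timeout=10)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().getheader("Strict-Transport-Security")
    finally:
        conn.close()

if __name__ == "__main__":
    value = hsts_header("www.eff.org")
    if value:
        print("HSTS already deployed:", value)
    else:
        print("No HSTS header yet; a gradual rollout could be offered.")
```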
Thanks to everyone who has helped to make the launch of Let's Encrypt such a success; we hope that both the server and client projects continue to produce spectacular results in 2016.
- 1. Let's Encrypt includes many sub-projects. There's a server-side codebase called boulder, written largely by EFF and Mozilla staff and contractors, though ISRG is now beginning to hire its own engineering team; there's an operations team at ISRG that actually keeps Let's Encrypt's servers running; there's the client, written primarily by EFF and open source contributors, with some great packaging assistance from Mozilla; and there's the ACME protocol, which was designed by Mozilla, UMich, and EFF engineers. Aside from hosting the renamed client project, EFF will continue to provide substantial organizational support for ISRG and engineering resources for the Let's Encrypt server code and the ACME protocol.
Publicly Funded Research Should Be Publicly Available
When you pay for federally funded research, you should be allowed to read it. That’s the simple premise of the Fair Access to Science and Technology Research Act (S.779, H.R.1477), which was just passed out of a major Senate committee.
Under FASTR, every federal agency that spends more than $100 million on grants for research would be required to adopt an open access policy. Although the bill gives each agency some flexibility to develop a policy appropriate to the types of research it funds, every agency's policy would require that published research be made available to the public no later than 12 months after publication.
Longtime EFF readers will note that we’ve discussed FASTR here several times before, and each time, its acronym has grown a bit more ironic. Now its path forward is finally open, and it’s up to us to tell Congress that we can’t wait another year.
As we’ve noted before, FASTR isn’t a perfect bill. An ideal open access mandate would require that the research be shared under a license that allows anyone not only to read the research, but also to reuse and redistribute it for any purpose. Read-only access is a big step in the right direction, but it’s just a step.
The Senate Committee on Homeland Security and Governmental Affairs made one unfortunate amendment to the bill, raising the embargo period for research from six months to 12. If six months was already a long time to wait for access to cutting-edge research, then 12 is an eternity.
The 12-month embargo puts FASTR in line with an existing White House Office of Science and Technology Policy memo requiring agencies to develop public access policies. It’s disappointing that the new Senate version of FASTR doesn’t break much new ground beyond what the White House memo currently requires. On the other hand, a new administration is just around the corner. Codifying the open access mandate in law will ensure that future administrations make publicly funded research available to the public.
Let’s make the message loud and clear to Congress: this is the year for a federal open access mandate.
If you are a company that collects customer data, it’s your job to protect it. Your customers expect it. You can’t dodge that responsibility by altering your terms and conditions, especially when finding them is equivalent to playing “Where’s Waldo?” on your website.
This is not only outrageous, but in EFF’s view, also not legally enforceable.
VTech, the Hong Kong-based maker of many children’s digital toys, apparently doesn’t see things this way.
First, a little background. In November 2015, VTech was hacked, compromising the information of as many as 6.3 million children and 4.8 million parents. The data exposed by the breach included children’s names, ages, genders, photos, and chat logs, along with information linking them to their parents and their home addresses. After downplaying the extent of the hack, VTech finally came forward with the details, including an estimate of the number of victims by country of residence.
The hack was remarkable because, after a year of other high-profile breaches like Ashley Madison and OPM, VTech was found employing spectacularly outdated security practices and software. For instance, the site where user accounts were created had no SSL encryption, the company was using severely weak MD5 hashes to scramble user passwords, and API calls were returning unrelated database queries when they should have been locked down, among other problems.
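To see why MD5 is such a weak choice for passwords, compare it to even a basic salted, deliberately slow key derivation function. The Python sketch below is purely illustrative (a real system should use a dedicated password hash like bcrypt, scrypt, or Argon2):

```python
import hashlib
import os

password = b"hunter2"

# Fast, unsalted MD5: every user with this password gets the same
# hash, and attackers can test billions of guesses per second.
weak = hashlib.md5(password).hexdigest()
print("MD5:", weak)

# A random per-user salt plus a slow KDF makes precomputed tables
# useless and each individual guess expensive.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=200_000)
print("PBKDF2:", strong.hex())
```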
Since then, VTech has been working with experts to improve its security, and the progress is evident, especially in the company’s webpages, which are now SSL-encrypted. However, given the company’s basically non-existent security just a few months ago, it’s surprising that its strategy for reassuring customers consists of disclaiming all responsibility for protecting user information.
In terms and conditions reachable only through an obscure link, the company disclaims its responsibility to protect user information.
We know that there’s no such thing as “perfect” security, but when you are caught with bad practices in a banner year for data breaches, you should be dedicated to securing your users’ information instead of hiring lawyers to sneakily limit your liability. Especially when that supposed exemption from liability is communicated to users by hiding it deep inside a mountain of text.
The near-total obscurity with which these changes to the terms and conditions were communicated becomes even more glaring given their absence from the website specifically designed to tell parents the status of services affected by the breach. On that page, VTech instead paints a picture of a company working hard to protect user data, assuring parents and children that they can rest easy.
A mention of, or at least a link to, the terms absolving VTech of all responsibility in case of a breach would have been nice here.
Lastly, experts agree that these terms and conditions may be unenforceable in two of VTech’s major markets, the U.S. and Europe. In Europe, data protection laws require companies to secure their customers’ data.
In the United States, EFF’s view is that the Children’s Online Privacy Protection Act (COPPA) requires companies collecting data from children under 13 to use reasonable means to protect it. The first COPPA FAQ on the FTC website applicable to service providers says they must:
Maintain the confidentiality, security, and integrity of information they collect from children, including by taking reasonable steps to release such information only to parties capable of maintaining its confidentiality and security
Given the new terms’ near-unenforceability, the significant lack of good faith in communicating them to users, and the ill will they are garnering from the Internet at large, VTech should do the right thing and get rid of them.
VTech’s resources would be better spent ensuring its customers’ sensitive data is secure, instead of finding ways to get out of that responsibility via legal trickery.
The Open Source Initiative, a nonprofit that certifies open source licenses, has adopted an important principle about standards, DRM, and openness, and just in time, too.
The World Wide Web Consortium (W3C), which makes the core standards that the Web runs on, is in the midst of a long, contentious effort to add "DRM" (Digital Rights Management [1]) to HTML5, the next version of the Web's core markup language. Laws like the Digital Millennium Copyright Act (which has analogs all over the world) give companies the power to make legal threats against people engaged in important, legitimate activities. Because the DMCA regulates breaking DRM, even for legal reasons, companies use it to threaten and silence security researchers who embarrass them by pointing out their mistakes, and to shut down competitors who improve their products by adding legitimate features, add-ons, parts, or service options. The Web relies on the distributed efforts of independent security researchers, and its historic strength has been the ability of companies and individuals to innovate without permission, even when they were disrupting an existing business.
We tried to dissuade the W3C from adopting DRM, but failed. Now we're on to Plan B: a proposal, modeled on the W3C's existing policies, that asks companies to promise not to sue security researchers or competitors for the mere act of breaking DRM. Companies can still sue anyone who hacks their users, violates their copyrights, or interferes with their service, but they have to use laws specific to those activities. We call it a non-aggression covenant, and by signing it, companies only give up the right to sue people who've done nothing wrong. The covenant doesn't interfere in any way with the rights companies get under copyright, tort, and trade secret law.
No one's ever tried anything like this, because no open standards body like the W3C has ever tried to standardize something as divisive as DRM. Our solution is a new one, but it's also a good one.
Today, the Open Source Initiative validated our approach. They adopted a set of "Principles of DRM Nonaggression for Open Standards," based on our proposal to the W3C, telling standards bodies that their work can only be called "open" under OSI's definition if they take steps to protect implementers and security researchers:
An "open standard" must not prohibit conforming implementations in open source software. (See Open Standards Requirement for Software).
When an open standard involves content restriction technology commonly known as Digital Rights Management (DRM)—either directly specifying an implementation of DRM or indirectly consuming or serving as a component within DRM technology—the laws in some jurisdictions against circumvention of DRM may hinder efforts to develop open source implementations of the standard. In order to make open source implementations possible, an open standard that involves DRM needs an agreement from the standards body and the authors of the standard not to pursue legal action for circumvention of DRM. Such an agreement should grant permission to:
- circumvent DRM in implementations of the open standard
- distribute implementations of the open standard, even if the implementation modifies some details of the open standard
- perform security research on the open standard or implementations of the open standard, and publish or disclose vulnerabilities discovered
We are deeply appreciative of the OSI's support for this approach. The core standards of the Internet are on a collision course with a notoriously bad law, and with their help, we may be able to steer it clear of the worst danger.
- 1. Or Digital Restrictions Management
One of the United States government's priorities in Internet policy is encapsulated by a term that's recently been making the rounds: the "free flow of information." It appears almost every time U.S. officials describe how they intend to protect the free and open Internet, especially when it comes to international law. The general idea is that bits of online data should not be discriminated against, hindered, or regulated across national boundaries. As a general principle, this sounds positive. It could be a helpful antidote to arbitrary data localization rules that threaten to break up the global Internet, or to attempts by governments to block and censor foreign websites using nationwide filters. At least, that is the claim that officials such as U.S. Trade Representative Michael Froman have made, saying, with reference to the TPP:
And we will continue to press our partners to allow digital information to cross borders unimpeded. We are working to preserve a single, global Internet, not a Balkanized Internet defined by barriers that would have the effect of limiting the free flow of information and create new opportunities for censorship.
But does this rhetoric always represent a real commitment to advancing freedom of expression online? We can find the answer by looking at how the free flow of information has been enshrined in the Trans-Pacific Partnership Agreement (TPP). There, it's not quite such a high-minded ideal. Instead, it slants Internet regulation toward business interests, while doing very little in practice to counter human rights problems like government censorship.
Trade agreements like the TPP and the Trade in Services Agreement (TISA) already carry free flow of information rules (and we expect such rules to appear in the Transatlantic Trade and Investment Partnership [TTIP] as well). In a trade context, free flow is meant to link the ideal of free trade, with its elimination of obstacles to global commercial exchange, to the question of whether nations can restrict the movement of data to within their borders: for instance, by passing data localization statutes.
Beyond the narrow issue of data localization, however, the free flow of information policies we've seen and analyzed in existing international treaties fail to meaningfully protect free expression and access to knowledge, the fundamental rights that such "free flow of information" might be expected to uphold in the first place. Many groups are concerned that these trade treaty provisions could even be used to undermine other fundamental rights: for instance, by claiming that privacy laws such as the European Union's data protection regulations are a violation of free trade, since they impose conditions on the transfer of personal data outside the EU.
The TPP illustrates these shortcomings well. Its free flow of information rules can be enforced only on behalf of foreign enterprises, and only those based in countries that have signed the TPP. So if a country were to enact a law banning some type of online content, the TPP's free flow of information rules would do nothing to prevent the enforcement of that censorship against websites or platforms that are locally owned in that country. Likewise, the rules would be unenforceable against censorship where the government demands that content be removed from platforms operated or owned by a company based outside of the twelve TPP countries.
Take, for example, Malaysia, which is a signatory to the TPP. The country has intensified its censorship of news and criticism since last summer, when a national scandal broke out over allegations that the Malaysian Prime Minister had misappropriated $700 million from a state development fund. Following Malaysia's more recent censorship of one of the country's major news outlets, the Malaysian Insider, the U.S. State Department denounced these policies in a public memo it sent last week, asking the Malaysian government to "fully respect freedom of expression, including the free flow of ideas on the internet." The phrasing of the State Department's request prompted some to speculate about whether the TPP could be used to thwart this kind of censorship by its signatories in the future.
But the TPP's free flow of information rules would have little effect on such blocking or removal of online content. The rules cannot reach the government's suppression of the Malaysian Insider, because the newspaper is a Malaysian publication and therefore not a “covered person” under the TPP's definition. Nor would they apply to the government's blocking of the Sarawak Report, which is operated out of the United Kingdom, a country that is not involved in the TPP.
If the Malaysian government began imposing blocking orders on U.S.-based companies such as Twitter or Facebook, it's feasible that, once the TPP went into force, those companies could convince the U.S. government to bring a trade complaint under the free flow of information provisions to shield them from such orders. However, the Malaysian government could defend itself against such a claim by arguing that its censorship was “to achieve a legitimate public policy objective,” such as national security or the defense of public morals; the TPP allows this as an exception to the free flow rules. And even if such a claim were brought, it would be heard by international trade arbitrators, who are not human rights experts, making this hardly an appropriate venue for global Internet censorship disputes.
“Free flow of information” as a political term has been adapted to play two very different roles in international relations. Outside the world of trade, it is a human rights issue, meant to challenge the idea of national censorship and the fragmentation of the Internet. Inside the narrower and closed context of trade agreements, however, it is used exclusively to tackle the commercial consequences of restrictions on data flows, like data localization and national privacy laws.
Once again, the values of trade agreements and the wider values of the Internet are a clumsy and incomplete match. That is why the TPP and other trade agreements that try to tackle Internet issues under the guise of trade remain bad for the Internet and bad for individual Internet users.
~
Are you in the U.S.? If so, take action now and call on your Congress members to hold congressional hearings about the contents of the TPP immediately, and demand that they reject the deal when the agreement comes up for an eventual ratification vote.

Last week, EFF filed a brief in support of Apple’s fight against the FBI, in which we argued that forcing Apple to write—and sign—a custom version of iOS would violate the First Amendment rights of Apple and its programmers. That’s because the right to free speech sharply limits the government’s ability to compel unwilling speakers to speak, and writing and signing computer code are forms of protected speech. So by forcing Apple to write and sign an update to undermine the security of iOS, the court is also compelling Apple to speak in violation of the First Amendment. Along with our brief, we published a “deep dive” into our legal arguments, which you should check out before reading further.
Our argument got some positive attention, but it’s also raised valid questions from folks who aren’t totally convinced. This (long) post attempts to clear up some of those questions.
A caveat: First Amendment doctrine has a lot of facets. Much as it would be nice to present a grand unified theory of free speech, that isn’t the function of a legal brief, or of this FAQ. We’ve made an argument that is firmly grounded in First Amendment case law and that fits the particulars of Apple’s case. Nevertheless, it’s important that our argument be consistent with well-accepted government practices. We think what the FBI wants Apple to do is unprecedented, and an Apple win here wouldn’t risk making every government regulation into a constitutional violation.
With that out of the way, here are some common questions we’ve heard:
Isn’t Apple’s signature more like an instruction to the iPhone and not speech at all?
In order for the court’s All Writs Act order to violate Apple’s First Amendment rights, it has to implicate First Amendment-protected speech. So is the code that Apple would be forced to write and sign really speech? To the extent that most users think about what applications are, they tend to think of them in functional terms: code causes the computer to do something.
But courts that have examined First Amendment protections for computer code have been clear that software has both functional and communicative elements. Just like musical scores, code communicates ideas, and it is interpretable by (some) people. To use the language of First Amendment law, code is “expressive.” And just like other forms of communication, code can be elegant or messy, terse or verbose, and so on.
It’s true that when speech has both expressive and non-expressive elements, government regulations aimed at the non-expressive aspects are subject to less strict scrutiny by courts. Here, you could argue that the government is merely asking Apple to achieve a functional result by writing and signing code, but we think that ignores the inextricably expressive elements of compelling Apple’s signature.
Apple’s signature conveys its strong endorsement of the signed code, what the Supreme Court has called “an affirmation of a belief.” Apple says it believes in strong security for its devices, and it has designed them to run only signed iOS code as a means of ensuring this security. (You can certainly quibble with this as a means of enforcing walled gardens, but it is a conscious choice.) When Apple signs code, it is conveying, among other things, that (1) the code originated with (or has been reviewed and approved by) Apple; (2) the code is authentic and has not been modified by a malicious third party; and (3) the code is safe to run on an Apple device.
Given this, forcing Apple to sign code it does not want to sign is clearly expressive, just as forcing parade organizers to include marchers they do not want injects an unwanted message into the parade. As we argue in the brief, the court’s order is “akin to the government dictating a letter endorsing its preferred position and forcing Apple to transcribe it and sign its unique and forgery-proof name at the bottom.”
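For readers curious about the mechanism behind such a signature, here is a toy Python sketch of signing and verification using the third-party cryptography package. This is not Apple’s actual scheme; it simply shows how a signature binds the signer’s endorsement to an exact sequence of bytes, so a device can refuse anything the vendor hasn’t blessed.

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # kept secret by the vendor
verify_key = signing_key.public_key()       # baked into every device

firmware = b"operating system update"
signature = signing_key.sign(firmware)      # the act of endorsement

# The device runs only code that passes this check.
verify_key.verify(signature, firmware)
print("Signature valid: code runs.")

# Change a single byte and the endorsement no longer holds.
try:
    verify_key.verify(signature, b"Operating system update")
except InvalidSignature:
    print("Tampered code rejected.")
```

Because only the vendor holds the private key, compelling a signature is compelling precisely this act of endorsement.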
Isn’t the “audience” for Apple’s speech just the phone? Or the government? And can’t Apple say it doesn’t “agree” with this speech?
Even if you accept that forcing Apple to write and sign code is compelled speech, you might think it’s a special case, since the only “audience” is the single iPhone 5c that the FBI wants to unlock. According to this argument, it might be different if Apple were forced to push weakened iOS updates to non-consenting users.
But that’s simply not how the First Amendment works. Forcing Apple to engage in speech that is only transmitted to the government, in private, is compelled speech nonetheless, just as the Supreme Court has held that it is unconstitutional to force benefit seekers to sign a loyalty oath that only the government sees. And it’s irrelevant that Apple can spend as much money as it wants telling the world that it doesn’t “agree” with the signed code—the Supreme Court has likewise made clear that compelled speech is unconstitutional even when speakers can use other channels to disavow that speech.
What about cigarette labeling and highway safety mandates?
Wouldn’t this argument invalidate a whole range of government regulations—like mandatory labeling for cigarettes or nutritional information on food packaging? What about highway safety rules—bumpers, wheels, and the thousands of other arguably expressive choices that go into making a car that’s allowed on US roads? Wouldn’t companies be allowed to disobey any time a court’s order required them to engage in some speech—such as when a CEO has to order employees to shut down a factory that is illegally polluting?
As we’ve said, we think this is an especially egregious case that rises above lots of other hypotheticals that might incidentally involve compelled speech.
First, compelled speech doctrine has an exception for “purely factual and noncontroversial information” in commercial speech, aimed at preventing consumer deception. That’s the theory that has been used to require some cigarette labeling and other “purely factual” government-mandated labels. It should be emphasized, though, that this is a narrow exception. (Even some cigarette labels—those that go beyond “purely factual, noncontroversial” information—have been struck down.)
Second, many safety regulations might indirectly require companies to make certain design decisions in order to comply. For example, since the late 1980s, cars have been required to include a “Liddy Light,” a center high-mounted brake light that improves visibility when stopping.
These regulations are distinguishable from requiring Apple to write and sign code, which is a compelled affirmation of belief. Safety regulations are aimed at the non-expressive elements of car designs—the government is making a judgment about necessary safety specifications, not their aesthetic or expressive content. Thus they are arguably subject to a less stringent form of constitutional scrutiny. By contrast, compelled writing and signing of code would require Apple to falsely endorse the government’s chosen version of iOS, which Apple believes to be detrimental to its users’ security. This is somewhat like requiring a food manufacturer to include an ingredient the company feels is unsafe for consumption and then label the package with the company’s “seal of quality.” Apple’s signed code is inherently expressive; indeed the government wants to force Apple to sign the code precisely because of the message the signature conveys.
Even if a certain safety regulation is considered a speech compulsion, it would not be automatically unconstitutional. Even under so-called strict scrutiny, the government can compel speech if it can show that the compelled speech is narrowly tailored to advancing a highly important public interest that cannot be addressed in any other way. With safety regulations, the government may be able to demonstrate the necessity of specific designs, particularly since they're part of comprehensive regulatory schemes. Similarly, forcing a CEO to order a factory to be shut down might compel some speech, but this rather incidental burden on the CEO’s First Amendment rights could well be justified in light of the government interest at stake. In Apple’s case, the government has not demonstrated any such necessity, nor that the All Writs Act order is narrowly tailored.
Of course, these aren’t the only questions in Apple’s fight, and free speech is just one of many reasons to oppose the government’s demands. But the First Amendment is an important protection for strong encryption, and we’ll rely on it as the new Crypto Wars roll on.