Deeplinks Blogs related to Innovation
MPAA Asks Obama for More Copyright Surveillance of the Internet
Legislative Analysis by Tim Jones

As part of their commitment to transparent and open government, the Obama Transition Team is posting the lobbying agendas of the groups it meets with for public review and comment. One of the more interesting documents to be found there is the Motion Picture Association of America's "international trade" agenda.
Some of the MPAA's agenda is reasonable, such as cracking down on commercial optical disc piracy. But much of it, if adopted, would result in a substantially less free and safe internet, at little or no actual benefit to the artists and workers the MPAA claims to represent.
Of course, this may not be immediately clear when reading the document, since it's all couched in DC lobbyist-speak. Here, then, is a guide to understanding what's really being talked about.
First:
"Achieving inter-industry cooperation in the fight against online piracy, including through automated detection and removal of infringing content is imperative to curb the theft of online content..."
This kind of automated-detection technology has long been a favorite fantasy of the MPAA and affiliates. They've pushed for it on US campuses, in US states, in US trade law [PDF], and in Europe, so it's hardly surprising to see them pushing for country-wide requirements at the federal level.
The MPAA's faith in "filtering" is pure magical thinking. It presupposes invading the privacy of innocents and pirates alike by monitoring every packet on the Internet (which is bad enough when the NSA does it). And it ignores the reality of strong encryption, which will utterly defeat network filtering techniques (thus necessitating more intrusive alternatives — how about a copyright surveillance rootkit on every PC?). Sacrificing our privacy for the pipe-dreams of one industry is a bad idea.
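To make the encryption point concrete, here is a toy sketch (invented for illustration, not modeled on any real filtering product): a signature-matching filter catches a known fingerprint in plaintext traffic, but sees only noise once the payload is encrypted. The XOR keystream below is a self-contained stand-in for real encryption like TLS, not a scheme anyone should actually use.

```python
# Toy illustration: a network filter that scans payloads for a known byte
# signature is blind once the payload is encrypted.
import hashlib

SIGNATURE = b"forbidden-movie-fingerprint"  # invented fingerprint

def filter_flags(payload: bytes) -> bool:
    """Pretend deep-packet inspection: flag traffic containing the signature."""
    return SIGNATURE in payload

def toy_encrypt(payload: bytes, key: bytes) -> bytes:
    """XOR with a hash-derived keystream -- a stand-in for real encryption."""
    stream = b""
    counter = 0
    while len(stream) < len(payload):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(payload, stream))

packet = b"...header..." + SIGNATURE + b"...rest of file..."
print(filter_flags(packet))                       # plaintext: filter catches it
print(filter_flags(toy_encrypt(packet, b"k3y")))  # encrypted: filter sees noise
```

Once traffic looks like random bytes, the only way to keep filtering is to move inspection onto the endpoints themselves, which is the "rootkit on every PC" scenario the post warns about.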
These reasons and more were outlined by EFF in a 2005 white paper, and again last January in a memo to European lawmakers [PDF].
Next up:
"MPAA views recent efforts by the Governments of France and the United Kingdom to protect content on-line and facilitate inter-industry cooperation as useful models."
Here, the MPAA is advocating for a number of things, the most problematic of which is a "three strikes" internet termination policy. This would require ISPs to terminate customers' internet accounts upon a rights-holder's repeated allegations of copyright infringement, potentially without any due process or judicial review. A three-strikes policy was recently adopted by the French Senate, and may become law if adopted by the French National Assembly next year.
Because three-strikes policies do not guarantee due process or judicial oversight of whether the accusations of copyright infringement are valid, they effectively grant the content industry the ability to exile any individual they want from the internet. Lest we forget, there is a history of innocents getting caught up in these anti-piracy dragnets. (Copyfighter Cory Doctorow has wondered what would happen if the MPAA's erroneous notices were subject to a similar three-strikes law.)
Thankfully, members of the European Parliament vehemently rejected these measures, resolving that "The cut of Internet access is a disproportionate measure regarding the objectives. It is a sanction with powerful effects, which could have profound repercussions in a society where access to the Internet is an imperative right for social inclusion." Let's hope the US government's decisions on this are as wise.
EFF outlined these concerns and more in our September 2008 comments to the US Trade Representative [PDF].
And, finally:
"MPAA has identified the following countries for priority trade policy attention in 2009: Canada, China, India, Mexico, Russia and Spain."
Translation: Not satisfied with wrecking the internet for US citizens alone, the MPAA would like the US government to pressure foreign governments to adopt the same harmful measures. This is made explicit by a look at, for instance, the International Intellectual Property Alliance's 2008 one-sheets on Canada [PDF] and Spain [PDF]: The MPAA wants these governments to institute mandatory internet filtering and three-strikes laws. Canada is being singled out by the MPAA because of its sensible rejection of the Canadian version of the US's deeply flawed Digital Millennium Copyright Act. In Spain, the MPAA is frustrated with rulings in 2006 that failed to punish Spanish citizens sufficiently harshly for file-sharing.
This week in the San Jose Mercury News, Ed Black, CEO of the Computer & Communications Industry Association, described how adoption of the MPAA's international trade demands would deeply set back US innovation and foreign policy.
How the Obama administration will react to these demands remains to be seen. The adoption of a Creative Commons license for Change.gov content indicates that there just might at long last be a seat at the table in the White House for smart thinking on copyright issues. Hopefully the Obama Administration will prove strong enough to stand up to the MPAA's lobbying, and instead institute positive reforms of US copyright law.
If you'd like to share your thoughts on this matter with the Obama Transition Team, the MPAA's agenda is open to public review and comment on Change.gov.
Updated Dec 15: The original post mistakenly indicated that France's three-strikes law had already gone into effect.
Internet Censors Must Be Accountable For The Things They Break
Deeplink by Peter Eckersley

Yesterday's scandal over the UK Internet Watch Foundation's attempt to censor a purportedly pedophiliac Wikipedia entry raises some important questions about unintended technical consequences of Internet censorship systems. The Wikipedia article was about a 1970s hard rock album called Virgin Killer [NSFW] by the German band Scorpions.
Censorship technologies are purveyed as a way to protect us from the evils of child abuse. But they're costly systems that are unlikely to actually protect anyone or prevent any child abuse — they're more likely to interfere with the way the Internet works and hamper innovation by online communities.
The Wikipedia entry for Virgin Killer contains an image of the album's cover, which depicts a nude teenage girl, with only minimal obscurement by a "shattered glass" effect. The article contains an encyclopedic discussion of how executives at RCA Records had decided on the cover concept, the controversy it had caused at the time, and how the woman depicted had no objections to the image's use, even in retrospect. Wikipedia had come under pressure from conservative Christian groups in the United States to remove the image, but through extensive debate Wikipedia editors agreed to keep the image with the article.
Last weekend, the Internet Watch Foundation (IWF), a coalition of UK ISPs, Internet companies and censorware vendors, listed the image as a "potentially illegal indecent image of a child hosted outside the UK". This caused a number of UK ISPs to begin routing all Wikipedia traffic through filtering proxy servers, which disallowed access not only to the cover image but the entire Virgin Killer article.
This is where the unintended consequences began. Some of the ISPs' proxy servers were unable to handle the volume of traffic Wikipedia attracts, making Wikipedia unavailable entirely. But there were also stranger side effects.
Wikipedia is an encyclopedia which allows anyone to edit an article by clicking an "edit" button, changing the text, and clicking "save". Because of the site's enormous visibility, there are thousands of people around the planet who are continuously trying to deface pages by editing them. Wikipedia responds to the most persistent attempts at vandalism by blacklisting IP addresses that are repeatedly the source of vandalistic edits. In some cases, users can get around the blocks if they have accounts at Wikipedia, but in severe cases, users must petition to have their account allowed through the block. Without these blocks, the encyclopedia would have long ago been reduced to gibberish.
What does this have to do with censorship by the IWF? It turns out that IWF had decided upon a method for censorship which was incompatible with Wikipedia's anti-vandalism architecture. The proxy servers that IWF member ISPs used to block the Virgin Killer article meant that Internet users at Britain's largest ISPs were suddenly sharing a handful of proxy servers. Many of those proxy servers were configured to make the users affected by them indistinguishable to Wikipedia. Enough of those indistinguishable users were vandalizing Wikipedia that site admins were forced to block all of them from editing.
That meant that huge numbers of ordinary British Internet users could no longer edit Wikipedia, because technical decisions by Internet censors suddenly caused them to be sharing IP addresses with a horde of vandals.
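A minimal sketch (assumed for illustration; not Wikipedia's actual anti-vandalism code) shows why per-IP blocking fails once a filtering proxy collapses many users behind one address:

```python
# Simplified model of per-source-IP vandalism blocking. The threshold and
# IP addresses are invented; real wiki software uses richer heuristics.
BLOCK_THRESHOLD = 3

class EditGate:
    def __init__(self):
        self.vandal_edits = {}   # source IP -> count of bad edits seen
        self.blocked = set()

    def record_vandalism(self, ip):
        self.vandal_edits[ip] = self.vandal_edits.get(ip, 0) + 1
        if self.vandal_edits[ip] >= BLOCK_THRESHOLD:
            self.blocked.add(ip)

    def may_edit(self, ip):
        return ip not in self.blocked

gate = EditGate()

# Without the proxy, each user reaches the site from their own address.
gate.record_vandalism("198.51.100.7")    # one vandal, their own IP
print(gate.may_edit("203.0.113.42"))     # innocent user elsewhere: unaffected

# With the censorship proxy, vandals and innocents share one IP.
PROXY_IP = "192.0.2.1"
for _ in range(BLOCK_THRESHOLD):
    gate.record_vandalism(PROXY_IP)      # a few vandals behind the proxy
print(gate.may_edit(PROXY_IP))           # every proxied user is now blocked
```

The gate has no way to tell the vandal from the thousands of innocent customers sharing the proxy's address, so it blocks them all together.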
Today, the IWF reversed its previous ban on the Virgin Killer article, concluding that the intervention had been counterproductive because it was causing more people to see the allegedly pornographic image. We agree with their decision, but for the wrong reasons: they had no business censoring that article in the first place -- the community of Wikipedia editors is, if anything, a more legitimate, reliable and grown-up adjudicator of which images are appropriate subject matter for an encyclopedia. And the block on the editing of Wikipedia by a large portion of the UK population just highlights the dangers of deleterious unintended consequences once we travel down the low road of Internet censorship.
The IWF's censorship failed doubly to be transparent. It is not transparent technologically, which leads inevitably to a conflict with the end-to-end expectations of the rest of the Net. And its process of attempting to block and filter is far from transparent to those who are caught up in it: from puzzled and frustrated Wikipedia users to the millions of Britons who never realized they were paying their ISPs for a compulsorily and arbitrarily sanitized Internet.
Update, 10th of December: Mark Pellegrini, Wikimedia Foundation Communications Committee member, writes to point out that the IWF blacklist only blocked the Virgin Killer article and the image's description page; they did not block the actual URL of the controversial image itself.
Remixers, Unlockers, Jailbreakers, Oh My!
Legal Analysis by Fred von Lohmann

Yesterday, EFF filed petitions (1, 2) with the Copyright Office seeking DMCA exemptions for three categories of activities that do not violate copyright laws, but that are still jeopardized by the DMCA's ban on bypassing technical protection measures used to control access to copyrighted works (i.e., DRM). The three exemptions are for:
- Noncommercial video creators (like YouTubers and vidders) who rip DVDs in order to use clips for fair use remixes;
- Cell phone owners who want to unlock their phones to use them on cellular networks of their choosing;
- Cell phone owners who want to "jailbreak" their phones in order to use applications of their choosing (e.g., iPhone owners who want apps from sources other than the iTunes App Store).
The exemption for remix video creators is necessary to protect fair use in a digital world where visual literacy (what Larry Lessig calls RW culture) is increasingly important. Today, if you rip a DVD, the MPAA takes the position that you've broken the law, even if you are making a video that comments on the latent racism in Disney films or the sexualized violence in 300. This is what free speech looks like in the 21st century, and a DMCA exemption is necessary if we want to avoid driving millions of amateur creators into the copyright underground.
The cell phone exemptions (unlocking and jailbreaking) are necessary to protect your "freedom to tinker" with products you own. Cellular carriers lock their phones not to protect their copyrights, but rather to discourage customers from switching carriers. This is not only anti-competitive, but puts millions of used cell phones into landfills each year. More recently, cell phone makers have started locking phones to a single source for applications -- which is why more than 350,000 iPhone owners have "jailbroken" their iPhones in order to get the apps they want, instead of just the ones Apple is willing to let them have.
Others are seeking exemptions for computer security researchers who want to investigate DRM on videogames (SecuROM, we're looking at you); documentarians, film professors, and media literacy educators who need to take clips from DVD; and consumers who have been left high and dry by vendors who retired their DRM authentication servers (e.g., Walmart, Yahoo, Microsoft). All of the proposals have been posted on the Copyright Office website. Comments supporting or opposing the proposed exemptions are due by Feb. 2, 2009. Hearings will follow in the Spring, and the Copyright Office will announce its final determinations in October 2009, as the last set of exemptions expire.
Apple Confuses Speech with a DMCA Violation
Legal Analysis by Fred von Lohmann

Slashdot reports that Apple has sent a "cease and desist" email to bluwiki, a public wiki site, demanding the removal of postings there by those who are trying to figure out how to write software that can sync media to the latest versions of the iPhone and iPod Touch.
Short answer: Apple doesn't have a DMCA leg to stand on.
At the heart of this is the iTunesDB file, the index that the iPod operating system uses to keep track of what playable media is on the device. Unless an application can write new data to this file, it won't be able to "sync" music or other content to an iPod. The iTunesDB file has never been encrypted and is relatively well understood. In iPods released after September 2007, however, Apple introduced a checksum hash to make it difficult for applications other than iTunes to write new data to the iTunesDB file, thereby hindering an iPod owner's ability to use alternative software (like gtkpod, Winamp, or Songbird) to manage the files on her iPod.
The original checksum hash was reverse engineered in less than 36 hours. Apple, however, has recently updated the hashing mechanism in the latest versions of the iPhone and iPod Touch. Those interested in using software other than iTunes to sync files to these new iPods will need to reverse engineer the hash again. Discussions about that process were posted to the public bluwiki site. Although it doesn't appear that the authors had yet figured out the new iTunesDB hashing mechanism, Apple's lawyers nevertheless sent a nastygram to the wiki administrator, who took down the pages in question.
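For readers curious how a checksum hash can lock out third-party software, here is a hypothetical sketch. The secret constant and hashing scheme below are invented for illustration; Apple's actual mechanism is obfuscated and different.

```python
# Hypothetical model of a vendor checksum gating writes to a database file.
import hashlib

VENDOR_SECRET = b"not-the-real-apple-constant"  # invented for illustration

def sign_db(db_bytes: bytes) -> bytes:
    """Official software appends a hash only it knows how to produce."""
    return db_bytes + hashlib.sha1(VENDOR_SECRET + db_bytes).digest()

def firmware_accepts(signed: bytes) -> bool:
    """The device recomputes the hash before trusting the database."""
    db, tag = signed[:-20], signed[-20:]
    return hashlib.sha1(VENDOR_SECRET + db).digest() == tag

official = sign_db(b"playlist: song1, song2")
print(firmware_accepts(official))                    # official write accepted

# A third-party app that doesn't know the scheme writes a plain file:
third_party = b"playlist: song1, song2, song3"
print(firmware_accepts(third_party + b"\x00" * 20))  # rejected: wrong hash
```

Note that anyone can still *read* the database in this model; the hash only blocks writing, which is why reverse engineers target the hash rather than any encryption.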
Here are just a few of the fatal flaws in Apple's DMCA argument.
Where's the "technology, product, service, device, component, or part thereof"?
The DMCA provides that:
No person shall manufacture, import, offer to the public, provide, or otherwise traffic in any technology, product, service, device, component, or part thereof, that ... is primarily designed or produced for the purpose of circumventing protection afforded by a technological measure that effectively protects a right of a copyright owner....
The information posted on the wiki appeared to be text, along with some illustrative code. Nothing that I saw on the pages I was able to review would appear to constitute a "technology, product, service, device, component, or part thereof." In fact, the authors had apparently not yet succeeded in their reverse engineering efforts and were simply discussing Apple's code obfuscation techniques. If Apple is suggesting that the DMCA reaches people merely talking about technical protection measures, then they've got a serious First Amendment problem.
Who owns the copyrighted work?
The iTunesDB file is not authored by Apple, nor does it appear that Apple has any copyright interest in it. Instead, the iTunesDB file on every iPod is the result of the individual choices each iPod owner makes in deciding what music and other media to put on her iPod. In other words, the iTunesDB file is to iTunes as this blog post is to Safari -- when I use Safari to produce a new work, I own the copyright in the resulting file, not Apple.
So if the iTunesDB file is the copyrighted work being protected here, then the iPod owner has every right to circumvent the protection measure, since they own the copyright to the iTunesDB file on their own iPod.
Where's the access control?
The contents of the iTunesDB file are not protected at all -- any application can read them. As a result, the obfuscation and hashing mechanisms used by Apple to prevent people from writing to the file cannot qualify as "access controls" protected by Section 1201(a) of the DMCA.
Apple might argue that the checksum hash prevents people from preparing derivative works, which means that it's a "technological measure that effectively protects the right of a copyright owner" (as noted above, however, it's the user, not Apple, who owns any copyright in the iTunesDB file). The DMCA, however, does not prohibit circumvention of technical measures that are not access controls, although it does restrict trafficking in tools that circumvent these measures. But, as mentioned above, there are no "tools" on the bluwiki pages.
What about the reverse engineering exemption?
Apple's lawyers also appear to have overlooked the DMCA's reverse engineering exception, 17 U.S.C. 1201(f), which permits individuals to circumvent technological measures and distribute circumvention tools "for the purpose of enabling interoperability of an independently created computer program with other programs, if such means are necessary to achieve such interoperability, to the extent that doing so does not constitute [copyright] infringement."
Enabling iPods to interoperate with "independently created computer programs" (like gtkpod, Winamp, and Songbird) is precisely what the reverse engineering exception was intended to protect.
Where's the nexus to infringement?
Finally, Apple's DMCA theory fails because any "circumvention" that might be involved here has no connection to any potential copyright infringement. Two decisions by federal courts of appeal (1, 2) have held that without a nexus to potential infringement, there is no violation of the DMCA. And here, it's hard to see how reverse engineering the iTunesDB checksum hash can lead to any infringement of the iTunesDB file -- after all, the reverse engineers presumably aren't interested in making piratical copies of the iTunesDB file. Instead, they just want to sync their iPhones and iPods using software other than iTunes. No infringement there.
Of course, without more than the bare "cease and desist" emails sent by Apple's lawyers to bluwiki, we can't know for certain what other DMCA arguments they may have had in mind. But I certainly can't see any DMCA violation here based on Apple's nastygrams thus far.
Google is Done Paying Silicon Valley's Legal Bills
Commentary by Fred von Lohmann

[I wrote the following op-ed, which appeared in the Nov. 14 issue of The Recorder. Because that publication's website is not publicly available, I'm posting a copy here, with their permission.]
For most of the decade, Silicon Valley technology startups have assumed that Google would pay their legal bills. Not literally, mind you, but rather by taking on the big, high-profile cases about fair use, interoperability, and other digital intellectual property issues that would set precedents that all disruptive innovators could rely on.
Well, Google just put the Valley on notice that the free ride is over, which means more legal burdens for smaller technology companies that previously depended on Google clearing a path for them.
Late last month, Google announced a settlement in its lawsuit with book publishers and authors over its Google Book Search offering. At the heart of the dispute is the question of whether scanning copyrighted books in order to index them violates copyright law, as the publishers argued, or is permissible as a fair use, as Google argued. If approved by the court, the $125 million settlement would buy Google — and only Google — permission not just to scan books for indexing purposes, but also to expand Book Search to provide more access to the scanned books.
The Book Search case is just one of a series of high-stakes lawsuits that Google has taken up in the name of the disruptive innovation that fuels the Internet economy. Others include the billion-dollar suit brought by Viacom over copyrighted video clips appearing on YouTube, as well as cases brought by trademark owners attacking Google's right to sell trademarks as keyword triggers for those "sponsored links" that appear when you use Google's search engine. Google has also fought copyright owners to defend its search engine, news aggregation, image search and Web caching activities.
Google, assisted by its expensive, top-drawer legal team, has a track record of winning these precedent-setting Internet cases. And by winning, Google sets a precedent that other innovators can rely on, as well. In essence, Google's legal investments have paid dividends for the entire Internet innovation economy.
Until now. By settling rather than taking the case all the way (many copyright experts thought Google had a good chance of winning), Google has solved its own copyright problem — but not anyone else's. Without a legal precedent about the copyright status of book scanning, future innovators are left to defend their own copyright lawsuits. In essence, Google has left its former copyright adversaries to maul any competitors that want to follow its lead.
Google will doubtless be considering the same endgame for the Viacom lawsuit against YouTube. If Google can strike a settlement with a large slice of the aggrieved copyright owners, then it solves the copyright problem for itself, while leaving it as a barrier to entry for YouTube's competitors.
But when innovators like Google cut individual deals, it weakens the Silicon Valley innovation ecology for everyone, because it leaves the smaller companies to carry on the fight against well-endowed opponents. Those kinds of cases threaten to yield bad legal precedents that tilt the rules against disruptive innovation generally.
For better or worse, it looks like tomorrow's cutting-edge Internet law precedents are going to be left to smaller companies to set. That means smaller startups (and their venture capital backers) need to start planning strategically to pick up the slack left by Google's gradual retreat from the field of battle. To put it bluntly, they need to set aside real money for litigation and find ways to cooperatively invest in the legal precedents that all of them collectively need.
Reproduced with permission from the Nov. 14, 2008 edition of The Recorder, copyright 2008 ALM Properties. Further reproduction prohibited without permission of ALM Properties.
FCC Unanimously Approves Use of Television "White Spaces"
News Update by Richard Esguerra

Advocates for the opening of the "white spaces" were rewarded with a resounding victory earlier this month when the FCC unanimously voted in favor of allowing unlicensed use of the unused spectrum between TV channels. (For a more complete explanation of white spaces, check out our earlier blog post.) While FCC Chairman Kevin Martin had telegraphed his support for white spaces at the conclusion of technical trials, the landslide vote opens doors for innovation and is a victory for the public over the entrenched media incumbents.
However, it's important to consider the remaining variables in play. The end goal is better wireless broadband access in America -- more Internet, in more places, at lower cost. While innovators have been given a significant green light by the FCC through this vote, there are other milestones to be met and obstacles to overcome: a possible legal challenge from the broadcasters, full implementation of spectrum avoidance technology, and FCC certification of consumer-ready devices. These will all have an effect on the amount of time it takes for white space devices to reach consumers.
Regardless, the FCC's unanimous approval is a major win for the public. It's easy to imagine the FCC playing it safe and succumbing to the incumbent broadcasters instead, closing the gates on improved wireless technology at the outset. But the current Commission's commitment to innovation, its investment in researching the technology, and the efforts of public interest groups and regular folks speaking out made the difference and is paving the way for a better future in wireless broadband.
An Innovation Agenda for the New Administration
Legislative Analysis by Tim Jones

This is the second post in a three-part series outlining how the new leadership in Congress and the White House can restore some of the civil liberties we've lost over the past eight years. Today's post focuses on innovation, fair use and intellectual property. On Friday, we posted about privacy and surveillance, and tomorrow we'll discuss government transparency.
Today's intellectual property (IP) laws frequently fail to strike the proper balance between the rights of creators, copyright holders and the public. Powerful companies interested in maximizing their investments in intellectual property have run roughshod over the people's fair use rights. This has been especially problematic given the explosion of user-generated content sites like YouTube, which celebrate creativity and innovation and actively encourage a remix culture. It is our hope that our government leaders will work to bring balance to the law. Here are some suggestions to get things started:
Repair the Digital Millennium Copyright Act (DMCA). Eliminate the ability of copyright holders to get statutory damages for noncommercial violations of copyright laws. Require proof of actual damages prior to any award based on copyright liability. Raise the requirements for content owners to receive preliminary injunctions against technologies in copyright cases. Congress should pass the FAIR USE Act and the Orphan Works Act.
Reform the U.S. Patent and Trademark Office (PTO), emphasizing its role to promote, rather than impede, innovation. Patents, by constitutional design, are supposed to "promote the Progress of Science and useful Arts." All too often today, patents are used to hold innovation hostage. Patent office procedures should be reviewed to ensure that patent examiners are being given the tools and incentives they need to challenge overbroad patent applications. Simultaneously, avenues for post-grant administrative review procedures should be broadened, ensuring that public interest groups can continue to raise post-grant challenges without restrictive time limitations on their participation.
Don't let the content industry use our government resources to pressure universities and others to participate in their intimidating peer-to-peer dragnet operations.
Show caution before regulating the use of technologies that limit consumer choice or consumer rights. In the United States and abroad, our government should advocate for policies that promote the ability of consumers to use technology they purchase however they choose.
Google Book Search Settlement: A Reader's Guide
Legal Analysis by Fred von Lohmann

As we reported earlier this week, Google has settled the lawsuit brought in 2005 by authors and book publishers regarding its massive book scanning and indexing project. Although the settlement must still be approved by the court and is unlikely to go into effect until sometime late in 2009, commentary has already been flooding the blogosphere. Generally, opinions are split between excitement for users ("better access to zillions of out-of-print books") and suspicion of Google ("one library to rule them all, and in the darkness bind them").
We are still digesting the ~300-page proposed settlement agreement (for those seeking a good overview, the 39-page notice to class members is a good place to start).
So far, two things are plain.
First, this agreement is likely to change forever the way that we find and browse for books, particularly out-of-print books. Google has already scanned more than 7 million books, and plans to scan millions more. This agreement will allow Google to get close to its original goal of including all of those books in Google's search results (publishers got some concessions, however, for in-print books). In addition to search, scanned public domain books will be available for free PDF download (as they are today). But the agreement goes beyond Google's Book Search by permitting access, as well. Unless authors specifically opt out, books that are out-of-print but still copyrighted will be available for "preview" (a few pages) for free, and for full access for a fee. In-print books will be available for access only if rightsholders affirmatively opt in. The upshot: Google users will have an unprecedented ability to search (for free) and access (for a fee) books that formerly lived only in university libraries.
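The access tiers just described can be summarized in a small sketch. This is a deliberate simplification of the settlement's terms, and treating opted-out out-of-print books as search-only is our assumption, not language from the agreement:

```python
# Simplified model of the settlement's access tiers as described above.
def access(status, author_opted_out=False, rightsholder_opted_in=False):
    if status == "public_domain":
        return "free full PDF download"
    if status == "out_of_print":
        if author_opted_out:
            return "search only"            # assumption: opt-out removes display access
        return "free preview; full access for a fee"
    if status == "in_print":
        return "full access" if rightsholder_opted_in else "search only"
    raise ValueError(status)

print(access("public_domain"))
print(access("out_of_print"))
print(access("in_print", rightsholder_opted_in=True))
```

The key asymmetry is in the defaults: out-of-print books are opt-out, while in-print books are opt-in.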
Second, this outcome is plainly second-best from the point of view of those who believe Google would have won the fair use question at the heart of the case. A legal ruling that scanning books to provide indexing and search is a fair use would have benefited the public by setting a precedent on which everyone could rely, thus limiting publishers' control over the activities of future book scanners. In contrast, only Google gets to rely on this settlement agreement, and the agreement embodies many concessions that a fair user shouldn't have to make.
But the settlement has one distinct advantage over a litigation victory: it's much, much faster. A complete victory for Google in this case was probably years away. More importantly, a victory would only have given the green light for scanning in order to index and provide snippets in search results; it would not have provided clear answers for all the other activities addressed in the settlement, such as providing display access for out-of-print books, allowing nondisplay research on the corpus, and providing access for libraries. Litigating all of those fair use questions could easily have taken a decade or more. As University of Michigan head librarian Paul Courant points out, those are years that we would never get back. (University of Virginia's Prof. Siva Vaidhyanathan offers a differing view: "These claims are not convincing when one considers just how great an alternative system could be, if everyone would just mount a long-term, global campaign for it rather than settle for the quick fix.").
Conclusions beyond those two are harder to draw. Many devils are buried in the details of the 300 pages of legalese, and much will turn on how the agreement is implemented. Here are the 6 "big picture" concerns that I'm keeping in mind as I review those details:
Fair Use: How will this agreement impact future fair use cases involving book scanning? Others (like the Open Content Alliance) are scanning books, and they may not have Google's ability (or budget) to strike a deal with the world's publishers. UCLA Law's Prof. Neal Netanel has a few preliminary thoughts along this line at the Balkinization blog.
Innovation: It seems likely that the "nondisplay uses" of Google's scanned corpus of text will end up being far more important than anything else in the agreement. Imagine the kinds of things that data mining all the world's books might let Google's engineers build: automated translation, optical character recognition, voice recognition algorithms. And those are just the things we can think of today. Under the agreement, Google has unrestricted, royalty-free access to this corpus. The agreement gives libraries their own copy of the corpus, and allows them to make it available to "certified" researchers for "nonconsumptive" research, but will that be enough?
Competition: In the words of Prof. Michael Madison, "Has Google backed away from an interesting and socially constructive fair use fight in order to secure market power for itself?" Does this deal give Google an unfair head start against any second-comers to book scanning? The agreement creates an independent, nonprofit Book Rights Registry to dole out Google's royalties, and the parties clearly hope that the Registry will be able to license others on similar terms. But the Registry is empowered to cut a deal with Google on behalf of all rightsholders by virtue of the class action; in order to offer similar blanket licenses to others, it would have to independently acquire rights from each and every copyright owner individually. How long will that take? What about the Registry itself? It hopes to be a monopoly that fixes prices for the entire market of copyright owners -- precisely the kind of thing that landed ASCAP and BMI, which dole out blanket licenses for music, in antitrust trouble decades ago.
Access: This agreement promises unprecedented access to copyrighted books. But by settling for this amount of access, has Google made it effectively impossible to get more and better access? The agreement allows you to "purchase" digital access for out-of-print books, but does not include the right to download the book (unlike public domain books). So you can read the book, but only on Google's terms. Libraries get more access, but for an undisclosed price (OK, one computer for free) and still with a variety of restrictions. In the words of Harvard's head librarian, "As we understand it, the settlement contains too many potential limitations on access to and use of the books by members of the higher education community and by patrons of public libraries."
Public Domain: Early reports are that public domain materials are not regulated by the agreement. Moreover, Google has negotiated a "safe harbor" that protects it from liability for mistakes in evaluating the copyright status of a book. That should result in more willingness to forge ahead with the free PDF posting of books published between 1923 and 1963, where a public domain determination turns on checking government records to see whether the copyright had been renewed. But will Google impose restrictions on these "safe harbor" public domain works? Will the libraries that receive a digital copy of their own public domain holdings impose restrictions on those copies?
Privacy: The agreement apparently envisions a world where Google keeps all of the electronic books that you "purchase" on an "electronic shelf" for you. In other words, in order to read the books you've paid for, you have to log into Google. Google is also likely to keep track of which books you browse (at least if you're logged in). This is a huge change from the privacy we traditionally enjoy in libraries and bookstores, where nobody writes down "Fred von Lohmann entered the store at 19:42:08 and spent 2.2 minutes on page 28 of 0-486-66980-7, 3.1 minutes on page 29, and 2.8 minutes on page 30." If Google becomes the default place to search, browse, and buy books, it will be able to keep unprecedented track of what you read and how you read it, and to collate that with all the other information it has about you. Does the agreement contain ironclad protections for user privacy?
Federal Circuit Reins In Business Method Patents
Legal Analysis by Corynne McSherry

The Court of Appeals for the Federal Circuit yesterday issued a decision that imposes firm limits on business method patents. The ruling effectively overturns a key part of the court’s decision in State Street Bank and Trust v. Signature Financial Group, which opened the door to an explosion of patents on "methods" of doing business so long as the methods involved use of a computer and produced a "useful, concrete, and tangible result."
Bilski applied for a patent on a method of managing the risk of bad weather through commodities trading. Upholding the Patent Office’s rejection of Bilski’s application, the Federal Circuit held (in line with Supreme Court precedent) that processes can be patented only if they are tied to a machine or transform something into a new or different thing. The court found that Bilski’s method was not patentable because “transformations or manipulations of…business risks, or other such abstractions cannot meet the test because they are not physical objects or substances….” The court affirmed that business methods are still patentable, but explicitly rejected State Street’s “useful, concrete, and tangible result” test, which many believed had cleared the way for improper patents on fundamental principles and everyday activities that had no connection to technological innovation:
[W]hile looking for "a useful, concrete and tangible result" may in many instances provide useful indications of whether a claim is drawn to a fundamental principle or a practical application of such a principle, that inquiry is insufficient to determine whether a claim is patent-eligible under § 101. And it was certainly never intended to supplant the Supreme Court's test. Therefore, we also conclude that the "useful, concrete and tangible result" inquiry is inadequate and reaffirm that the machine-or-transformation test outlined by the Supreme Court is the proper test to apply.
EFF submitted an amicus brief (in conjunction with The Samuelson Law, Technology & Public Policy Clinic at UC Berkeley Law, Public Knowledge, and Consumers Union) supporting the rejection of Bilski's patent application.
DMCA: Ten Years of Unintended Consequences
Commentary by Fred von Lohmann

Today is the tenth anniversary of the Digital Millennium Copyright Act (DMCA), signed into law by President Bill Clinton on October 28, 1998. EFF is marking the occasion with the release of a 19-page report that focuses on the most notorious part of the law: the ban on "circumventing" digital rights management (DRM) and other "technological protection measures." The report, entitled Unintended Consequences: Ten Years Under the DMCA, collects reported cases where the DMCA was used not against copyright infringers, but instead against consumers, scientists and legitimate competitors.
The collected stories are like a trip down memory lane for those who have followed digital freedom issues over the past decade. Here are a few examples of DMCA abuse in the report that you might remember:
- In 1999, Sony sues Connectix over the Virtual Game Station, which let you play your legit PlayStation games on your Macintosh.
- In 2001, the Secure Digital Music Initiative (SDMI) threatens Princeton Professor Ed Felten's research team over disclosure of vulnerabilities in audio watermarking technology.
- In 2001, Russian programmer Dmitry Sklyarov is arrested after speaking at Defcon, accused of building software for his employer, ElcomSoft, that converted Adobe e-books to PDF.
- In 2002, Blizzard sues a group of hobbyist open source developers over bnetd, server software that allows people to play Blizzard games against each other over the Internet.
- In 2003, Lexmark uses the DMCA to block distribution of chips that allow refilling of laser toner cartridges.
- In 2004, Hollywood succeeds in shutting down 321 Studios' DVD X Copy software, which allowed people to make backup copies of their own DVDs.
- In 2006, computer security researchers at Princeton delay disclosure of the Sony-BMG "rootkit" based on fears of DMCA liability.
- In 2008, Hollywood targets Real Networks over RealDVD, software that allows you to copy DVDs to a hard drive for later viewing.
The collection of stories makes vividly clear what EFF has been saying for the past ten years: the DMCA has harmed fair use, free speech, scientific research, and legitimate competition.
That's all the more galling because the law has failed in its stated goal of preventing digital piracy, instead being used to prop up weak DRM schemes whose only purpose is to hinder competition, innovation, and interoperability. That explains why the music industry has largely abandoned DRM, while the Hollywood studios cling to it more fervently than ever.
Not everything in the DMCA is bad. While the anti-circumvention provisions have proven to be a dangerous failure, the so-called "safe harbor" provisions for online service providers have succeeded in creating enough legal certainty to launch companies like Yahoo, Google, eBay, YouTube, and MySpace. Of course, copyright owners have been working hard in cases like Viacom v. YouTube and Io v. Veoh to erode these safe harbors. And, while the safe harbors have protected intermediaries like Google, they have not adequately protected the free speech interests of internet users, as the McCain-Palin campaign recently learned.
There have been recent rumors that the new Congress might reopen the DMCA, creating an opportunity for reform. Unfortunately, that may also create an opportunity for MPAA and RIAA mischief. For now, here's hoping that the DRM continues its slow death and the anti-circumvention provisions become less relevant to real businesses, while the courts continue to interpret the safe harbors to leave a door open to the Internet's disruptive innovators.
P.S. For more perspectives on the DMCA's origins and legacy during this tenth-anniversary week, see Freedom to Tinker and the Public Knowledge blog all this week.