Authors: Colin MacArthur, David Greis, Scott Martin, Peter Nguyen, Deb Linton
Hot on the heels of Sunday’s Hugo Awards debacle, YouTube’s automated copyright-takedown bot blocked Michelle Obama’s Democratic National Convention speech shortly after it aired on September 4, 2012.
According to a WIRED.com post, YouTube users attempting to view the speech were instead served this boilerplate copyright-infringement notice:
This video contains content from WMG, SME, Associated Press (AP), UMG, Dow Jones, New York Times Digital, The Harry Fox Agency, Inc. (HFA), Warner Chappell, UMPG Publishing and EMI Music Publishing, one or more of whom have blocked it in your country on copyright grounds. Sorry about that.
In response to the growing number of DMCA takedown requests, sites like YouTube and Ustream have created systems, or “bots,” that automatically identify and block copyrighted material. For the purposes of this blog post, we consider the following hypothetical question: What if Congress passed a law mandating the use of such a system by content platforms like YouTube? If the law were interpreted the same way as the laws in the cases we covered this week, would it be upheld if challenged?
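To make the mechanics concrete, here is a minimal sketch of how such a bot might screen uploads, assuming a chunk-level fingerprinting scheme. This is our illustration only: YouTube’s actual Content ID matcher is proprietary, and the hashing, chunking, and blocking threshold below are all placeholder choices.

```python
import hashlib

# Hypothetical registry of fingerprints for claimed works.
# Real matchers use robust audio/video fingerprinting; a plain
# cryptographic hash is a stand-in for illustration.
CLAIMED = {}  # fingerprint -> rights holder

def fingerprints(chunks):
    """Hash each fixed-size media chunk (bytes) to a fingerprint."""
    return {hashlib.sha256(chunk).hexdigest() for chunk in chunks}

def register_claim(chunks, rights_holder):
    """A rights holder registers a work's fingerprints with the platform."""
    for fp in fingerprints(chunks):
        CLAIMED[fp] = rights_holder

def screen_upload(chunks, block_threshold=0.5):
    """Block the upload if too many of its chunks match claimed works."""
    fps = fingerprints(chunks)
    holders = {CLAIMED[fp] for fp in fps if fp in CLAIMED}
    matched = sum(1 for fp in fps if fp in CLAIMED)
    ratio = matched / len(fps) if fps else 0.0
    return ("blocked", sorted(holders)) if ratio >= block_threshold else ("allowed", [])

# A convention speech that incidentally includes claimed news footage:
register_claim([b"ap-footage-frame-1", b"ap-footage-frame-2"], "Associated Press (AP)")
print(screen_upload([b"ap-footage-frame-1", b"ap-footage-frame-2", b"original-speech-frame"]))
# -> ('blocked', ['Associated Press (AP)'])  # 2 of 3 chunks match
```

Note how a coarse threshold blocks the whole upload once enough chunks match, which is one plausible way an original speech containing a few seconds of claimed footage could be censored wholesale, as in the incident above.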
The Supreme Court divides laws impacting First Amendment freedoms into two categories: content-neutral and content-based. A law is content-neutral if it wasn’t adopted because of “[agreement or] disagreement with the message it conveys.” Ward v. Rock Against Racism, 491 U.S. 781, 790 (1989). Even though a bot created by our hypothetical legislation would necessarily consider the content of videos in order to determine whether they violate copyright, it wouldn’t censor based on whether the government agreed or disagreed with that content. Thus, it is reasonable to conclude that the Court would consider the proposed legislation “content-neutral.” To be deemed constitutional, our hypothetical law would therefore need to satisfy the four criteria for content-neutral legislation outlined in Universal City Studios v. Reimerdes and Ward v. Rock Against Racism. The rest of this blog post is devoted to a discussion of how our hypothetical legislation would or would not satisfy these criteria.
Does the legislation further a legitimate government interest? And is this interest unrelated to the suppression of free speech?
Previous cases (e.g., Ward v. Rock Against Racism and Universal City Studios v. Reimerdes) have accepted that copyright protection is a legitimate government interest. A government-mandated copyright bot would further that interest. Moreover, it is likely uncontroversial to assert that this interest is unrelated to the suppression of free speech.
Would the incidental restriction of First Amendment freedoms resulting from this law be no greater than is essential to further the government’s legitimate interest?
In the context of our hypothetical legislation, “incidental restrictions” could potentially be measured as incidents of erroneous censorship like the episode described in the WIRED.com post above. To determine whether the restrictions would be “no greater than is essential,” it is therefore reasonable to ask whether alternative means exist that would produce fewer such episodes. Given the task of trawling the seemingly bottomless ocean of data on the Internet, such work is arguably only feasible if done by a computer; human review at that scale is not a realistic alternative. If no alternative could match a bot’s coverage, some rate of erroneous censorship is unavoidable, and the question becomes whether the bot’s error rate is reasonable. As long as incidents of erroneous censorship stay within a reasonable statistical margin of error, the restriction would arguably be no greater than is essential, satisfying the Court’s test for content-neutral restrictions.
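As a rough illustration of this “margin of error” framing (our construct; the Court has not articulated any numerical test), one could imagine auditing a random sample of blocked videos and comparing the bot’s false-positive rate against a tolerated threshold. The sample counts and threshold below are invented for illustration.

```python
def erroneous_block_rate(audit_results):
    """audit_results: list of booleans, True where a human reviewer
    found the blocked video non-infringing (e.g., fair use, public domain)."""
    return sum(audit_results) / len(audit_results) if audit_results else 0.0

# Hypothetical audit: 3 wrongful blocks found among 200 reviewed takedowns.
sample = [True] * 3 + [False] * 197
rate = erroneous_block_rate(sample)
TOLERATED_RATE = 0.05  # placeholder policy threshold, not drawn from case law
print(f"Erroneous-block rate: {rate:.1%} (within tolerance: {rate <= TOLERATED_RATE})")
# -> Erroneous-block rate: 1.5% (within tolerance: True)
```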
Would the government’s purpose be achieved less effectively without our hypothetical law?
Any system that does not automatically identify copyright-violating content would reach only a fraction of violators. In other words, because anything but a bot would catch only a small share of copyright-infringing material, any alternative method would be less effective. Thus, the hypothetical law would arguably meet this criterion as well.