Unintended Consequences of OECD Privacy Protection Guidelines

The OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data were adopted in 1980 to prevent economic barriers to international business involving personal information. The proliferation of disparate data privacy policies among nations threatened to hinder the efficiency with which nations interact as each tried to respect its own domestic laws on data privacy. More than 30 years later we are still struggling to implement these guidelines, and they have proven to have various unintended consequences.

First, the guidelines’ intention to improve economic exchange can be seen in the recent passage of the Personal Data Protection Bill in Singapore. The law “is expected to strengthen Singapore’s reputation as a business hub, especially for transactions with European countries, which require more stringent data protection measures.” However, the bill also illustrates how opening access to one market can close access to others: if Singapore’s standards are too restrictive, countries with less restrictive standards could be prevented from doing business with Singapore.

This is indeed occurring with India in its relationship with the EU. India is pushing for “data secure” status because growth in its outsourcing sector is being hampered. Some countries, like India, do not have the same resources as OECD members to implement the required guidelines, or to do so in a reasonable amount of time. So the very framework created to prevent hindrances to economic trade is creating its own barriers, as countries perform a balancing act among nations at different stages of, and with different capacities for, compliance. India is further complicating matters by using its leverage in ongoing Free Trade talks to gain data secure status, blocking economic exchanges unrelated to data privacy.

The Singapore law also reflects the problem of weak self-regulation, as it only proposes forming an enforcement committee. Part four of the guidelines merely encourages self-regulation. Adding an explicit requirement for a regulatory framework would increase trust among nations and provide a stronger incentive for domestic companies to protect data. Even a minor act of negligence can have serious consequences in this age of digital information. This was recently shown in the UK, when the Scottish Borders Council was caught violating the Security Safeguards Principle by failing to monitor a third-party company to which it was outsourcing its data [Link].

More dire consequences extend beyond the implementation of the guidelines to the guidelines themselves. For instance, the British Bankers’ Association has found itself at odds with the Security Safeguards Principle when trying to comply with the Individual Participation Principle. In doing so it has set a precedent for denying rights afforded by the guidelines: in this case, refusing to respond to data subject requests by email, on the justifiable ground that email is not a secure communication medium.

A more obvious example of how the guidelines are set up for abuse is the carte blanche offered to the law when specifying exceptions to the Use Limitation Principle. We have been seeing the effects of this since the passage of the FISA Amendments Act, which is up for renewal this year. The Act, among other things, “allows the government to electronically eavesdrop on Americans’ phone calls and e-mails without a probable-cause warrant.” This exception also allows for violations of the Openness and Collection Limitation Principles. Although it is understandable for the law to override the guidelines in certain contexts, without any requirement that this power be narrowly tailored, the goals of the guidelines are too easily compromised.

Perhaps it is time to revisit the guidelines with remedying these consequences in mind.

Feral Copyright Bots Strike Again

Authors: Colin MacArthur, David Greis, Scott Martin, Peter Nguyen, Deb Linton

Hot on the heels of Sunday’s Hugo Awards debacle, YouTube’s automated copyright takedown bot blocked Michelle Obama’s Democratic National Convention speech shortly after it aired on September 4, 2012.

According to a WIRED.com post, YouTube users attempting to view the speech were instead served this boilerplate copyright infringement notice:

This video contains content from WMG, SME, Associated Press (AP), UMG, Dow Jones, New York Times Digital, The Harry Fox Agency, Inc. (HFA), Warner Chappell, UMPG Publishing and EMI Music Publishing, one or more of whom have blocked it in your country on copyright grounds. Sorry about that.

In response to the growing number of DMCA takedown requests, sites like YouTube and uStream have created systems, or “bots,” that automatically identify and block copyrighted material. For the purposes of this blog post, we consider the following hypothetical question: What if Congress passed a law mandating the use of such a system by content platforms like YouTube? If the law were interpreted in the same way as the cases we covered this week, would it be upheld if challenged?
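For readers unfamiliar with how such bots operate, the sketch below (in Python, with hypothetical function names, made-up data, and an arbitrary matching threshold; it is not YouTube’s actual Content ID system) illustrates the basic idea: an upload’s fingerprint is compared against fingerprints registered by rights holders, and any sufficiently large overlap triggers a block. Because the match is purely statistical, with no notion of license or fair use, a fully authorized broadcast can be swept up just as easily as an infringing one.

```python
# Illustrative sketch only: a naive automated takedown "bot".
# The fingerprinting scheme, function names, and threshold are hypothetical;
# real systems (e.g. YouTube's Content ID) are far more sophisticated.

def fingerprint(features):
    """Reduce a clip's extracted features to a set of hashes
    (a stand-in for real audio/video fingerprinting)."""
    return frozenset(hash(f) for f in features)

def blocking_claims(upload_features, registered_claims, threshold=0.3):
    """Return the rights holders whose registered content overlaps the
    upload beyond `threshold`. Note there is no notion of license,
    authorization, or fair use -- which is how a fully authorized
    broadcast can be blocked automatically."""
    upload_fp = fingerprint(upload_features)
    blockers = []
    for rights_holder, claim_features in registered_claims:
        claim_fp = fingerprint(claim_features)
        overlap = len(upload_fp & claim_fp) / max(len(claim_fp), 1)
        if overlap >= threshold:
            blockers.append(rights_holder)
    return blockers

# Example: a convention speech that briefly includes licensed music still
# "matches" the label's registered fingerprint and gets blocked.
speech = ["crowd", "speech-audio", "licensed-song-clip"]
claims = [("UMG", ["licensed-song-clip", "song-verse", "song-chorus"])]
print(blocking_claims(speech, claims))  # -> ['UMG']
```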

The Supreme Court divides laws impacting First Amendment freedoms into two categories: content-neutral and content-based. A law is content-neutral if it was not adopted because of “[agreement or] disagreement with the message it conveys.” Ward v. Rock Against Racism, 491 U.S. 781, 790 (1989). Even though a bot created under our hypothetical legislation would necessarily consider the content of videos in order to determine whether they violated copyright, it would not censor based on whether the government agreed or disagreed with that content. Thus, it is reasonable to conclude the proposed legislation would be considered “content-neutral” in the eyes of the Court. To be deemed constitutional, our hypothetical law would therefore need to satisfy the four criteria for content-neutral legislation outlined in Universal City Studios v. Reimerdes and Ward v. Rock. The rest of this blog post is devoted to a discussion of how our hypothetical legislation would or would not satisfy these criteria.

Does the legislation further a legitimate government interest? And is this interest unrelated to the suppression of free speech?

Previous cases (e.g., Ward v. Rock and Universal City Studios v. Reimerdes) have accepted that copyright protection is a legitimate government interest. The hypothetical government-mandated copyright bot would further the government’s interest in copyright protection. Moreover, it is likely uncontroversial to assert that this legitimate government interest is unrelated to the suppression of free speech.

Would incidental restriction of First Amendment freedoms resulting from this law be no greater than is essential to further the government’s legitimate interest?

In the context of our hypothetical legislation, ‘incidental restrictions’ could be measured as incidents of erroneous censorship like the episode described in the WIRED.com post above. To determine whether the restrictions would be “no greater than is essential,” it is therefore reasonable to ask whether alternative means exist that would produce fewer such episodes. One might think, given the task of trawling the seemingly bottomless ocean of data on the Internet, that such work is only feasible if done by a computer. As long as incidents of erroneous censorship by bots remain within a reasonable statistical margin of error, the restrictions would arguably satisfy the Court’s rules for content-neutral legislation.

Would the government’s purpose be achieved less effectively without our hypothetical law?

Any system that does not automatically identify copyright-violating content would reach far fewer violators. In other words, because anything but a bot would censor only a fraction of all copyright-infringing material, an alternative method would be less effective. Thus, our hypothetical law would arguably meet this particular criterion.

First Amendment Rights in Interactive Media: Targeted Advertising and Privacy Concerns

Hulu.com, an online video provider, is currently facing a lawsuit over personalized advertisements based on user data. This is relevant to First Amendment rights in interactive media and government controls on information, as vague laws enacted for one purpose may have unintended effects later. One could even argue that parts of intellectual property law are unconstitutional insofar as they abridge freedom of speech, especially in today’s new media, where enforcement is performed by inflexible automated agents (a recent example is the live broadcast of the Hugo Awards being shut down mid-ceremony by copyright infringement software because it showed clips of the nominated TV episodes).

It is important to understand that the laws put in place will affect the future of technology and the human activities associated with it. The Hulu lawsuit raises questions about how much users are, and should be, aware of the inner workings of corporations, and how much (or how little) the government should intervene on behalf of its people. Furthermore, laws governing technology raise problems for innovation; it is important to ensure that the Internet remains as decentralized as possible, allowing content providers freedom of distribution and allowing users to make educated choices about what information enters their homes.

If we strive towards an open and decentralized Internet, we need to consider what constitutes “open access”, and who is ultimately responsible for upholding it. Furthermore, if the Internet is considered fundamental to the protection and proliferation of “speakers”, does that mean the government has a responsibility to manage the value chain from content creation to content distribution to content consumption? We should also consider the opinions of users, and how their choices can be better reflected. Some Hulu users may enjoy the benefits of targeted advertising, while others may prefer to keep their data anonymous or inaccessible.

ACLU v. Mukasey concerned the possibility that a vague, overbroad law targeted at a small population (the adult entertainment industry) could apply to a much broader swath of organizations, and the court sought the solution that was most effective with the least restriction. The lawsuit filed against Hulu presents the opposite situation: a 1988 law intended to protect a specific case (the privacy of video rental records) is now impacting a modern online content distribution system. Intervention from a prior medium affects Hulu’s ability to evolve its business model and the services it provides to its clients (both viewers and advertisers).

Specificity in law can help to avoid confusion in such cases. In ACLU v. Mukasey, we saw that one of COPA’s core problems was that it was not “narrowly tailored.” We can certainly find similarly vague language in the Video Privacy Protection Act that allows it to apply to Hulu now:

“(4) the term ‘video tape service provider’ means any person, engaged in the business, in or affecting interstate or foreign commerce, of rental, sale, or delivery of prerecorded video cassette tapes or similar audio visual materials, or any person or other entity to whom a disclosure is made under subparagraph (D) or (E) of subsection (b)(2), but only with respect to the information contained in the disclosure.”

It also raises many questions about the role of the law itself. Hulu’s users technically agree to the “Terms of Service”, including the collection of behavioral data, but when was the last time any of us read all of the text before clicking “Agree” to use an online service? We can’t recall. Even if users should be aware, how far should the government go to protect them? If the court found that the wide availability of filters was sufficient in the case of the Child Online Protection Act (COPA), would it suffice for Hulu to provide clear ways to opt out?

While opting out may be a less restrictive solution, the Hulu case still raises the question of whether a law that is decades old (almost ancient history by today’s technological standards!) applies to new technologies generations later. If the answer is yes, it could set off a snowball of similar legal issues for other Internet companies, much as the courts perceived the potential impact of COPA. Timeframes also matter given the pace of technology development and the current multi-year appeal process, since the speed of change can overturn a law’s assumptions. We need look no further than COPA’s 11-year lifespan: it was passed in 1998, went through years of appeals without ever taking effect, and was completely struck down in 2009.

Needless to say, too much regulation could easily have disastrous effects on innovation and product development. If companies and other legal entities are penalized for violating laws written in a previous era, user experience could become severely limited, and the ability to develop new products or start new companies becomes more restricted. Ultimately this affects the quality of interactive media as a whole; the question is when, and whether, consumers are willing to pay the price for mandated protection.