Consequences of Primary Use Design in the Case of MegaUpload


Created in 2005 to enable online file storing and sharing, MegaUpload was one of the most popular “digital storage lockers” of its time.  The site allowed users to upload any content, including videos and music, and then generated links for users to easily view and share their uploaded content.  At its peak, MegaUpload was the 13th most visited website on the Internet, with over 50 million page views per day and 180 million registered users (Bason).  In 2012, though, the US Department of Justice shut down the website, alleging it illegally harbored and promoted the dissemination of copyrighted content.  The shutdown initiated what would become a controversial debate about online file sharing and offers us an interesting case study to examine how design impacts the perceived primary use and liability of a technology.


In the original January 2012 indictment, the DOJ brought several allegations against MegaUpload, including racketeering, conspiracy to commit copyright infringement, conspiracy to commit money laundering, and two counts of criminal copyright infringement.

Most relevant to this week’s course readings, the DOJ’s copyright infringement charges accused MegaUpload of using electronic means to “willfully, and for purposes of private financial gain, infringe copyrights” (MegaUpload Indictment).  The wording of this allegation indicates that the DOJ was accusing MegaUpload of offenses similar to those in 17 US Code § 1201 that disallow the promotion of technologies that circumvent copyright protection services.

Furthermore, the inclusion of the two phrases “willfully” and “financial gain” indicates that the charges brought against MegaUpload would be classified as secondary copyright infringement liability, since MegaUpload did not directly violate copyright but knowingly promoted infringement for financial gain.  Awareness and motivation were essential considerations for the DOJ to address in order to demonstrate that MegaUpload ought to be regulated because they speak to the founders’ original intention for the site.

As this week’s readings have demonstrated, the “primary use” of a technology determines its ability to be restricted.  Under Sony vs Universal, a technology need only be capable of “substantial noninfringing uses” to avoid regulation.  If a technology, however, is “primarily useful” for illicit purposes – as indicated in 18 US Code § 2512 – and lacks these “substantial noninfringing uses,” it may be regulated under law.

Although the meaning of “primarily useful” may be debatable, the case of US vs The Spy Factory from this week’s readings persuasively supports the position that the phrase is clear enough to serve as the evaluation standard for MegaUpload and other technologies.  The Spy Factory case also clarifies that “in evaluating an object’s ‘primary use,’ the Supreme Court has emphasized the objective determination of a device’s most probable use.”  In other words, we ought to define MegaUpload’s primary use by how most users would use the service rather than how one user might use it.

Thus, in order to evaluate whether MegaUpload should be held accountable for encouraging copyright infringement, we would need to determine whether the site’s “primary use” was indeed to encourage illegal copyright infringement or whether the “substantial noninfringing uses” of the site outweighed that illegal use.  To determine this “primary use,” we turn to MegaUpload’s design.


17 US Code § 1201 clearly establishes that in assessing the “primary use” of a technology we should base our decision on what the device is “primarily designed or produced for the purpose of.”  Design then can, indeed, be used as a principal measure of a technology’s “primary use.”

FTC v Frostwire demonstrated this design basis for “primary use” assessment by examining in particular the user interface design of a technology.  Primary use, however, can also be gleaned from additional design decisions, including information architecture and marketing choices.  We turn to these design factors in analyzing MegaUpload’s “primary use.”

One of the bases of allegations against MegaUpload arises from its information architecture decisions on how the site stores its content:

In practice, the “vast majority” of users do not have any significant long term private storage capability. Continued storage is dependent upon regular downloads of the file occurring. Files that are infrequently accessed are rapidly removed in most cases, whereas popular downloaded files are retained. (items 7–8, MegaUpload Indictment)

As indicated by this allegation, MegaUpload designed its site to store files temporarily and placed higher importance on files with more downloads.  While this architectural choice could be seen as inconsequential, it clearly arises from MegaUpload’s desire to monetize its site.  Because this financially motivated implementation is in line with storing copyrighted files that command high viewership and that frequently get removed due to takedown notices, it could be argued that storing illegal copyrighted material was one of MegaUpload’s primary intended uses.

Another clue to MegaUpload’s primary use can be found in its marketing design.  As established in US vs The Spy Factory, how a company chooses to promote itself (e.g., choosing to call itself “The Spy Factory”) directly impacts its perceived primary use and liability. MegaUpload promoted itself as a tool for circumventing copyright:

They have “instructed individual users how to locate links to infringing content on the Mega Sites … [and] … have also shared with each other comments from Mega Site users demonstrating that they have used or are attempting to use the Mega Sites to get infringing copies of copyrighted content.” (item 13, MegaUpload Indictment)

By promoting or refusing to restrict this illegal use of its site, MegaUpload effectively embraced it and classified itself as a website with the intended use of sharing illegal copyrighted material.

The indictment catalogs a wide array of additional design decisions that further support the allegation about MegaUpload’s “primary use.”


The allegations brought against MegaUpload have already had a significant impact on the liability concerns of other file-sharing sites.  In light of the DOJ’s accusations against MegaUpload, several file-sharing sites have taken steps to limit their liability by limiting link sharing and banning third-party downloads.  Some sites, like BTJunkie and QuickSilverScreen, have even gone so far as to block US IP addresses or to shut down entirely.  It would seem, then, that the DOJ’s copyright infringement regulation may be encouraging the creation of an “environment of tremendous fear and uncertainty for technological innovation,” as alluded to in this week’s “Open Letter from Internet Engineers.”

Other sites, however, like MediaFire, seem undisturbed.  As MediaFire CEO Derek Labian explains, he considered MegaUpload an extreme case of a business “built on copyright infringement” (Ludwig).  In other words, the designed purpose of MegaUpload made it, in particular, appropriately subject to regulation, rather than file-sharing companies at large.

The future of file-sharing sites is yet to be determined.  What is clear from the MegaUpload case study, though, is that first and foremost, as website engineers and designers, I School students like us must be keenly aware of the design decisions we make.  Even small choices may expose us to liability for privacy or copyright violations.

On a more ethical level, we must also take an active role in determining design standards that hold our professional community ethically accountable, as the “Open Letter” engineers did.  Otherwise we will simply allow reactive governmental forces to heavy-handedly regulate our creative expression.


Bason, Tamlin. “DOJ Adds Wire Fraud, More Criminal Infringement Counts Against Megaupload.” Patent, Trademark & Copyright Law Daily, 22 Feb 2012. Web. 18 Mar 2014.

Ludwig, Sean. “MediaFire CEO: Unlike Megaupload, our business model isn’t built on piracy.” VentureBeat, 22 Jan 2012. Web. 18 Mar 2014.

MegaUpload Indictment. 16 February 2012.

by Becca Stanger, Tim Meyers

BitTorrent and Copyright Infringement Liability Issues

What is BitTorrent?

BitTorrent refers to both a protocol and the company that developed and maintains the protocol. Initially developed in the early 2000s, the BitTorrent protocol is the most popular method for peer-to-peer file sharing. While estimates vary, BitTorrent Inc. claims that “Our protocols move as much as 40% of the world’s Internet traffic on a daily basis,” and the protocol has over 150 million monthly users.

The BitTorrent protocol has been widely adopted because of its efficient method for sharing large files. Files are not shared in a one-to-one server-client relationship, but rather split into multiple pieces. Metadata for a file is stored in a separate torrent file, and other users with that torrent descriptor can use their BitTorrent client software to connect to any peers on the network (called a “swarm” in BitTorrent terminology) who share the descriptor and therefore have all or part of the file. As such, each user can both “leech” (download) and “seed” (upload) different pieces of the same file from a distributed set of peers. Security is provided by a cryptographic hash in the descriptor to allow for detection of modification of any individual piece of the original file.
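The piece-hashing scheme described above can be illustrated with a minimal Python sketch.  It splits a file into fixed-size pieces, hashes each with SHA-1 as a BitTorrent v1 descriptor does, and checks a received piece against the stored digest; the function names and the tiny piece size are our own, chosen for the example (real torrents use piece sizes like 256 KiB):

```python
import hashlib

PIECE_LENGTH = 4  # tiny pieces for illustration only

def piece_hashes(data: bytes, piece_length: int = PIECE_LENGTH) -> list[bytes]:
    """Split the file into pieces and hash each one, as the uploader would
    when building the torrent descriptor."""
    return [hashlib.sha1(data[i:i + piece_length]).digest()
            for i in range(0, len(data), piece_length)]

def verify_piece(index: int, piece: bytes, hashes: list[bytes]) -> bool:
    """Check a piece received from a peer against the descriptor's hash."""
    return hashlib.sha1(piece).digest() == hashes[index]

file_data = b"hello, swarm!"
hashes = piece_hashes(file_data)             # shipped in the torrent descriptor
assert verify_piece(0, b"hell", hashes)      # untampered piece passes
assert not verify_piece(0, b"HELL", hashes)  # modified piece is detected
```

Because each piece is verified independently, a client can safely assemble a file from many untrusted peers: any peer that sends a corrupted or tampered piece is caught immediately, and only that piece needs to be re-downloaded.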

BitTorrent Inc. not only maintains the protocol but also provides desktop software that serves as the protocol client, although there are a variety of other open- and closed-source clients available. Additionally, several companies have entered the market to act as indices and search engines for torrent descriptor files; the most popular and famous among these is The Pirate Bay. Thus, a user may search for a copyrighted movie through The Pirate Bay and use the BitTorrent protocol through an open-source client to download hundreds of pieces of a file from hundreds of users around the world.

BitTorrent’s liability for secondary copyright infringement

A recent ruling in March 2013 by the Ninth Circuit upheld the liability of a BitTorrent indexing website for secondary copyright infringement. The case was brought by Columbia Pictures Studios against Gary Fung, whose company owned several websites that index and direct users to BitTorrent files providing access to copyrighted content. The district court held that Fung and his company were secondarily liable for copyright infringement, and that Fung could not seek protection under the ‘safe harbors’ of the DMCA due to his intentional infringement of copyrighted material.

Fung was liable for secondary copyright infringement because he induced users to violate copyright by encouraging them to upload torrents for copyrighted content, personally providing assistance to upload, locate, and assemble torrents for download, and posting featured lists of torrents for copyrighted films on his website. This triggered the inducement liability articulated by the Supreme Court in MGM vs Grokster:

“[O]ne who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, is liable for the resulting acts of infringement by third parties.” 545 U.S. at 936-37

Furthermore, Fung failed to use available filtering mechanisms to prevent infringement on his website and generated revenues almost exclusively through advertisements and the increased traffic on his website due to the availability of copyrighted content. The court ruled that Fung could not avail himself of the safe harbors for storage or for information location tools because of both his active encouragement of infringement activity on his website and the financial benefit he received from the infringement activity, despite having the ability to implement filters to prevent the infringement. These two reasons disqualified Fung from DMCA protections.

The Ninth Circuit held, “if one provides a service that could be used to infringe copyrights, with the manifested intent that the service actually be used in that manner, that person is liable for the infringement that occurs through the use of the service”. As such, the Grokster case remains a viable theory for holding defendants accountable.

Other copyright issues around BitTorrent and file sharing

While some copyright holders have pursued legal action against file-sharing software creators for secondary infringement, the Recording Industry Association of America (RIAA) also sued over 35,000 individuals for primary copyright infringement via file-sharing between 2003 and 2008. This was an expensive endeavor that ultimately did not generate big legal wins for the music industry due to their reliance on a tenuous “making available” theory of infringement, whereby the RIAA claimed individuals violated copyright simply by making copyrighted material available to others using file-sharing software. Several federal courts ultimately rejected this claim, most emphatically in the Arizona case Atlantic v. Howell, where the judge ruled that infringement of copyright-holders’ distribution rights requires actual distribution of copies of the work.

In late 2008 the RIAA changed its strategy and essentially stopped prosecuting users of file-sharing networks for primary copyright infringement. Instead, it partnered with Internet service providers (ISPs) to propose a “three strikes” program in which ISPs would warn customers accused of infringement up to three times before ultimately cutting off their Internet service. This system took years to develop and was ultimately implemented in early 2013 as a “six strikes” system enforced by five of the largest ISPs in conjunction with the RIAA, the Motion Picture Association of America (MPAA), and other copyright holders, under a new umbrella organization called the Center for Copyright Information (CCI). Alleged infringers may challenge a CCI Copyright Alert in arbitration by claiming one of a few preselected appeal grounds, including fair use.

No court has yet ruled on the legality of this new regime, but it raises some important legal and policy questions in the realm of copyright and beyond. For instance, the “six strikes” system’s presumption of guilt may in some cases contradict the ruling in Online Policy Group v. Diebold that a party may not claim infringement when it “should have known if it acted with reasonable care or diligence” that the claim is false. One of the CCI’s preselected appeal grounds—namely the “pre-1923 defense” that the work in question was published prior to 1923—certainly seems like reasonable information to obtain prior to making an infringement claim.

Perhaps more troubling are the privacy implications of ISPs snooping on their customers’ Internet traffic in order to uphold their obligations to the CCI. At the time the new system was implemented, the Electronic Frontier Foundation (EFF) noted significant concerns with the manner in which this monitoring may be carried out. Presumably ISPs have updated their terms of service and other privacy notices, if necessary, to avoid being charged by the Federal Trade Commission (FTC) with using deceptive practices. But the lack of customer input and potential abuse of this packet sniffing may eventually lead to further legal battles, so this will almost certainly not be the last we hear about policies to curb copyright infringement through file-sharing software.

by Pete Swigert, Sufia Siddiqui, and Jason Ost

What does it mean to be an Online Review Site?


What is Section 230?

Section 230 is “47 U.S. Code § 230 – Protection for private blocking and screening of offensive material” and is part of the Communications Decency Act. This section has protected many internet intermediaries, encouraging innovation and development, and is considered one of the most important laws protecting internet speech. Section 230 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[1] This means that internet intermediaries are:

  1. Protected from liability for what their users post
  2. Protected from liability for their actions in blocking content to which they object
  3. Protected from liability in creating technical tools to assist in the restriction of access to content.

Without Section 230, online platforms would have to screen and monitor all the posts and comments that were submitted.

How does Section 230 apply to Review Platforms?
Section 230 was established as part of the CDA in 1996, and it protects internet service providers and various “interactive computer service providers” that publish third-party content. Review sites fall under Section 230, and this is crucial for such platforms because, while most reviews are positive, negative reviews can harm businesses. As a result, businesses often direct legal action at the platform instead of the speaker, which is why Section 230 is important in protecting these platforms.

Suppose you are a review platform and you deliberately removed content, do you then become liable for the content?
Under Section 230, review platforms are protected when they edit and remove content: “No provider or user of an interactive computer service shall be held liable on account of…any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” For that reason, review platforms can remove entire posts, if needed. However, there are still some limitations on the extent to which online review sites can edit and change the content of the original message. Internet intermediaries are not to alter the meaning of the content.

What are some of the social implications of allowing Section 230 immunity on review sites?
Immediacy of Damage
Section 230 is widely recognized for allowing the proliferation of internet speech by removing internet providers’ liability for third-party content that they publish. However, this has many social implications and effects. Online review platforms have a very large audience reach, so false or defamatory reviews can cause a great deal of damage to businesses, and that damage can be felt quickly. Even when a post is false, the time needed to have it taken down may allow significant damage to individuals or business owners.

Democratizing Readers’ Ability to Be a Critic
Before the age of the internet, critiquing remained mostly a job for journalists, editors and the like. Back then, individual complaints about businesses were mostly shared through word of mouth, so it was unlikely that people’s complaints would reach a massive audience. That has changed with the rise of online platforms, as everyone is now able to voice their opinion, good or bad, true or false. Nowadays, it is very easy for customers and users to post a negative review about their experience as a way to express themselves.

Implications on Marketing and Branding
Section 230 has allowed review sites to flourish, giving consumers the ability to evaluate and research companies and products. Consumers are now able to quickly search online and gather information about a company or product. At the same time, this has made marketing far more difficult, as companies try to ‘correct’ their image from what is portrayed on online review sites, forums and more.

Technical Challenges in Finding Original Author
Trying to find the author of certain content can be difficult, particularly on platforms that support anonymity. In such cases, take-down requests can be hard to pursue because it is technically challenging to trace the author of a defamatory or false post on a review site.

What if the posted content infringes on copyright laws?
Challenges in Finding the Owner of the Content
Content that infringes copyright, however, presents a trickier situation. In that case, under 17 U.S. Code § 512, commonly referred to as the DMCA, service providers receive protection only if they comply with certain criteria: “service provider shall not be liable for monetary relief, or, except as provided in subsection (j), for injunctive or other equitable relief, for infringement of copyright by reason of the provider’s transmitting, routing, or providing connections for, material through a system or network controlled or operated by or for the service provider.” The DMCA requires the provider to take down the content promptly after a notice is filed.

What are some of the cases that demonstrate Section 230 on Review Sites?
We examine two cases against Yelp in which Section 230 protected it against businesses seeking legal action over material posted on its site, and which show the ‘discretion’ review platforms have over the information they receive.

1. Levitt v. Yelp

Why is this case selected?
This case is a good representative of the majority of concerns related to online review platforms. The damage to businesses from receiving negative reviews on such platforms has become so powerful that businesses are prone to raise questions of unfairness and skepticism towards online review sites.

Case Details/Facts:
In March 2010, Boris Levitt, owner of Renaissance Furniture Restoration in San Francisco brought a case against Yelp, the online business review website. Levitt claims that the review site manipulates and manufactures reviews as a way to coerce businesses to advertise on their site. Levitt argues: “Yelp may manipulate which user or customer reviews are filtered (essentially suppressed from general public view and not considered as part of the star rating), which affects and controls the business’s overall star rating.” Levitt claims that Yelp was involved in creating negative comments, which the court later dismissed for lacking “factual evidence.”

Moreover, Yelp’s review terms explicitly state that Yelp has the ability to remove content. However, the design of Yelp focuses on a business’s overall rating, so any removal of posts has a direct effect on that rating: “Yelp may also represent to a business that it has the ability to remove reviews, which would affect and control the business’s overall star rating.”

How Was Section 230 Applied?
In his ruling, Judge Chen used Section 230 to hold that Yelp cannot be held liable for what is posted on its site because it was not the ‘internet content provider.’ In his opinion he asserted that “§ 230(c)(1) contains no explicit exception for impermissible editorial motive,” and contrasted it with § 230(c)(2), which does have a “good faith” requirement, reasoning that the absence of a parallel “good faith” requirement in § 230(c)(1) means editorial intent is irrelevant under that provision.

He further went on to assert that “traditional editorial functions often include subjective judgments informed by political and financial considerations….Determining what motives are permissible and what are not could prove problematic.” He concluded that forcing Yelp to defend its review policy would ultimately make it defend the reviews it showed on a case-by-case basis.

Our opinion on whether the outcome of this case decision is fair:
The decision was fair, as it provided immunity for platforms like Yelp against being charged with the content they display as if it were their own. Writing for the New York Times, internet law blogger Eric Goldman noted that “Having that airtight immunity means websites can make tough editorial decisions without worrying what kind of story those decisions will ultimately tell in court,” and believes that this ultimately gave Yelp the ability to manage its customer review website however it wants. [2]

As such, this ruling gave review websites discretionary control over how to handle customer reviews; they can thus resist complaints from businesses seeking to have certain reviews taken down or dictating what should be done with them. This ruling stands in contrast with Fair Housing Council of San Fernando Valley v. Roommates.com, LLC (Majority Opinion Only), where the court ruled that Roommates.com encouraged and helped structure what users posted and hence should be liable. The judgement shows that as long as a website is not actively involved in helping structure what users post, it cannot be held liable. An aggrieved person or business is thus expected to sue the creator of the post or review instead.

2. Demetriades v. Yelp

Why is this case selected?
This case outlined what online review sites are able to do with the reviews they get and the protection that they get as a ‘Good Samaritan.’

Case Details/Facts:
In 2012, Demetriades, an operator of three restaurants in Mammoth Lakes, California, filed a case against the online business review website Yelp, citing untrue or misleading advertising and unfair business practices because of its system for filtering comments and reviews. The Yelp website is used by many people to review businesses they have used; a user can either comment on a business or rate it on a scale of 0 to 5, with 5 being the best rating. Yelp, for its part, filters the reviews and comments given by users, claiming it “tries to rank them according to their usefulness.” The actual review criteria used for ranking are not publicly disclosed, so some businesses feel it shows only negative comments.

How was Section 230 Applied?
The court in its ruling asserted that “statements regarding the filtering of reviews on a social media site such as Yelp are matters of public interest” and hence were protected by Section 230. Essentially the court ruled that a website cannot be held accountable for what it publishes based on information provided by third parties.

The following are some final thoughts on our guide for online review sites.

What are the social obligations of review sites?
Review sites are oftentimes designed in a way that makes them look like a free space for individuals to voice their concerns. Authors become unaware that they are still liable for what they share on the internet. For example, in the Dietz vs Perez case, Perez, the author of a Yelp review, was sued for monetary damages. Perez said that “when she posted her reviews it never occurred to her that she might end up in court or on the hook for thousands of dollars in legal fees — not to mention the monetary damages.” Review sites should strive to make it obvious that, even though users are not directly messaging businesses, they are still liable and responsible for what they write.

What should review sites strive to show?
The judgement in Demetriades vs Yelp is in line with the Congressional findings outlined in Section 230 that ‘interactive computer services’ provide users with forums for various discourse; hence, what is shown on review sites is of public concern. Online review sites should thus strive to show material that is pertinent to the audience, so as to be deemed ‘reliable’ and thus remain useful. Furthermore, as data volume and velocity increase, the decision allows review systems to deal only with managing the data instead of worrying too much about the content. With this burden removed, sites can focus more on innovation aimed at making the forums better in the users’ view.

 -Chalenge Masekera & Jenny Lo

The Patent Battleground

The U.S. Constitution states that “Congress shall have power…To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” This statement is the basis of our patent system. In recent years, however, the patent system of the United States has been used by parties colloquially referred to as “patent trolls” to launch hostile litigation against businesses large and small based upon dubious motivations. Many now believe that the patent system has been co-opted by self-interested actors and no longer serves to promote the progress of science and useful arts; a system designed to promote such progress is increasingly being used to prevent it. In some cases the patent trolls target innocent businesses with limited resources, hoping to intimidate their victims into handing over fees rather than spend precious resources fighting the troll in court. In other cases, sophisticated companies employ highly skilled lawyers and engineers to engage in corporate warfare against their competitors. Two major events in the past few months highlight the issues associated with the patent system and the ongoing push for patent reform: the passing of the Innovation Act in the US House of Representatives, and the massive litigation launched against Google and other Android developers by Rockstar Bidco, a consortium owned by Apple, Microsoft, RIM, Ericsson, and Sony that holds the coveted Nortel patents.

Patent Reform in Washington DC

Dysfunction in the US patent system has gotten so bad that politicians are feeling compelled to act. In December of 2013, the House of Representatives passed the Innovation Act with large majorities of both parties backing it. The Innovation Act, sponsored by Rep. Bob Goodlatte (R-Va.), includes a number of important provisions designed to improve the current system.

Protecting End-Users

One type of patent troll is the bottom-feeder entity that sues hundreds or thousands of small businesses for allegedly infringing on a patent: for example, suing coffee shop owners who offer Wi-Fi to customers. This behavior hurts small business owners and the health of the economy. The Innovation Act will allow technology companies to fight such lawsuits on behalf of the end users of their products, so patent trolls will have to face much more formidable entities with the resources to defend against frivolous litigation.

Moving the Discovery Phase of Litigation Process

One of the most costly elements of the current patent litigation system is the discovery process, which can require defendants to expend exorbitant costs surfacing millions of documents in order to build a case. The Innovation Act delays this phase of the litigation process until after the court addresses the legal questions surrounding the patent claim, so that defendants may not have to take on the expense of going through the discovery process to fight off a frivolous lawsuit.


Fee Shifting

The Innovation Act will require plaintiffs that lose their patent litigation to pay the fees of the defendant. Victims of unjust patent litigation will be able to more easily recover from the costs inflicted upon them by patent trolls. Furthermore, as a way of addressing the fact that patent trolls are often acting on behalf of other entities, if the patent troll is unable to pay the fees, a judge may require that other parties with a financial stake in the plaintiff’s lawsuit pay the fee.

Heightened Transparency

In some cases, the patent troll that files the patent litigation against others is being controlled by other entities. The Innovation Act will require the patent plaintiff to disclose the names of all other entities with financial interests in the patent that is being litigated. Perhaps patent owners who are using their patents not to build useful products but to attack others may be less inclined to do so if their actions have a greater chance of being linked to them and potentially damaging their reputations.

Increased Specificity of How Patent was Infringed

The current system allows patent owners to file litigation against other parties based on vague justifications. The Innovation Act will require plaintiffs to describe in greater detail how their patent was infringed upon.

The Innovation Act has been supported by the tech industry, which may indicate that the bill targets bottom-feeder patent trolls far more forcefully than sophisticated ones. Opponents of the Innovation Act include the biotechnology and pharmaceutical industries and patent lawyers. The Innovation Act has not yet moved through the Senate, so in the meantime the White House is working to advance patent reform measures on its own. The administration will work with the private sector to provide better technical training to patent examiners and to make more "prior art" available. "Prior art" describes all publicly available information relevant to determining a patent's originality. This should help the Patent and Trademark Office determine whether a patent idea is original: if the idea has already been described in prior art, the patent has no claim to originality.

While the provisions of the Innovation Act, if passed, will help alleviate some of the problems caused by bottom-feeder patent trolls, the Act does not address all of the dysfunction of the current system. The warping of the patent system from a way to protect the intellectual property of inventors who want to bring a product to market into a battleground of corporate warfare remains a large issue. Patent trolls work to acquire broad patents that arguably should not have been granted in the first place, and use them to launch sophisticated attacks against technology companies. Patent reform advocates believe the US still needs a way to efficiently invalidate low-quality patents. One policy that has been proposed is the Covered Business Method program, which is supported by Google but opposed by Microsoft and IBM. Aspects of this problem are illustrated by the case of Rockstar and Google.

Rockstar, Google, and the Patent Wars

At the other end of patent trolling are the larger companies that use patents as ammunition against competitors. This happened most visibly in the bidding war over the patents of Nortel Networks, a Canadian telecommunications company that filed for bankruptcy in 2009. After Nortel went under, some 6,000 patents were up for grabs, sparking a bidding war between Google and the Rockstar Consortium, which comprises Apple, Microsoft, BlackBerry, Ericsson, and Sony. Rockstar outbid Google, bought the patents for $4.5 billion in 2011, and has used them as leverage to extract royalties and file patent disputes with various other companies. Rockstar went on to file lawsuits against Google and other companies, such as HTC, Huawei, and Samsung, for infringements in smartphone technology. Most recently, Rockstar has also set its sights on cable companies that use Cisco products falling under Rockstar's patents.

Whether Rockstar's disputes are legitimate is not the real issue; the real issue is the kind of power the patents give Rockstar over other companies. The bid for these patents was a bid for power: power to accuse and dispute any similarity in technologies and to extract royalties from the companies that use them, and power for one company to significantly stifle new developments in competitors' products and gain an unfair advantage. As Uncle Ben said to Peter Parker (Spider-Man), "With great power comes great responsibility." Rockstar could learn from this, because its use of Nortel's patents and patent law could cause problems for many startups. These startups could easily be hit with a patent infringement lawsuit from any of the Rockstar companies, a lawsuit they are most likely doomed to lose against Rockstar's army of lawyers. This and other aspects of patent ownership law could stifle innovation and growth for many budding companies. The Innovation Act is one small step toward eliminating patent trolling, but it is far from resolving the larger problems that give the bigger players more power.


Lee, Timothy B. 2013. “Patent reform bill passes the house 325 to 91. Here’s what you need to know.” The Washington Post, December 5.

H.R. 3309, 113th Congress: Innovation Act (2013). Retrieved March 4, 2014.

Byers, Alex and Mershon, Erin. 2014. “White House pushes forward with patent reforms.” Politico, February 2014.

Kerr, Dara. 2013. “Google, Samsung, and more sued over Rockstar’s patents.” CNET, October 2013.

Musil, Steven. 2011. “Apple, RIM in group buying Nortel patents for $4.5B.” CNET, June 2011.

Mullin, Joe. 2014. “Cisco moves to fend off Rockstar patent assault on its customers.” Ars Technica, February 2014.

Copyright Law in a Digital World


Copyright is straightforward when it involves only a single author: anyone who creates a work that is more original than a phone book and is fixed in some tangible medium automatically receives copyright ownership (under the Berne Convention Implementation Act) and can exercise any power granted under 17 U.S.C. Section 106.  The recent 9th Circuit ruling in Garcia v. Google, however, demonstrates that when multiple, conflicting parties are involved, the application of copyright law quickly becomes muddled and ambiguous.


Cindy Garcia was paid approximately $500 for three and a half days of filming a minor role in writer and producer Mark Basseley Youssef's film Desert Warrior. This footage, however, was not used for the film described to Garcia at the time, which she believed would be an adventure-themed film. Rather, Youssef used the footage in a scene of an anti-Islamic film, Innocence of Muslims, hosted on YouTube. Following the issuance of a fatwa by an Egyptian cleric and numerous credible death threats, Garcia sought a preliminary injunction to stop Google from hosting the video, on the grounds that the video violated her copyright.  The 9th Circuit overturned a District Court ruling denying her the injunction, but only after navigating a complicated web of questions: who actually owned the copyright, what licenses were granted, and how hosts such as YouTube should react when such ambiguity exists in content they host.

Who owns the copyright?

The Copyright Act of 1976 offers little guidance in determining who owns the copyright in multi-party productions, and this led both the majority and the dissent in Garcia v. Google to scrutinize who actually owned the footage of Garcia.  If it were clear that Garcia was Youssef's employee, or if there were an explicit written agreement, Youssef could have claimed that any work done by Garcia was work-for-hire and that any copyright was his, not Garcia's.  Here, there was no written document, and the majority and dissent split on whether less than four days of work, with no traditional employment benefits such as health insurance, constituted employment.

The dissenting opinion even denied that Garcia had any copyright interest in the film at all, even though Section 102(a) specifically lists dramatic, motion picture, and other audiovisual works as protectable, and the majority asserted that an acting performance, "no matter how crude, humble or obvious," passes the threshold of originality under Feist v. Rural.  The dissent argued that originality is a necessary but insufficient ground for a copyright interest, because Garcia did not fix her work in a medium and did not qualify as an author, given her role as an actress rather than a producer.  If this argument prevails in future rulings, it could jeopardize the control artists have over their performances and the recordings created from them.

What licenses are implied?

Even if Garcia unambiguously owned the rights to her work, and no explicit license granted Youssef the right to produce derivative works, there is still the question of what rights such a transaction implies.  In the film industry, there is precedent that paying an actor to perform creates an implied license, since otherwise the footage would be unusable by the producer.  However, the fact that Youssef misled Garcia about the nature of the film led the court to conclude that any implied license had been exceeded and did not apply.

The situation was further complicated into "an impenetrable thicket of copyright" because, if Youssef did not have an implied license to Garcia's performance and footage, Youssef could argue that Garcia herself committed copyright infringement by performing the script owned by Youssef, and therefore has no copyright interest in the derivative work.  In fact, as the court noted, screenplays are copyrightable and clearly owned by Youssef in this case, and film footage based on a copyrighted screenplay is a derivative work.  Then again, being paid to perform a script may imply a license in the opposite direction, from Youssef to Garcia.  The court did not settle the matter, since all that is necessary for a preliminary injunction is for the plaintiff to demonstrate a likelihood of success on the merits.

Importance and Relevance

This case provides insight into how copyright law is treated in today's collaborative economy and how technology companies address it from their perspective. In particular, because multiple parties were involved in producing the work and there was a lack of formal legal agreements, Garcia v. Google highlighted the difficulty of determining who owns a copyright interest in various parts of the work, and what rights were granted both explicitly and implicitly, before the court can apply Sections 102, 103, and 106. Furthermore, the case raised questions as to how a content host such as Google should treat these sorts of conflicts.

Section 102 applies directly to this case because it requires identifying the author of the footage. Section 103 concerns derivative works: Youssef created a derivative work based on Garcia's performance for his own purpose of making the film Innocence of Muslims. Section 106 ties in through the exclusive rights in Garcia's claimed copyrighted material, including the issue at hand of its distribution through YouTube's hosting service.

Furthermore, we learn how platforms that host copyrighted material deal with copyright disputes. Rather than file suit against Youssef, Garcia filed against Google for hosting the content on Youssef's behalf. Even with its policies and systems for detecting copyright infringement, Google has not developed a model robust enough to account for joint works that merge into a single piece of content, or to seek the permission of all players involved in content hosted on its YouTube platform.

Because Garcia and Youssef had no concrete agreement, everything was carried out on implication and assumption. Just as Garcia was led to believe the footage was for an adventure film, there was no clear understanding of where the material would be distributed. In obtaining the preliminary injunction, Garcia demonstrated a strong likelihood that the footage used in the film was her copyrighted material.  If anything, though, this case is a warning to all actors and performers that they need to make the terms of their work explicit in writing, and not rely on the customs and norms of an industry to protect their interests.

What problems arise from this law’s interaction with technology and society?

Since there was no explicit agreement between Youssef and Garcia, the application of Sections 102, 103, and 106 must be closely examined in light of their implied agreements and of how and where the content was used in each instance. Using the definitions in these sections, it is necessary to determine whether the content in question meets their requirements. Had there been an explicit agreement, its terms and conditions would have made clear whether and how copyright law applied.

The problem that arises in applying copyright law in the domain of technology is how to protect copyrighted material effectively and efficiently.  For example, Google has responded to the DMCA by developing a system, Content ID, for its YouTube hosting service to identify which content is copyrighted and which infringes.  Nevertheless, the system is limited in that it merely scans music and video content to see whether it matches material copyrighted elsewhere.  Although Google has taken the initiative to go beyond what the DMCA requires, the scan is not thorough: it does not take into consideration the actors involved or check the authenticity of authorship. The only authentication of authorship YouTube considers is asking the uploading user whether the content is their original work.
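The limitation described above can be sketched with a toy reference-matcher. This is our simplification, with invented names like `register` and `check_upload`; YouTube's actual Content ID matches audio and video fingerprints rather than exact hashes, but the structural point is the same:

```python
import hashlib

# Toy reference database: fingerprint -> whoever registered the work.
reference_db = {}

def fingerprint(media: bytes) -> str:
    # Stand-in for a real perceptual fingerprint.
    return hashlib.sha256(media).hexdigest()

def register(media: bytes, claimant: str) -> None:
    reference_db[fingerprint(media)] = claimant

def check_upload(media: bytes):
    # Returns the registered claimant, or None if nothing matches.
    return reference_db.get(fingerprint(media))

# The producer registers the footage; the system then flags any
# re-upload as his. It has no way to see that an actor appearing
# in the footage might also hold a copyright interest.
register(b"film footage", "producer")
print(check_upload(b"film footage"))   # prints "producer"
print(check_upload(b"other footage"))  # prints None
```

A match only proves that a registered reference exists; authorship and joint interests are invisible to the matcher, which is exactly the gap the court confronted.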

In an earlier case study of Google's approach to the DMCA and copyright infringement in its search results, Google demonstrated that it treats content as copyrighted when users file a notice of copyright infringement.  In other words, Google deals with copyright infringement by passively reacting to claims at face value, rather than actively verifying the originality and ownership of each piece of content hosted on its services. Google's passive approach also depends on users and society to participate in deciding what is copyrighted and what infringes, by filing complaints or take-down requests for content they consider copyrighted and not belonging to the individuals who uploaded it.

Furthermore, society has a say in what is and is not treated as copyrighted material through this request system. With the quick scan of Content ID and user feedback, Google has shown some initiative in dealing with copyright issues, yet the system is "broken" in that it trusts requests at face value rather than investigating further itself. Google trusts and assumes that participants have a clear understanding of who owns the copyright to a given video and of the standards and criteria for what is copyrightable, and it does not verify validity or investigate whether claimants have ulterior motives for filing requests. Thus, the application of copyright law has become a collaborative effort between users and distribution channels.


Feist v. Rural, 499 U.S. 340 (1991):

Garcia v. Google, Inc. [pdf]:

Copyright meets “Innocence of Muslims”: Ninth Circuit orders removal of movie from YouTube, on copyright grounds:

Youtube’s Letter to Users Addressing Copyright Issues:

Content ID Case Study:

Fair Use and Copyright: GoldieBlox, “Girls,” and the Beastie Boys


1. Facts

In November of 2013, a company called GoldieBlox released a video ad set to the tune of the Beastie Boys’ 1986 song titled “Girls,” but with different lyrics, sung by girls instead of male voices. (Link to video: ). Within the month, three events unfolded:

1) The surviving Beastie Boys members threatened to sue GoldieBlox for copyright infringement.

2) GoldieBlox then filed a pre-emptive claim in California court, asking to have their “Girls” video declared legitimate “fair use” and thus not copyright violation.

3) GoldieBlox announced that the video was legitimate "fair use" but nevertheless updated it with a different soundtrack. They cited as their motive love and respect for the Beastie Boys, as well as respect for the wish, expressed in the will of a Beastie Boys member who had passed away, that his music never be used in ads. (Updated video: )

On December 10, 2013, the Beastie Boys filed a legal complaint alleging copyright and trademark violations by GoldieBlox. (In November the video had gone viral, receiving over 9 million views, in part because of the legal and gender controversies.) We will discuss only the copyright issues in this blog entry.

2. Fair Use Criteria

We will argue that GoldieBlox has a very strong case for fair use, since the video appears to be a parody with a strongly transformative effect on the original song. There should therefore be no copyright violation. However, we still need to weigh all four factors that determine fair use, since even in strong parody cases the Supreme Court has ruled:

“The more transformative the new work, the less will be the significance of other factors, like commercialism, that may weigh against a finding of fair use. The heart of any parodist’s claim to quote from existing material is the use of some elements of a prior author’s composition to create a new one that, at least in part, comments on that author’s work. But that tells courts little about where to draw the line. Thus, like other uses, parody has to work its way through the relevant factors.” (Campbell v. Acuff-Rose Music (92-1292), 510 U.S. 569 (1994))

1) The first criterion of fair use pertains to the “the purpose and character of the use.”

Although the GoldieBlox video ad clearly serves a commercial purpose (for the company, if not a specific toy), there are strong precedents for "transformative" uses in both Perfect 10, Inc. v., Inc. and Campbell v. Acuff-Rose Music: the more strongly transformative a work, the more that factor outweighs its commercial goals. GoldieBlox has a strong case for parody because, while the Beastie Boys song described sexist gender roles to a point of absurd exaggeration, the GoldieBlox version used the same tune to show girls rejecting traditionally stereotyped passive, domestic, or beauty-oriented roles in favor of engineering and technical creativity. Its progressive message also serves purposes that can be seen as educational, given our society's deep-rooted problems in gender equity (in STEM work specifically, but also across society). The transformative character of the parody is almost too obvious to belabor here, but compare the two lyrics: Beastie Boys, "Girls – to do the dishes…Girls – to clean up my room," versus GoldieBlox, "we are all more than princess maids…Girls, to build a spaceship…Girls, to code a new app." It is hard to imagine a more explicit transformation of a song's meaning into its opposite in order to convey criticism and commentary.

2) The nature of the copyrighted work

The Beastie Boys original song is a creative expression and a work of music, and so clearly deserves full copyright protection as a typical example of what copyright law is designed to protect and encourage.

3) The amount and substantiality of the portion used in relation to the copyrighted work as a whole

The entirety of the Beastie Boys song (with different lyrics) was used, and neither party disputes that fact. Under Campbell, however, a parody may need to borrow substantially from the original in order to "conjure up" the work it comments on, so this factor weighs less heavily in parody cases.

4) The effect of the use upon the potential market for or value of the copyrighted work

Ironically, the Beastie Boys' copyrighted song may actually rise in popularity and value thanks to the media frenzy around this controversy (as well as the quality and creativity of the ad). In addition, the Beastie Boys are not claiming financial injury of any kind. So it seems unlikely that the ad would be found infringing on the basis of this factor.

3. Lingering Questions

The case will likely turn on the first of the four fair use factors. We have argued that the precedents and the specific case for parody are quite strong in this example, given the importance past courts have placed on the freedom to create highly transformative works. We might even ask whether any amount of commercialism would be large enough to outweigh a strong parody.

One of our writers, Kiki, shared the toy commercial with her friends on Facebook late last year, apparently without any awareness that doing so could risk breaching the law. To some extent, people like Kiki helped spread the video and make it viral. Suppose GoldieBlox had lost this case: should the personal sharing of an infringing work be enjoined? Social media communications have virtualized traditionally private spaces, and technologies beyond iframes have allowed ever stronger embedding and linking within our shared content.


Pertaining to another part of our reading, we recommend improvements to Section 117.

While exploring blog topics, we found that the sections of the Copyright Act clearly define their subject matter as content: literary works, musical works, and so on. Section 117 defines, to some extent, the copyright treatment of software (and, implicitly, games), but it addresses the work in terms of physical copies. Nowadays, especially with the development of cloud services, back-ups on remote (possibly distributed) servers replace local duplication or adaptation for maintenance or repair. As software moves online, the copyright clause needs to be updated accordingly. The "machine" referred to in the clause no longer has simple boundaries in our era of distributed computing (distributed hardware, functionality, infrastructure, platforms, applications, and software). In Mulligan, Han, and Burstein, "How DRM Content Systems Disrupt Expectations of Personal Use," the scope was explicitly aimed at traditional expectations around music and film consumption, but we are rapidly developing new expectations and norms around both newer media, such as games and software (interactive media apps), and newer sharing and experience mechanisms.

— Renu Bora and Kiki Liu


Beastie Boys original “Girls” song

GoldieBlox original ad video

GoldieBlox updated ad video

U.S. Code, Title 17 – Copyrights

Campbell v. Acuff-Rose Music (92-1292), 510 U.S. 569 (1994)

Perfect 10, Inc. v., Inc., 508 F.3d 1146 (9th Cir. 2007)

Mulligan, Han, Burstein, “How DRM Content Systems Disrupt Expectations of Personal Use”


Location, Location, Location: Trends in tracking offline consumer geolocation data

Philz Coffee Marketing with Euclid Intro Video

Imagine picking up your morning coffee at Philz before class and noticing that the usual 8:45 a.m. rush has not kept you waiting the last few weeks like it did at the beginning of the semester. You also happily notice that Philz has started offering a new berry protein kale snack, perfect for your post-yoga pick-me-up. It's as if Philz can read your mind, and you feel slightly guilty about Starbucks-cheating last week when they offer you a half-off skinny latte for being such a loyal customer. With such efficient crowd control, specialized merchandise, and seemingly random loyalty rewards, why would anyone go anywhere else?

Though that story is entirely fictional, the fact that Philz likely knows how often you visit, how long you wait in line, and what other coffee shops or public locations you've been to in the last several months is not. Feel relieved that you've never purchased from Philz? Well, if you've walked by a store, they know that too.

Philz Coffee is one of many commercial businesses purchasing plug-and-play sensors from data analytics companies to track customers' Wi-Fi-enabled smartphones and collect customer data for marketing and business analytics. This post focuses primarily on Euclid Analytics, though many others offer similar platforms, such as Turnstyle Solutions and iBeacon (see previous blog post). We examine the privacy implications specific to this use of geolocation data and explore the need for privacy frameworks in this type of consumer data collection.

How does Euclid’s technology work, and what data is collected?

Euclid Analytics offers a plug-and-play sensor, data processing, and analytics platform that lets businesses install a small sensor and instantly begin collecting data from customers carrying Wi-Fi-enabled mobile devices. If a device's Wi-Fi is turned on, even if it is not connected to a hotspot, the Euclid sensors collect its location, manufacturer code[1], and MAC address, which is scrambled using a one-way hashing algorithm and then transported over Secure Sockets Layer (SSL) to be stored on Amazon Web Services (AWS).[2]  The one-way hash generates the same scrambled number for each MAC address, so repeat visits and cross-store triangulation can be tied to the same device, even if it is never formally linked to an individual identity. The sensor can track smartphones within 24,000 square feet[3] and can pinpoint a customer's location within a 10-foot radius.[4]
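The deterministic property described above can be sketched in a few lines. This is our illustration only; Euclid's actual hash function, and whether it uses a salt, are not public in this detail:

```python
import hashlib

def pseudonymize(mac: str) -> str:
    # One-way hash: the raw MAC address is never stored, but the same
    # address always produces the same digest.
    return hashlib.sha256(mac.lower().encode()).hexdigest()

monday = pseudonymize("A4:5E:60:C2:19:7F")   # seen at store #1
friday = pseudonymize("a4:5e:60:c2:19:7f")   # seen at store #2

assert monday == friday            # same device recognized across visits
assert "a4:5e:60" not in monday    # digest exposes no raw MAC bytes
```

Because the mapping is deterministic, two sensors in different stores independently compute the same ID for the same phone, which is exactly what enables the cross-store linkage discussed below.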

The range and precision of Euclid devices may pick up consumers who are never patrons of a Euclid-powered store, and may even pick up signals from nearby patrons of other stores. As Euclid's business base expands, it can match your location not only across multiple Philz Coffee locations but across any other store that uses Euclid technology. The technology can determine not only where a customer has been, but how much time they spent in line, in the bathroom, or browsing a certain section.

Why could this information be sensitive?

Recently in London, a marketing firm announced the deployment of trash cans that track the unique hardware identifier of every Wi-Fi-enabled smartphone that passes by.[5] Location data can reveal very private information and put users at physical risk. Because each device is identified by its MAC address, a user's location history can be reconstructed exactly. While the Mobile Location Analytics (MLA) companies try to persuade the public that the transmitted data are aggregated and cannot be linked to an individual or device, the data themselves reveal attributes of the customer: a device that regularly enters a women's restroom, for example, most likely belongs to a woman. Moreover, the combination of MAC addresses and any unencrypted traffic that leaks out can form a powerful database for nefarious purposes.
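There is a concrete reason hashed MAC addresses are weaker than they sound. A MAC is a 24-bit manufacturer prefix (OUI) plus a 24-bit device portion, so an unsalted hash can be reversed by simple enumeration. The sketch below is our own illustration under those assumptions (the prefix is arbitrary), not a demonstrated attack on any vendor:

```python
import hashlib

def hash_mac(mac: str) -> str:
    return hashlib.sha256(mac.encode()).hexdigest()

# Suppose this digest leaks from an analytics database.
target = hash_mac("a4:5e:60:00:03:09")

# An attacker who guesses the manufacturer prefix only has to
# enumerate the 24-bit device portion (at most 2**24 candidates).
OUI = "a4:5e:60"

def crack(target_hash, limit=1 << 24):
    for n in range(limit):
        mac = f"{OUI}:{n >> 16:02x}:{(n >> 8) & 0xff:02x}:{n & 0xff:02x}"
        if hash_mac(mac) == target_hash:
            return mac
    return None

print(crack(target))  # recovers "a4:5e:60:00:03:09"
```

A per-deployment salt, or truncating the digest, would make this enumeration far harder; the point is that "one-way hashing" alone is not the same as anonymization.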

Privacy policies? Notification? Opting out?

Last October, to address public privacy concerns, Euclid announced its adoption of the MLA Code of Conduct[6], drafted cooperatively by seven location analytics companies, government officials, and public policy advocates.[7] As a self-regulatory, enforceable standard, the MLA Code requires that participating companies[8]:

– Provide a detailed privacy notice on their website describing the information they collect.

– Promptly de-identify or de-personalize the MAC addresses they collect.

– Ensure that MLA data is not used for adverse purposes (like employment or healthcare treatment eligibility, for instance).

– Retain MLA data for a limited time only.

– Provide customers with the opportunity to opt out of mobile location tracking.

To analyze the MLA Code's capacity to support privacy, we refer to the privacy framework[9] and find that, while the Code is a good start, it has certain design flaws that can be improved.

First, the Code does not clearly restrict the collection of location information in inappropriate contexts. Although it requires that the data not be used for adverse purposes, those purposes are loosely defined as "employment eligibility, promotion, or retention; credit eligibility; health care treatment eligibility; and insurance eligibility, pricing, or terms." This scope is too narrow to cover the broad potential for misuse, and it implies that as long as MLA data are used for business analytics, retailers are free to access and use the data in any form. While Euclid provides Euclid Express, a free service that allows any user to easily install a sensor and access the data, Euclid does not specify how it ensures that a given use qualifies as an appropriate context. We think the definition of context should be supplemented with more specific variables; for example, certain physical places such as bathrooms, hospitals, and hotels may be inappropriate contexts for retrieving customer data even for business analytics.

Second, user control and consent are weak under the MLA Code. Although the Code requires MLA companies and their clients to inform customers about the collection and use of MLA data, it exempts data that are not unique to an individual or device. The Code also operationalizes consent as an opt-out, which in our inspection is a relatively difficult step for customers to take. Without active notice from retailers, we assume most customers are not even aware of MLA technology; under these circumstances, how can we expect them to take the additional effort to submit an opt-out request? We went through Euclid's opt-out process[10] and found that, after submitting, it takes a surprisingly long seven days to delete the applicant's MAC address from the database.

Finally, the MLA Code approves secondary use of the data. Its fifth principle allows MLA companies to provide MLA data to unaffiliated third parties as long as the third party's use is consistent with the principles of the Code. Because this permits secondary use without requiring the user's consent, it leaves privacy in a vulnerable state.

Privacy frameworks and businesses

Identifying gaps in the current privacy framework is a research topic for many organizations, especially in government. According to a study by the US Government Accountability Office in September 2013[11], there is no overarching federal privacy law governing the collection and sale of personal information, including by information resellers such as Euclid Analytics. Instead, there are a number of laws tailored to specific situations and purposes. One cited example is the Fair Credit Reporting Act, which limits the use of personal information collected to help determine eligibility for things like credit or employment, but does not apply to marketing. Other laws apply to healthcare providers, financial institutions, or the online collection of information about children.

Although private sector companies argue that the current statutory framework for privacy should suffice, the study found gaps did exist and the current framework did not incorporate the Fair Information Practice Principles.

A majority of businesses take the view that an overarching privacy framework would inhibit innovation and efficiency while reducing consumer benefits such as relevant advertising and useful services. The private sector prefers self-regulation over additional legislation, arguing that new regulation would be especially hard on small businesses and start-ups, raising compliance costs and thus hindering innovation and the economy in general.

We believe a comprehensive, 'sector agnostic' privacy framework for businesses would be welcome. Much of the self-regulation by privately held businesses is arbitrary, and a good number of them fail to provide adequate privacy protections. Given the plug-and-play nature of many MLA platforms, some businesses may not even realize there are privacy implications to using this type of analytics, and it is easy to put privacy best practices on the back burner when facing business challenges. The argument that increased regulation decreases commerce is also suspect. While targeted marketing has increased the conversion rate of 'potential buyers' into 'buyers', privacy groups have argued that increased privacy protection actually increases consumer participation: encryption technologies that consumers know about give them the confidence to engage in transactions.

That said, a careful study of the impact of such regulation on businesses, especially small businesses, should be undertaken so as not to over-regulate and unduly limit the information available to them. A carefully prepared list of acceptable and restricted types of personal information should be drawn up, with regulation applying only to the restricted categories.

Recommendations for future action

Any future privacy framework has to be compliant with the Fair Information Practice Principles.

-A person should be provided sufficient notice about information practices before any data is collected. This would preclude companies such as Euclid from gathering data from a passerby who is completely unaware of the sensors and their purpose.

-Users should have easy access to their personal information. A mechanism must be provided for them to monitor how the information is being used and to contest any data they believe is erroneous or undesirable.

Due to the relatively large number of companies engaged in retail analytics or information reselling, it is highly impractical for a person to track information across all of these databases and exercise his or her rights. A single system, tied to an individual, that houses the information collected about that individual would therefore be ideal. Anyone who wishes to track his or her personal information could access the system, see what data has been collected, be it from a visit to Philz or the gym, and the use to which it is being put. Although it might require significant initial investment, such a system would address the privacy concerns and provide a complete picture of a person’s information footprint. If businesses would like to avoid governmental regulation, the onus is on them to implement such a sector-wide system. If they are unwilling due to financial or administrative overhead, the government should step in.
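As a purely illustrative sketch of what one record in such a registry might look like (the class and field names below are our own invention, not an existing standard or product), each collection event would need at minimum a provenance, a stated purpose, and a way for the individual to contest it:

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class CollectionEvent:
    """One piece of personal information collected about an individual."""
    collector: str          # e.g. an analytics firm operating a store's sensors
    collected_at: datetime  # when the data point was captured
    data_type: str          # e.g. "MAC address", "visit duration"
    purpose: str            # the use the data is being put to
    contested: bool = False # flagged by the individual as erroneous/undesirable


@dataclass
class PersonalDataRecord:
    """A single per-individual view across all participating businesses."""
    individual_id: str
    events: list = field(default_factory=list)

    def contest(self, index: int) -> None:
        # The individual exercises the right to contest a specific data point.
        self.events[index].contested = True


record = PersonalDataRecord("person-001")
record.events.append(CollectionEvent(
    collector="Acme Cafe Wi-Fi sensor",
    collected_at=datetime(2014, 1, 15, 9, 30),
    data_type="visit duration",
    purpose="foot-traffic analytics",
))
record.contest(0)  # the individual disputes this entry
```

Whoever operates such a registry — an industry consortium or a government agency — the essential features are the same: provenance, purpose, and a contest mechanism visible to the individual.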

By Sophia Lay, Elaine Sedenberg, and Rahul Verma

[1] Sadowski, Jathan. 2013. “In-store Tracking Companies Try to Self-regulate Privacy.” Slate, July 23.

[2] “Easy to Implement and Scale: Euclid Analytics Is the Easiest Way to Measure Visits, Engagement, and Loyalty to Your Store.” 2013. Euclid.

[3] Ibid.

[4] Clifford, Stephanie, and Quentin Hardy. 2013. “Attention, Shoppers: Store Is Tracking Your Cell.” New York Times, July 14.

[5] Goodin, Dan. 2013. “No, This Isn’t a Scene from Minority Report. This Trash Can Is Stalking You.” Ars Technica.

[6] Future of Privacy Forum. 2013. “Mobile Location Analytics Code of Conduct.”

[7] “Senator Schumer and Tech Companies Announce Important Agreement to Protect Consumer Privacy.” 2013. Euclid.

[8] “What Is the MLA Code of Conduct?” 2013. Future of Privacy Forum.

[9] Doty, Nick, Deirdre K. Mulligan, and Erik Wilde. 2010. “Privacy Issues of the W3C Geolocation API.” UC Berkeley School of Information, Report 2010-038.

[10] “Opt-out.” 2014. Euclid.

[11] US Government Accountability Office. 2013. “Information Resellers: Consumer Privacy Framework Needs to Reflect Changes in Technology and the Marketplace.” GAO-13-663. Published September 25, 2013; publicly released November 15, 2013.

Minors & Their Social Media Blunders: Why Forgiveness is the Best Policy

While Privacy Rights for California Minors in the Digital World aims to empower teenagers with the knowledge and tools to control their online personas, the legislation is limited in that it provides no additional mechanisms for control beyond the delete buttons that the most popular social media pages already feature. Because the comprehensive removal of online content posted by or about minors cannot be legislated, the onus falls on the institutions that deal with minors to practice forgetfulness and avoid punishing individuals for youthful indiscretions.

Many recent examples of minors posting questionable content on social media sites have been well publicized. In some cases, the consequences are immediate and severe. Last year, a Texas teenager’s admittedly sarcastic joke about “shooting up a kindergarten,” made on Facebook during an argument over a computer game, left him facing ten years of jail time on federal terrorism charges. In a less extreme example, a teenager’s live-tweeting of unflattering comments about her fellow attendees at a college admissions session led the university to blacklist her application. Even high schools are getting into the business of surveilling the online lives of teenagers, with a growing number of school districts paying online firms to track student activity and flag content indicative of cyber-bullying, drug use, or other undesirable behavior. In other cases, the consequences come long after content is posted, and the individual may never find out that it was their childhood social media persona that cost them a job or educational opportunity. In a Kaplan phone survey, a third of college admissions officers said that they visit applicants’ social media pages to learn more about them, and nearly all of them had discovered information that negatively impacted an applicant’s chance of admission. And we can only imagine the election campaign dirt-digging that will occur when the first generation to grow up with social media starts running for office.

A number of complex issues are at play here.

First, do minors between ages 13 and 18 understand the ramifications of posting something online?  We can reasonably assume that most of them haven’t imagined the worst-case scenario that might follow posting embarrassing or sensitive content. This is not to say they are not managing their online reputations at all. A recent Pew Internet study makes it clear that many teens have deleted content that they’ve posted online, and nearly 20% surveyed claimed to regret having posted something online. However, once content is online, there is nothing stopping someone who comes across it from copying it, effectively removing it from the control of the original poster.

The Privacy Rights for California Minors in the Digital World legislation outlines the right of minors to delete anything they themselves have posted, but because of the nature of digital content, it does nothing about postings by others that may still impact a user’s online reputation.

Internet users have limited control over the content they post, and almost no control over content posted about them by others. As Peter Fleischer explains in Foggy Thinking About the Right to Oblivion, the cry of “privacy” is often in direct conflict with the U.S. concept of freedom of speech. It would be dangerous to go down the road of allowing individuals to request the removal of information simply because it has something to do with them. The notion of a mechanism that allows content to auto-expire after a given amount of time also doesn’t solve the ownership problem: even if auto-expiration were technically possible, content could still simply be copied by another user and the auto-expiration setting removed.

The technical and financial obstacles of making certain content harder to find in search engines or suing anyone who owns or publishes content about you necessitate other solutions to this thorny problem. Minors are particularly vulnerable: being at the prime age for identity experimentation, they’re likely to misapprehend the potential of a single post to change their lives a few years down the road when they’re trying to get into college or apply for their first jobs. And there is a subset of teens that is even more vulnerable according to Nathan Jurgenson for the Snapchat blog:

“It is deeply important to recognize the harm that permanent media can bring—and that this harm is not evenly distributed. Those with non-normative identities or who are otherwise socially vulnerable have much more at stake being more likely to encounter the potential damages past data can cause by way of shaming and stigma. When social media companies make privacy mistakes it is often folks who are not straight, white, and male who pay the biggest price.”

While it’s conceivable that teens could afford online reputation management services at $20 for a year’s subscription, it remains to be seen whether these services are effective. Teens are probably not very likely to successfully get sensitive content about them hidden in search engine results.

We should expect institutions that deal with minors to use the utmost restraint in gaining access to data about them, especially when that data is not immediately relevant to the relationship those individuals have with the institution.

Also unsettling is what “delete” actually means. Even if content appears to be removed from the public eye, it is likely stored on a server somewhere, perhaps still accessible to some employees of the social media platform. Legislation in Europe has mandated that it be possible to request all the data Facebook has on a particular individual, which has revealed a staggering quantity of information stored about users — for some, the process yields a PDF document of more than 800 pages.

To complicate things further, it’s not unthinkable that even what a minor might maturely and thoughtfully decide not to post might somehow come back to haunt her. Facebook has admitted to storing data on what users type out but then opt to delete, known to Facebook employees as self-censoring. It’s challenging to come up with useful policies and practices if what’s actually being kept flouts all reasonable expectations about behavior — “if I never post it, it doesn’t exist, right?” — and isn’t disclosed in Facebook’s privacy policy.

These issues show that, well-intended legislation aside, it is impossible to provide social forgiveness for minors via solely technical channels. Data Prevention and the Panoptic Society: The Social Benefits of Forgetfulness presents compelling arguments for why forgetfulness is a society-wide, rather than just an individual, benefit. There is a strong parallel between the social benefits of forgetfulness for juvenile crimes, discussed in that paper, and for juvenile social media mistakes. Without a doubt, minors are going to do things that seem stupid and that, reflecting years later, they will regret. Companies and colleges that indiscriminately scrutinize the social media profiles of prospective students or employees will find embarrassing material on nearly anyone with an active social media life; removing such candidates from consideration means missing out on people with a lot of potential. Rather than performing this scrutiny in a policy vacuum, organizations should have guidelines, perhaps informed by national recommendations, on what types of content, and from what time range, are relevant to their decision-making, with an eye toward taking a compassionate and reasonable view of content posted by minors.

By Tiffany Barkley and Robyn Perry

The FTC vs. Unfair and Deceptive Practices in the Internet Era

On January 15, 2014, Apple settled a complaint with the FTC and agreed to return at least $32.5 million to consumers for in-app purchases that children made without a parent’s consent [1]. In-app purchasing is one of the disruptive innovations Apple introduced in its search for new revenue streams; the feature allows users to make purchases related to content within an application. For instance, the popular application ‘Dragon Story’ lets users buy ‘Gold’ within the game using real money. To initiate an in-app purchase, users are required to enter their password to authenticate the transaction.

The issue before the FTC involved applications targeted at children that enabled in-app purchases without the password having to be re-entered each time (Apple has a practice of caching the password for 15 minutes). The result was that after an initial password entry, children were effectively able to keep spending real money on in-app purchases without parental consent. Furthermore, applications identified as suitable for children had in-app purchases designed in such a way that it was difficult for children to tell whether they were spending real or virtual money. The FTC argued that this practice was a ‘material’ misrepresentation causing injury; that the injury was substantial and not outweighed by any countervailing benefits to consumers or competition; and that it was an injury consumers themselves could not reasonably have avoided, and hence unfair [2]. In one case, a child spent over $2,600 on ‘Tap Pet Hotel’, with no way for a parent to have known those charges could occur [1].
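The mechanics of the 15-minute window are easy to illustrate. The sketch below is hypothetical logic of our own, not Apple’s actual implementation: once a valid password is cached, every purchase inside the window succeeds with no new prompt, regardless of who is holding the device.

```python
import time

# Assumed cache window, per the FTC complaint's description of Apple's practice
CACHE_WINDOW_SECONDS = 15 * 60


class PurchaseSession:
    """Hypothetical sketch of a password-caching in-app purchase flow."""

    def __init__(self):
        self._last_auth_time = None  # no password entered yet

    def authenticate(self, password_ok: bool) -> None:
        # The parent enters the password once to authorize a purchase...
        if password_ok:
            self._last_auth_time = time.time()

    def purchase(self, item: str) -> bool:
        # ...and any purchase attempted inside the cache window is approved
        # without a fresh prompt — whether the parent or a child taps "Buy".
        if self._last_auth_time is None:
            return False  # would re-prompt for a password
        return time.time() - self._last_auth_time < CACHE_WINDOW_SECONDS


session = PurchaseSession()
session.authenticate(password_ok=True)  # parent authorizes one purchase
session.purchase("99 Gold")             # child's follow-on purchase succeeds
```

The design choice at issue is visible in `purchase`: the check is purely time-based, so nothing distinguishes the authorized purchaser from anyone else using the device during the window.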

These facts lead one to ask: how did this situation arise? Apple is reputed to be a customer-friendly company, and admittedly had full knowledge of all of the features that drew the FTC’s complaint (all such applications are examined and approved for their content before entering the App Store). So what accounts for such a widespread unfair practice continuing for so long without interruption?

In our opinion, in-app purchases are just one technological innovation to come along and show that the FTC’s enforcement mechanism is unable to keep up with the pace of technology in the internet era [3].

This week’s readings cover several examples illustrating the incredible lag between an illegal practice and the FTC’s ability to force an end to such practices. (As a procedural note, one should know that when the FTC pursues organizations that violate its policies on ‘deceptive’ or ‘unfair’ practices, a ‘complaint’ is issued by the FTC commission detailing what violations have occurred.)

In its complaint issued against Facebook in July 2012, the commissioners highlight eight counts of violation, including misleading users about the terms of its privacy policy, falsely representing site features, providing information to third-party advertisers despite pledges to the contrary, and certifying Facebook applications as secure without having performed any such verification. In its complaint against Google’s ‘Buzz’ product in 2010, the FTC cites the company for using registered Gmail users’ information to populate the new social networking service without seeking consumers’ consent. But in each of these cases, the FTC’s complaint was issued long after the violations had occurred; in the Facebook complaint, for instance, the FTC cites violations that began as early as 2007.

This delay seems partly due to the rules that direct the FTC’s investigation and enforcement actions. For instance, even after all of the proper steps have been taken – an investigation, and a consensus among the commissioners that wrongdoing has occurred – the commission’s findings only become binding 60 days after issuance, unless the order is stayed by a court. Furthermore, even a respondent who ignores an FTC commission order is not criminally liable. As the FTC states on its website: “If a respondent violates a final order, it is liable for a civil penalty of up to $16,000 for each violation. The penalty is assessed by a district court in a suit brought to enforce the Commission’s order.” Even in a situation of clear wrongdoing on which the FTC has ruled, a court must still intervene to provide injunctive relief and assess damages (keep in mind that each of these actions can take months, possibly years). And what is $16,000 in an era of multi-billion-dollar companies, where a single day can drive millions of dollars in online purchases?

Set up in 1914, the FTC was charged with preventing unfair methods of competition in commerce as part of the battle to “bust the trusts.” To protect consumers from unfair practices and deception, it has been writing rules telling businesses how to deal with their customers on an industry-by-industry basis. But continuously evolving technologies – mobile devices, online surveillance systems, automated healthcare systems, networked and connected cars, houses, and locks – bring new potential threats to consumers’ privacy and security. Today, new threats emerge faster than ever, posing a great challenge to the FTC’s ability to regulate these industries in a timely manner.

Even policies that apply to online services such as Facebook and Google may not be relevant for connected devices and the Internet of Things. Furthermore, the delays built into the FTC’s enforcement actions mean that even if new rules could be conceived in time, they would still take months to take effect [3]. Months may not have mattered in the era of television, but in the era of click-commerce, such a delay renders current forms of FTC policing ineffective.

As new, connected technologies integrate more and more into our daily lives, there is an increased threat of attacks on consumers, especially in the area of privacy. These threats deserve to be faced with the same rigor and urgency with which the government tackles them in the physical realm. In a technological society, justice must be able to keep pace with new marketplaces and the threats they pose to consumers.

Can the FTC, as it currently exists, possibly keep up with the pace of new kinds of deceptive practices in evolving technologies? Can its enforcement and investigative framework be updated to respond swiftly to illegal practices? Or does another body, operating under a different framework, need to be conceived of to face these challenges? Such are the questions we will need to answer if our society is to be able to buy, trade, and live safely in the internet era.






Interconnectivity and Lack of Transparency

iBeacon is a new location-based technology that Apple recently introduced in iOS. Instead of relying on GPS data, iBeacon uses a Bluetooth low energy signal to identify nearby iPhone, iPad, and iPod Touch devices with a high degree of precision [1]. Several retailers, including Apple and Macy’s, have started using the technology to alert customers to special deals on merchandise in their vicinity through beacons placed throughout the store [2]. This new way to track iPhone users inside buildings raises serious concerns about consumer privacy, and about how well users understand the ramifications of setting this micro-location tracker to “on” or “off”. The Consumer Privacy Bill of Rights, released last February, with its principles of Individual Control, Transparency, Respect for Context, and Focused Collection, not only dusts off familiar concerns about how, when, and to what extent companies like Apple, Facebook, and Google might collect personal data; most remarkably for our purposes, it also raises questions about the roles, rights, and responsibilities that third-party companies have in the use of consumer personal data.

In the case of iBeacon, Apple clearly states that allowing third-party apps and websites to access your device’s location puts you under the third party’s privacy policy and terms of use, rendering Apple neither responsible nor liable for how data is collected or shared. But apps often pass their data on to other third parties, making it even more difficult for consumers to understand what kind of data they are sharing, and with whom. The privacy policy for a shopping assistant app called inMarket, for example, warns users that third parties may collect many types of data about them once they download the app: “They may use Tracking Technologies in connection with our Service, which may include the collection of information about your online activities over time and across third party sites or services. Their privacy policies, not ours, govern their practices.” [3] The assiduous inMarket user might notice that the company considers Unique Device Identifiers to be “non-personally identifying information” and asserts that “If we de-identify data about you, it is not treated as Personal Information by us, and we may share it with others freely.” While inMarket may distinguish between personal and non-personal data in its privacy policy, it undermines the Transparency principle of the Consumer Privacy Bill of Rights by making this information nearly impossible to find (especially on the small screen of a mobile device), and so vague that users are left with a less than meaningful understanding of the privacy risks and options associated with their use of the product.

Issues of companies leaving their users in the dark about what data-collection contexts they are in were raised again in late 2013, when Google came out with a new feature for its advertising system called “Shared Endorsements,” which could put users’ names, photos, and Google+ profiles next to advertisements for companies they have endorsed, for each user’s social network to see. [4] Such a personal recommendation system for advertising raises several questions about the Consumer Privacy Bill of Rights, namely its Individual Control, Respect for Context, Transparency, and Focused Collection principles. Google has apparently taken a cue from Facebook — which, at the time of the Shared Endorsements rollout, had just been sued for $20 million for not allowing users to opt out of its similar system, “sponsored stories” — and given users an option to opt out of the service. Still, the implications of Google’s personal recommendation advertising differ from those of other social networks. As one news source notes, the victories for consumer privacy gained by the Shared Endorsements opt-out option “may not extend far beyond that,” since, technically, users are opting out of having their photo shown in advertisements, not necessarily in other Google products. [5] In fact, it is precisely because of the diversity, interconnectedness, and sheer number of Google products that the company is in a better position than Facebook, for example, to redefine and potentially violate the Consumer Privacy Bill of Rights’ principles of Respect for Context and Focused Collection, which qualify the appropriate context and amount of data users can expect companies to collect, use, and disclose.