The need for a narrowly tailored Computer Fraud and Abuse Act

Authors: Derek Kan, Rohan Salantry, Shreyas, Max Gutman

Congress enacted its first anti-hacking law, the Computer Fraud and Abuse Act (CFAA), in 1986 to address federal computer-related offenses. The CFAA was originally enacted to protect against fraudulent use of federal government computers and computers used by financial institutions; with the proliferation of computer technology, Congress has since amended the act to include civil and employment liability. Now the CFAA, which was initially intended to punish hackers and other digital trespassers who damage computer systems or steal customer information, can also be used to prosecute insiders who download sensitive data without their employer’s approval, or possibly even those who access Facebook during working hours.

Critics argue that the language in the CFAA is too vague, and that the effect of such a broad provision is that any information access performed on the Internet in violation of a private computer use policy can be treated as a federal crime. Anyone who uses a computer for private purposes and varied activities on the web might be affected by the CFAA; this is particularly important when evaluating on-the-job computer use by employees. One of the most debated areas of the law centers on the provision making it illegal to “intentionally access a computer without authorization or exceeds authorized access.” Can liability be imposed on an employee who is given lawful access to a computer and the information on it, but later misuses that information in violation of the employer’s personal use policies? Or does the statute reach only employees who are not permitted to access certain information, but do so anyway?

Under § 1030(a), an employer seeking redress in court must establish that an employee (a) knowingly accessed a computer without authorization or exceeded authorized access; (b) obtained private information; and (c) did so with intent to defraud, causing at least $5,000 in loss to the employer.

The U.S. Supreme Court has yet to clarify the scope of the CFAA, leaving companies to plead broad CFAA claims against disloyal employees for restitution, and thus leaving trial courts to tussle with the issue. Some employers have tested how the CFAA might apply to former employees who captured or destroyed company information upon leaving their employment. The activity of employees in these instances is a far cry from the innocuous act of checking Facebook during office hours, but both fall on the same spectrum of access. Relevant case law, however, has not produced a strong consensus on the matter.

In International Airport Centers, L.L.C. v. Citrin, 440 F.3d 418 (7th Cir. 2006), the court considered whether an employer could pursue action against a former employee who intentionally erased files from a company laptop before quitting to start a competing business. The Seventh Circuit ruled that an employee who breaches the duty of loyalty to the employer thereby becomes “unauthorized” to access the employer’s computer. To the contrary, the Ninth Circuit held in LVRC Holdings, LLC v. Brekka, 581 F.3d 1127 (9th Cir. 2009) that an employee’s act of disloyalty does not render his or her access to an employer’s computer “unauthorized.” The court found no statutory language supporting the idea that authorization ends implicitly when an employee decides to act contrary to the employer’s interest: “It is the employer’s decision to allow or to terminate an employee’s authorization to access a computer that determines whether the employee is with or without authorization.”

A recent case involving the vagueness of the CFAA’s language is United States v. Nosal, 642 F.3d 781 (9th Cir. 2011). Like the previously litigated cases, Nosal involves a company’s claim that employees exceeded authorized access. The company also claims it was defrauded by a band of employees and Nosal (an ex-employee), who used information stored on company computers to start a new business. The Ninth Circuit’s majority opinion concluded that the government’s proposed definition of “exceeds authorized access” would broaden the CFAA to the point of criminalizing everyday computer use. The CFAA charges against Nosal were dismissed.

What does it mean when courts are unclear about what “unauthorized access” is? Many companies take matters into their own hands and block unauthorized users from accessing specific file directories, intranet pages, or Internet content. Unauthorized access in the context of technological enforcement seems clear-cut, but when employees find ways to breach these access protections to view Facebook, use Gchat, or listen to music during work hours, they face the risk of prosecution under the CFAA.

Beyond computer use in the workplace, the CFAA’s effect on consumers is tremendous, as it applies to everyone who uses an Internet-enabled device, whether a personal computer, smartphone, or tablet. The CFAA has the ability to transform what many would consider minor dalliances, such as lying about one’s age on a social networking site, into prosecutable criminal offenses. Consumers access the Internet every day, in many cases from several devices at once; that consumers are largely unaware of private computer use policies presents challenges for both adherence and prosecution.

Under § 1030(a)(2)(C), the Justice Department prosecuted Lori Drew, a woman who posed as a 17-year-old boy to cyberbully her daughter’s classmate on the social network Myspace. Like those of other social networking sites, the Myspace Terms of Service prohibit users from falsifying their identity and personal information. It has become quite common, however, for consumers to falsify information on social networking sites to invent an impressive web persona or to disguise their true identities for security purposes. Although many would deem misrepresenting one’s age a harmless act, under the breadth of the CFAA, those who do so could be prosecuted criminally.

Although the government assures us that it would not prosecute “minor violations,” this term is open to interpretation, and it is unclear how prosecutors will evaluate it in the future. Recent judgments have declined to adopt a broad interpretation of the statute, reasoning that it would criminalize common computer-related activities. The Circuit Court suggested that previous decisions, delivered by lower courts, had not considered the statute’s effect on ordinary citizens. It further suggested that vague criminal statutes such as the CFAA should be construed narrowly, as it is not the court’s place to put words into the mouths of lawmakers. Finally, the courts have invoked the rule of lenity, which ensures that citizens have fair notice of what conduct would be considered criminal.

The ubiquity of computers and the ease with which information can be transported and corrupted by users have led to the enforcement of an act that needs refinement. The CFAA was crafted before the Internet was omnipresent in the workplace. Employees today have access to an array of sensitive company information, leading to scenarios the law’s drafters may never have envisioned. The language Congress defined at the dawn of enterprise computing is not sufficient for today’s work environments, and the courts are struggling to make sense of it. In many cases, the result hinges on what authorization and access mean in the new paradigm of work. The final word will come from Congress, but for now, the most recent cases have affirmed that computer users will not be thrown in jail for violating company use policies or other terms of service agreements.


The Potential of Thingiverse vs. The Threat of History

Authors: Andrew Lomeli & Sasaki Masanori

Thingiverse is a popular website where users can upload original digital design files that can be fabricated by personal 3D printers and laser cutters. While users generally rely on public-domain and Creative Commons licenses, they can choose which type of license to apply to their designs. The site is most popular in the 3D-printing community, and it currently hosts over 15,000 designs. The open-source nature of the site further allows users to build off of each other’s designs, inevitably resulting in the creation of many mashup and collaborative projects.

This past February, one of the first copyright disputes involving 3D printing almost came to fruition, when Ulrich Schwanitz sent Thingiverse a DMCA takedown notice after Artur Tchoukanov posted a design for a 3D Penrose Triangle that Schwanitz had originally publicized via a YouTube video. Schwanitz eventually dropped the claim, but the threat of the DMCA stalling innovation is ever-present.

Thingiverse provides a unique service that could make the procurement of consumer goods and everyday tools easier. For example, a user with a broken doorknob could log on, download a comparable doorknob design, and print it using his or her personal 3D printer, rather than having to travel to a physical store and pay out of pocket. The potential is truly limitless: a means for consumers to instantly trade not only ideas but also physical objects across thousands of miles, without any of the logistical challenges and fees of shipping.

As wonderful as this service may appear, there are serious copyright issues at play. Much like other file-uploading services, Thingiverse provides a medium for potential infringers to post designs for copyrighted or patented objects, whether a figurine of a popular cartoon character or a practical medical device. The potential for this threat, however, is insignificant compared to the plethora of beneficial services Thingiverse may provide.

The Sony v. Universal case is of particular interest to Thingiverse developers, as it provides us with a unique test for determining whether such a service threatens copyright. The case set the fundamental precedent that the service/equipment “does not constitute contributory infringement if the product is widely used for legitimate, unobjectionable purposes.” Home recording devices escaped running afoul of copyright through their ability to fulfill the practical, non-infringing practice of private, non-commercial time-shifting. Moreover, copyright holders could not prevent other copyright holders from authorizing such time-shifting, and even non-authorized recording could be classified as fair use.

Applying this test to the Thingiverse case, we ask ourselves if the file-sharing enabled by Thingiverse fulfills other legitimate, unobjectionable purposes. Moreover, does Thingiverse fully grasp the potential for users to infringe as they log onto the site? What potential safeguards are in place to monitor such practices?

Second, we see that in cases where people build upon a copyrighted design, users may in fact be making fair use of that design. By adjusting and building upon existing products, users transform them into different objects. Of course, such an application of fair use may not always be ideal, but such discussions would at least be thought-provoking and have tremendous implications for the future.

Third, the spread of 3D printers raises questions related not only to liability rules in copyright, but also to liability rules in design patent. As we saw in Apple’s patent claims, a US design patent covers the ornamental design of an object having practical utility. In the coming 3D-printing era, how will industrial design rights be protected?

For example, using a 3D printer, we will easily be able to copy plastic goods at very low cost. Imagine this situation: a Thingiverse user buys a neatly designed smartphone case that is protected by a design patent; if he or she scans the case and uploads its 3D CAD data to Thingiverse, what problems may arise? Many people would start printing the smartphone case instead of buying it. In that event, the case’s manufacturer might sue Thingiverse for secondary liability for infringement of its design patent.

One problem is that current patent law has no “safe harbor” rules like the DMCA’s. The DMCA sometimes stalls innovation in web communities, but it is also true that it shields the web industry, such as internet service providers, from troublesome liability. There are no established procedures for avoiding secondary liability for this kind of patent infringement. This poses a serious risk for hosting services, and the chilling effect might impede the growth of the online 3D-printing community.

We should remember that intellectual property law has historically rebalanced rights against uses as technology has evolved. When the pen was the only tool for copying information, there were no copyright rules. After the invention of the printing press significantly decreased the cost of copying books, the first copyright act was enacted in England.

Long after, when Sony released the VCR, which enabled copying TV programs and movies at home, the court struck a new balance between the social benefit and the cost. Digitization then reduced the cost of copying to nearly zero, and Congress enacted the DMCA to restore the balance for the digital era. The court articulated yet another balance in the Grokster case.

Intellectual property law and policy have co-evolved with technology for a long time. 3D printing, a technology that fuses atoms and bits, will urge changes not only in copyright but also in design patent law. We should keep watching these changes.

Much as the printing press revolutionized the sharing of the written word, the player piano shifted the value of music, the radio made songs more accessible, and VHS and cassette tapes allowed for easy copying of video and music, 3D printers and the Thingiverse community are facilitating the easy exchange of simple inanimate objects. And much like all of those revolutionary technologies, this new force is meeting extreme blowback from rights-holders looking to preserve their status quo business practices. Those earlier technologies survived the test of time and ultimately expanded audiences; it will be exciting to see whether Thingiverse mimics history, and how exactly it will revolutionize the market for consumer goods.

Secondary Copyright Liability and the End of Megaupload

By: Julia Kosheleva, Kate Rushton, Ryan Baker, Corey Hyllested, and Christine Petrozzo

Megaupload, a Hong Kong-based online file storage service, was founded in 2005 by Kim Dotcom, a resident of New Zealand. Until recently, the company provided cloud storage, or “cyberlockers,” to its 66.6 million users and deployed hundreds of servers throughout the world, including in the U.S., the Netherlands, and France. The company made money by selling advertisements and premium subscriptions. Megaupload’s basic function allowed users to upload files to “lockers,” where each file was stored on a company server. Users could then publish the file name and the locker URL on public blogs, from which anyone could download or stream the content. In its Terms of Service, Megaupload required users to agree not to upload copyrighted materials. The company also claims that, throughout its years of operation, it took down unauthorized content whenever its registered DMCA agent was notified of such material by copyright holders. It also created a so-called “abuse tool” that allowed copyright holders to remove files.

In January 2012, a federal grand jury in Alexandria, Va., charged Megaupload with abetting criminal copyright infringement, alleging illegal distribution of at least $500 million worth of copyrighted music, TV shows, movies, video games, software, books, and images. The indictment claims that 90 percent of the content uploaded to Megaupload was infringing, and that Megaupload’s employees were well aware the site was used for uploading infringing materials, as evidenced by internal email exchanges and chat logs. Moreover, internal communications indicated that employees understood the importance of this content to the success of the business.

Megaupload’s case is similar to Viacom v. YouTube in many respects, and it will raise similar legal questions. Does general awareness of illegal materials disqualify a service provider from DMCA safe harbor protection, or only knowledge of specific files? Does the DMCA provision speak to “willful blindness”? Can a service provider that receives a financial benefit directly attributable to infringing activity qualify for safe harbor? On the other hand, this case will expose additional complex legal issues, as it extends beyond U.S. borders and deals with a substantially larger volume of known infringing content.

The DMCA’s 17 U.S.C. § 512(c) provision specifies a set of requirements that a service provider must meet in order to be eligible for safe harbor immunity. The first of these stipulates that the provider must not have actual, subjective knowledge that the material or activity on its network is infringing, and that it must not be aware of facts or circumstances that would lead an objective, reasonable person to find a likelihood of infringing activity (as interpreted by the Second Circuit in Viacom v. YouTube).

In Viacom, the Court of Appeals clarified that disqualifying knowledge must be of “specific and identifiable infringements,” and that “mere knowledge” of the presence of infringing materials on the site is not enough to disqualify a provider from DMCA safe harbor. The court found several internal YouTube communications suggesting that staff members did have awareness of specific cases of infringement and may not have acted to remove them expeditiously. It held that YouTube may have been ineligible for safe harbor protection and remanded to the District Court for fact-finding.

Similarly, the government’s case that Megaupload employees had actual knowledge of specific infringements is supported by certain site features and internal communications. One contention is that the site’s “Top 100” download lists were doctored to remove mentions of infringing material and did not “actually portray the most popular downloads” (Ars Technica). This suggests Megaupload’s staff may have had direct knowledge that a file was infringing and affirmatively hid it from view. The government also points to internal emails in which employees mention infringing files by name, with no indication of concern or plans to remove them. There are also allegations that employees themselves uploaded a copyrighted “BBC Earth” episode in 2008.

If these facts were presented to the same Second Circuit as in Viacom, the court would be highly unlikely to find Megaupload entitled to DMCA safe harbor protection, based on the “actual knowledge” provisions set forth in 17 U.S.C. § 512(c)(1)(A).

Under § 512(c)(1)(B) of the DMCA, a website service provider benefits from safe harbor only if, when it has the “right and ability to control” infringing activity, it does not receive a financial benefit directly attributable to that activity. In Viacom, the court examined the two interpretations of the statutory phrase offered by the defendant and the plaintiff: (1) YouTube did not know of the infringing material, so how could it control the activity; and (2) YouTube had the right to remove or delete any video on its servers, so it had the ability to control the infringing activity. The same arguments can be made for Megaupload, even though it is evident the company’s employees had actual knowledge of the infringing material. The court remained vague on the phrase in Viacom, but suggested there must be “something more” than the ability to remove or block access to content. What is the “something more”? The court never specified, and the question remains open today, as the case has been remanded to the trial court for more fact-finding.


Although there is still room for interpretation regarding the “right and ability to control,” it can be argued YouTube directly benefited from the infringing content through its display advertisements. Similarly, it has been reported that Megaupload generated more than $175 million in illegal profits through advertising revenue (Ars Technica). Until the lower court clarifies the phrase “right and ability to control,” whether Megaupload is disqualified from safe harbor benefits remains unclear.

According to § 512(c)(1)(C) of the DMCA, a website hosting user-generated content is offered safe harbor as long as it “responds expeditiously to remove, or disable access to, the material that is claimed to be infringing or to be the subject of infringing activity.” YouTube complied with this portion of the DMCA, promptly deleting videos when it received notice of infringement; in fact, Viacom notes that all of the videos disputed in the case have since been taken down. YouTube also developed the Content ID system, which scans uploaded audio and video against reference material provided by copyright holders and automatically detects and flags matching content.

In contrast, one of the strikes against Megaupload is that its “Abuse Tool,” ostensibly designed to honor DMCA takedown notices, removed only links to infringing content, rather than the content itself. This loose interpretation of the “remove, or disable access to” requirement may prove to be its downfall, because Megaupload’s architecture allows many links to the same file. The reason is technical: if a user uploaded a file that Megaupload already stored, the service simply created a new link to the existing copy rather than storing the file again. Removing a single link therefore does not remove access to the infringing content, because other links to the same file still work. As a result, courts will likely find this to be non-compliance with safe harbor provision § 512(c)(1)(C).
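The deduplicating architecture described above can be sketched in a few lines of Python. This is a hypothetical toy model, not Megaupload's actual code: each file is stored once under a hash of its contents, every upload of the same bytes merely adds a new link, and a takedown removes only the link it names, leaving the file reachable through any other link.

```python
import hashlib


class Cyberlocker:
    """Toy model of a deduplicating file host (illustrative only)."""

    def __init__(self):
        self.files = {}  # content hash -> file bytes (each file stored once)
        self.links = {}  # public link id -> content hash

    def upload(self, data: bytes, link_id: str) -> str:
        digest = hashlib.sha256(data).hexdigest()
        # Deduplication: if these bytes were seen before, reuse the stored copy
        # and just register a new link pointing at it.
        self.files.setdefault(digest, data)
        self.links[link_id] = digest
        return link_id

    def takedown(self, link_id: str) -> None:
        # A takedown notice removes only the named link, not the file itself.
        self.links.pop(link_id, None)

    def download(self, link_id: str):
        digest = self.links.get(link_id)
        return self.files.get(digest) if digest is not None else None


locker = Cyberlocker()
locker.upload(b"infringing movie", "link-a")
locker.upload(b"infringing movie", "link-b")  # same bytes: one file, two links

locker.takedown("link-a")            # DMCA notice kills one link...
print(locker.download("link-a"))     # None: this link is dead
print(locker.download("link-b"))     # b'infringing movie': still accessible
```

The last two lines are the legal problem in miniature: the notice target is gone, but the allegedly infringing file remains available through every other link.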

As for new developments arising out of the Megaupload case, Kim Dotcom is working on a few projects: Mega, a newer digital locker with more protection for users, and MegaBox, a music-sharing service that allows artists to sell their work and keep 90 percent of the revenue.

Mega, following the domain shutdown in Gabon, is slated to go live in January 2013, on the anniversary of the raid on the Megaupload servers. Mega, like Megaupload before it, is a cloud-based service that allows users to upload, access, and share files. The new service will also provide one-click encryption of files. The decryption key will be held by the user, not stored by Mega, which prevents Mega from being able to review or be aware of what files are uploaded to its servers. Another key difference is that Mega will not remove duplicate copies. Thus, ten uploads of the film “The Big Lebowski” create ten different copies, each encrypted with a different key. Kim Dotcom maintains this will not be a “middle finger” to Hollywood or the U.S. Department of Justice. Mega will also give content creators, such as film studios, the ability and access to remove files. Before gaining access to that tool, content creators must agree not to sue Kim Dotcom or the Mega service. The EFF’s Julie Samuels hinted it is just the next iteration of a cat-and-mouse game on the Internet.
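The "key never leaves the client" design can be illustrated with a deliberately simplified sketch. Mega has published no cipher details, so this uses a toy XOR keystream purely to show the division of knowledge: the server holds only ciphertext, and only the key-holding user can recover the file. (Do not use this construction for real cryptography.)

```python
import hashlib
import os


def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream (NOT a real cipher): hash the key with a running counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))


decrypt = encrypt  # XOR stream ciphers are their own inverse

# Client side: the key is generated locally and never sent to the server.
key = os.urandom(32)
ciphertext = encrypt(key, b"user file contents")

# Server side: stores only opaque ciphertext, so it cannot inspect the file.
server_storage = {"file-1": ciphertext}

# Only someone holding the key can recover the plaintext.
print(decrypt(key, server_storage["file-1"]))  # b'user file contents'
```

Note how this also explains the end of deduplication: two users uploading the same film use different keys, so the server sees two unrelated ciphertexts and cannot tell they are the same work.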

MegaBox, the not-yet-released music service, will allow users to purchase music, or they can install a tool that replaces up to 15 percent of their Internet ads with ads provided by MegaBox.


Patent Law: A Hurdle for the Software Industry and Innovation

By: Divya Karthikeyan, AJ Renold, Ashley Christopher Bas Desouza, Haroon Rasheed, and Jake Hartnell


The purpose of a patent is to provide incentive for technological innovation: not only to conceive of an invention, but also to allow the idea to enter the market so that a substantial benefit can be derived by the public. Yet in light of the Information Revolution, software has presented a unique challenge to the US patent system. As more of the world economy conducts its business online and more people become dependent on software in one form or another, software patents have grown proportionally more important as a means of differentiation in the marketplace. However, many have come to view current patent law as problematic for the software industry and a hindrance to innovation. Justice Kennedy noted in Bilski v. Kappos, “With ever more people seeking patent protections for their inventions, patent law faces a great challenge in striking the balance between protecting inventors and not granting monopolies over procedures that others would discover by independent, creative application of general principles.” This balance is very much under threat in the software industry.

Who benefits?

Under the current system, the substantial benefits of software patents have accrued to a few individuals and corporations. Lately there have been a number of high-profile cases involving large companies, such as Apple v. Samsung and Apple v. Motorola, suing each other for patent infringement. While these “patent wars” generate a great deal of media interest, with commentators lamenting the potential damage to innovation, the large companies involved usually reconcile through cross-licensing of their respective patents. They escape most of the trouble patents cause while enjoying a large share of the power patents confer. This is why the chief supporters of software patents are multinational corporations, which also have a great deal of influence with governments.

Large companies also force small companies and developers to cross-license with them. Because of their influence and deep pockets, large companies often try, and succeed, in getting their features incorporated as industry-wide standards. An example took place in the late 1990s when Microsoft, then at the peak of its powers, menaced the online community by obtaining a patent affecting style sheets, after “encouraging” the W3C to incorporate the feature into the standard. This was not the first time a standards group had been lured into a patent’s maw. Public outcry on that occasion forced Microsoft to refrain from enforcing the patent; other large corporations might not be so merciful, because the economic benefits of enforcing a patent are simply too great.

While small companies can benefit from a patent, they often do not own sizeable patent portfolios, so a large or complicated product risks infringing some previously issued patent. In addition, it is much harder for a small inventor to obtain a software patent: after the many recent Supreme Court cases, software innovators must navigate new rules before obtaining a patent, which increases the cost of preparing and prosecuting software patent applications. Small inventors usually have a difficult time commercializing their products and enforcing their patents. They also lack the resources to track every instance of patent infringement, defend their patents, or file petitions to invalidate dubious patents. Litigation is quite complex, even without the difficulty of explaining technologies to juries and judges. All these factors reduce the incentive for smaller developers to innovate, a departure from the intent of the framers of the patent system.

It is in this environment that the second beneficiary of the current patent system steps in: the “patent troll.” These are individuals, companies, or groups who purchase software patents, usually from small inventors who are unable or unwilling to bring their products to market. Their rationale is that they are leveling the playing field between big companies and small patent holders, who previously “were getting their rights steamrolled by big companies.” The small inventor gets compensated for his or her work and is not bullied by big companies into cross-licensing. Surely this gets us back to the intent behind patents, i.e., promoting innovation in science and technology. However, a closer look at the trolls’ business model reveals that they are more opportunistic than altruistic. Trolls run a virtually risk-free business: they buy patents and enforce them through licensing or litigation. They build no products but earn revenue by suing others, a very lucrative business model indeed. The key point is that the licensing fee must be low enough to entice “infringers” to settle rather than endure messy litigation. Trolls also do not discriminate in which companies they target. Large companies prefer to settle to avoid unwanted publicity, while small companies do so out of compulsion, as they lack the resources to defend themselves.

One such patent troll is Webvention, which holds a patent (U.S. Patent No. 5,251,294) on the functionality known as a “mouseover” or “preview,” which displays a short summary of the information behind an internet link when a computer user points the mouse or cursor at the link. Webvention “claims that 339 companies, including Google, Nokia and Orbitz, have paid the one-time licensing fee of $80,000” over the four years it has owned the patent. Yet another troll is PersonalWeb Technologies, which sued Rackspace over its hosting of the GitHub service. This case is a prime example of the deficiencies of current patent law: GitHub is the one doing the innovating, while PersonalWeb is taking advantage of a bad law for profit, uninterested in innovation or the public good.

In the hands of trolls, software patents are weapons of mass extortion. Indeed, many technologists argue that software patents are deleterious to the public good and should be abolished altogether.

Consequences and losers:

The increasing number of patent lawsuits within the software and technology industry has created a chilling effect on innovation that the media has been quick to point out. In his recent Wired article on software patents, Richard Stallman, a programming freedom advocate, opines that patents protecting computational ideas are stifling the development of software. In part this is because software draws on thousands of ideas, many of them potentially patented designs or computations, to create innovative products. Stallman also asserts that the patent system produces “bad quality” patents, which we explore below in SurfCast’s recent lawsuit against Microsoft. (Wired article) Ultimately, the volume of software patents and of patent litigation creates huge legal costs and can stifle innovation for both small and large corporations.

For small corporations, with limited capacity to defend their new software products, the thicket of software and design patents creates a blind spot for potential lawsuits from large corporations and patent trolls. A 2011 lawsuit filed by Lodsys against multiple iOS and Android developers, alleging infringement of a patent covering in-app purchases, is believed to have targeted small developers in the hope that they would settle for a small licensing fee. Relative to the income of app developers, the licensing fee would have been less than the cost of litigation, and a small business might opt to settle a lawsuit without questioning the validity of the patent. (Lodsys article) In a 2008 case, Vlingo was sued by Nuance over five patents relating to voice recognition technology. Vlingo won the first case at a cost of $3 million, yet ultimately sold its business to Nuance because of the financial risk of defending the four remaining patent suits. (NYT) The small innovator’s litigation blind spot and lack of defensive capacity disincentivize risk-taking and innovation, a result contrary to the objectives of U.S. patent law.

Large corporations developing new software and technology products also occupy a complicated position in software patent law. The well-publicized ‘patent wars’ have dramatized actors such as Apple, Samsung, Google, and Microsoft as assembling arsenals of patents and legions of lawyers. Their deep pockets also attract lawsuits from patent trolls, as we will later examine in SurfCast’s suit against Microsoft. The risk of patent litigation has led them to adopt a policy of ‘patenting everything’, as Steve Jobs famously ordered Apple executives, and the rise of this policy likely contributes to the volume of bad-quality patents pointed out in Stallman’s Wired article. (NYT) From the perspective of a manager or developer within a large corporation, the knowledge that a product may become the target of scrutiny by corporations possessing arsenals of patents undoubtedly creates a chilling effect on risk-taking and innovation. A manager who is not careful might put their job on the line if their product attracts costly litigation.

Taken together, the costs of software patents to small innovators and large corporations alike amount to a so-called patent tax on software and technology products. Boston University researchers estimate that this tax adds as much as 20% to companies’ research and development costs. (NYT) At the end of the day, these costs are passed on to consumers in the form of higher prices and fewer products, as innovation is slowed or foregone altogether.

Problems and challenges in resolving patent infringement issues: SurfCast v. Microsoft

There is doubt as to whether current patent law is well suited to resolving software patent disputes. Following the launch of Windows 8, which includes self-updating, dynamic ‘Live Tiles’ displaying content from various sources, a Portland-based company, SurfCast, filed a patent infringement claim against Microsoft, stating that the concept of tiles was covered by its 2004 patent. In its defense, Microsoft holds its own 2011 patent for ‘Live Tiles’.

This brings us to one of the most pressing challenges facing the patent system. The patent office does not determine whether an invention to be patented infringes any prior patents, so it is extremely difficult to know, during the filing process, whether a patent infringes earlier ones. This is why both SurfCast and Microsoft hold patents covering tiles that display dynamic content. When a patent infringement claim arises in such a case, determining whether the two ideas are similar is a complicated problem.

Section 101 of the Patent Act provides that ‘Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.’ This language justifies the patent approval for both companies, as there was innovation in both patent claims. It is not obvious whether the fact that SurfCast never used its idea to produce a product can be used against it, an ambiguity that reflects the inability of current law to satisfactorily resolve software-related patent disputes.

Microsoft cited SurfCast’s patent (6,724,403) in the references of its ‘Live Tiles’ patent, so Microsoft was clearly aware of the technology SurfCast had patented. SurfCast alleges in the lawsuit that Microsoft knew of the ’403 patent as early as 2009, well before Microsoft patented the Live Tiles design. Existing patents are sometimes vague and cover broad domains, making it difficult to know whether someone has already patented a technology you are independently working on. Unfortunately, independent invention is not a defense to patent infringement, and companies are often sued for violating patents they never knew existed.

Advertising Counterfeit Goods: Who is Liable?

By Jenton Lee, Christina Pham, Raymon Sutedjo-The, Vaidy Venkitasubramanian, Andrew Win

What is this article about?

Selling sports apparel can be a lucrative business, as sports fans are typically quite loyal and willing to spend money to showcase their allegiance. Businesses can advertise quite effectively using social media channels such as Facebook. These platforms, however, open the gates to everyone, making it possible for dubious companies to advertise counterfeit goods to potential customers.

Krystal’s NFL Shoppe sells official National Football League apparel and advertises on Facebook. The company is suing Facebook for failing “… to take any measures to curb or stop the placement of fraudulent and illegal ads on its website” and is planning to bring a class-action lawsuit on behalf of retailers and wholesalers of official NFL clothing. Krystal’s also named adSage and DHGate, an ad shop and an e-commerce company, respectively, as defendants in the lawsuit.

This example relates to the issue of liability for content created and/or posted by others, discussed in this week’s reading. We will examine the lawsuit and try to determine its validity by applying Section 230 of the Communications Decency Act (CDA), as well as by comparing it to past cases such as Fair Housing Council v. Roommates.com, Batzel v. Smith, and Carafano v. Metrosplash.com.

Why is it relevant and important?

A big controversy that often comes up in cases involving the CDA is the possible harm that could come to users, because providers are immune from liability for the third-party content they host. In this case between Krystal’s and Facebook, not only are users harmed by having purchased fraudulent products, but Krystal’s also suffers lost profits from advertising its wares at full price. This raises concerns about who is ultimately liable for the harm that flows from third-party content providers through uninvolved service providers.

But is Facebook truly uninvolved? According to the CDA, service providers are immune from liability for content provided by third-party sources. To answer the question in this case, one needs to examine whether Facebook is an active participant in publishing and distributing content. Krystal’s accusation is that Facebook is more than just a passive party because of its sophisticated data-mining technology, which can route targeted ads to a certain user base. Krystal’s believes that the ability to target ads to certain users makes Facebook an active publisher of content, since its “Create an Ad” feature offers pre-set options for targeting groups of users.

This is akin to the Fair Housing Council v. Roommates.com case, in which Roommates elicited discriminatory content through its form: subscribers filled out content based on the structure of what the form was asking for. This violated the Fair Housing Act, under which landlords and sellers of housing properties are not allowed to discriminate against renters and buyers. Unlike Roommates, however, Facebook does not force advertisers to fill out very specific elements of an ad, and Facebook itself has no control over whether the information that third parties provide is fraudulent.

What questions does it raise about the law and what problems arise from this law’s interaction with technology and society?

Looking at previous cases, we have seen that immunity was often granted to big companies like eBay and Google, which raises the question of whether the law unfairly protects big companies from liability for such activities. Even if Facebook can’t be held liable for publishing fraudulent advertisements on its pages, the question of who will protect users from harm remains.

Facebook does not choose to display or promote a particular advertisement, and this might be enough to grant it immunity. If it receives two ad requests, one for fraudulent jerseys and one for authentic jerseys, Facebook makes no editorial decision to promote one over the other; it is the user who chooses to click on an ad. But one must consider the role Facebook plays in influencing the user. The mere association with Facebook might lend credibility to a posted ad, leading the user to choose a counterfeit product.

Section 230(c) of the CDA says that the provider of an interactive computer service (Facebook) shall not be treated as the publisher of any information provided by another content provider:

Treatment of publisher or speaker–No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

In this case, adSage is the company that managed the ads, and DHGate is the distributor. One or both of them may be liable for creating the fraudulent content, but we lack the information to determine which. Ultimately, the harm is done to the user, and the law does not do enough to protect the user from it.

How might the problem be decided if the law was interpreted in the same way as it was in the legal opinion/materials covered that week?

Perhaps the most important test that pertains to this case is the three-pronged test which determines whether or not the defendant is granted immunity from liability under section 230 of the CDA. In order to gain immunity, the defendant must satisfy each of the three prongs.

The first requirement of the three-prong test is that the defendant must be a “provider or user” of an “interactive computer service”. Facebook fulfills this first part because, as an “interactive computer service”, it provides a platform for users to interact and connect with each other and for advertisers to reach out to a large user base.

The second part of the three-prong test requires that the plaintiff perceive the defendant as the publisher or speaker of the harmful information at hand. In this case, the plaintiff, Krystal’s, argues that Facebook, the defendant, is “more than a ‘passive party’ to the ads because the company allegedly makes use of sophisticated data mining technology to assist the counterfeiters” (MediaPost Article). In other words, Krystal’s claims that Facebook is a publisher because it uses its data to specifically target the counterfeit ads to users who would normally buy official jerseys.

The final component of the immunity test requires that “the defendant must not be the information content provider of the harmful information at issue” (Wikipedia – Section 230 – CDA), i.e., the offending information must come from a third party. Here, since Facebook did not provide the information regarding the counterfeit jerseys, it is not reasonable to label it the information provider. Rather, it was adSage and DHGate that acted as information content providers, with Facebook acting as the platform on which the two companies could advertise their information.

As a result of fulfilling each of the three prongs as required by the courts, Facebook should be provided immunity from liability as stated by Section 230 of the CDA.

Legal Precedent

Additionally, we can draw guidance from the legal precedents set in earlier cases dealing with Section 230 issues. Three key points made in Batzel v. Smith relate to the issue at hand:

  • …if the tipster tendered the material for posting online, then the editor’s job was, essentially, to determine whether or not to prevent its posting—precisely the kind of activity for which section 230 was meant to provide immunity.
  • …any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230.
  • If the editor publishes material that he does not believe was tendered to him for posting online, then he is the one making the affirmative decision to publish, and so he contributes materially to its allegedly unlawful dissemination. He is thus properly deemed a developer and not entitled to CDA immunity.

Facebook’s entire advertising platform is built on the intention of having third parties post their ads online. If Facebook started making decisions as to which ads should be posted and which should not, it would run the risk of “making affirmative decisions to publish,” which would make it a developer. In Batzel, the court made clear that if the defendant is seen as a developer, that is grounds for revoking any immunity from liability. But since Facebook makes no editorial decisions in determining which ads to post, it remains neutral and maintains its status as an interactive computer service, not an information content provider. As a hypothetical scenario, if Facebook were to actively promote the counterfeit NFL jerseys over the real jerseys, then it could be argued that Facebook is indeed acting as a “developer” and should not be granted immunity under Section 230.

To further drive the point home, we reference Carafano v. Metrosplash.com. In that case, the court noted that the dating website (Metrosplash) was “merely a passive conduit” and “did absolutely nothing to encourage the posting of defamatory content.” Furthermore, the court deemed that Metrosplash was not a content provider but just an interactive computer service. With this distinction, Metrosplash was granted immunity under Section 230 of the CDA.

With respect to the counterfeit NFL jerseys, we argue that Facebook has done nothing to encourage the posting of fraudulent content. In fact, nowhere in the ad-creation form does Facebook ask whether the advertised item is fraudulent. As there is no way of distinguishing counterfeit items from authentic items on the form, we conclude that Facebook takes a neutral stance, strengthening the claim that Facebook is just a “passive conduit” for advertisers creating ads on Facebook.


On one hand, Krystal’s NFL Shoppe has a legitimate interest in trying to stop other companies from selling counterfeit goods. But applying the framework of Section 230 of the Communications Decency Act and the legal precedent of prior cases, we think the court deciding this case will most likely rule in favor of the defendants.

Your Printer is Like iTunes: DRM & the 3D Printing “Revolution”

By Fred Chasen, Isha Dandavate, Sydney Friedman, Vanessa McAfee, Morgan Wallace

So how is your printer like iTunes? Well, it isn’t yet. But as 3D printing has entered the market, it opens up a host of legal questions that are similar to those faced by applications and hardware companies that facilitate the consumption of mp3s. So, just as the courts use past precedents to address unfamiliar issues brought about by new technology, we consider previous approaches to DRM and personal use to shed light upon the opportunities and potential copyright issues brought about by 3D printing.

What’s the issue?
Advances in 3D printing and modeling have made it easy to use a personal computer to create physical objects. With these advances there is a growing market for “recipe” files that a person can download and use as blueprints for printing. The issue, then, is whether the sharing and use of these downloaded recipe files could infringe copyright, and how digital rights management (DRM) should or could be leveraged to balance consumer expectations against vendors’ copyrights.

The exact way in which the DRM will function for these recipe files is still mostly theoretical. Intellectual Ventures has been awarded a patent on 3D printing rights management, but the company has not actually created a DRM system. From their track record as alleged “patent trolls,” it seems that Intellectual Ventures is banking on interest from other companies who will license the patent to create such a system.

Why is this important?
As 3D printing penetrates the mass market, it could potentially impact the market for a wide range of products, assuming that anything from guns to baby strollers could be printed. Giving the mass market an ability to produce objects once available only through manufacturers and retailers raises questions around copyright, DRM, and expectations for personal use.

The question also arises, what happens when a printer is able to make derivative works? This problem arose in this week’s assigned reading, when the Kindle 2 added a text-to-speech feature that could potentially replace the audiobooks market. In the context of 3D printing, would DRM prevent users from excerpting or transforming parts of the recipes? And does this functionality fall in the realm of reasonable expectations for personal use?

As we have seen in the entertainment space, efforts have been made to outline reasonable expectations of personal use. Mulligan et al. argue in “How DRM-Based Content Delivery Systems Disrupt Expectations of ‘Personal Use’” that music DRM has handicapped personal and fair use by limiting sharing and other functionality necessary for engaging with music. There is thus a pressing need to evaluate reasonable expectations of personal use in the context of 3D printing, to ensure that copyright interests are reconciled with consumer expectations.

Arguments & Implications

The scope and nature of 3D printing is arguably larger and more varied than that of music consumption: there are a limited number of ways in which people can consume music, but with 3D printing, the types of objects that could be printed are potentially infinite. Accordingly, expectations of personal use of 3D printers will vary depending on what is being printed, which makes it difficult to assess reasonable expectations of personal use for 3D printing as a whole.

We therefore narrow the issue and consider, specifically, protection of the copyrights on the files (recipes) used to print, rather than on the printed objects themselves; considerations from DRM in the entertainment industry can then be applied here. DRM, if implemented for these files, would likely prevent unauthorized sharing of a recipe, limit the number of times a recipe can be printed, and limit the number of systems the recipe can be used on, just as music DRM does now. We argue that any future DRM system should be implemented in a way that is consistent with consumers’ expectations of personal use, which, as defined by Mulligan et al., include portability of files, excerpting and modifying files, and a limited relationship between users and copyright holders.

It is reasonable to expect that consumers will want to treat their 3D design files as they do other files on their computer. Portability will be important, as consumers might expect to be able to e-mail designs to friends and family, store design files on their mobile phones or cloud storage systems, or use multiple 3D printers. It is also reasonable to assume that consumers may want to modify designs by changing physical properties of an object, such as its size or material. If DRM is implemented, designers should consider these expected personal-use cases and implement the system in a way that prevents users from unknowingly violating copyright law.

Mulligan et al. also discuss the shift in privacy under new DRM-based systems. Instead of purchases being nearly anonymous to the copyright holder, as they are in cash purchases from a distributor, information is collected directly by copyright holders. In a DRM-based environment, copyright holders construct business models around selling licenses, not physical goods. The area of digital printing could easily go this way as well: if DRM is implemented, users will have to go directly to the copyright holder for permission to print an object. Copyright holders will soon be able to evaluate, gather statistics on, and tailor their models to this new market, which will surely clash with previous distribution models. Consumers will therefore have to adjust to new expectations of privacy under the new copyright-holder-led model.


As argued by Mulligan et al., DRM must be refined to reflect the balance of copyright law. While courts wish to further the progress of science and the useful arts by allowing authors to protect their work, there are also limitations on the scope of copyright protection for the sake of consumers. A system better tailored to suit the interests of both parties would allow file sharing between parties (in a limited, reasonable way), allow “unlimited” file copying in ways that still impose time and energy costs on users, allow transformation of the recipes created, and also protect user privacy.

Like the development of the mp3 player, 3D printing will make possible a host of ways in which users and copyright holders can interact and transact business. However, it could also encourage copyright holders to implement stringent limitations on user experience. By more clearly defining legally reasonable expectations for personal use, technologists, policy makers and the courts will protect the balance of copyright law.

The Tussle Around Fair Use: Is Pinterest On The Right Track?

Prabhavathi Matta, Bharathkumar Gunasekaran, Ignacio Pérez, Shaohan Chen, Eunkwang Joo

Images are the next big thing on the Internet. With constraints on the number of characters on other social networking sites, images are a succinct way to express yourself. It is precisely for this reason that Pinterest has gained widespread prominence as an online image-sharing network. Users pin their favorite photos, found online or uploaded themselves, to share with other Pinterest users, who then re-pin them in turn. Nielsen reported that Pinterest had nearly 16 million unique visitors in January in the United States alone, double the number it had in November of last year. Most of the site’s users are female.

After extending its market reach, it is natural for a web startup to figure out a way to bring in revenue. In Pinterest’s case this has already begun, with pins linking to commerce websites and Pinterest getting a share of the referral. But what happens when the image a user pins is copyrighted by an artist? As one can imagine, the site’s utility opens it up to many copyright violations by its members, causing headaches for photographers and artists. Here we can draw a parallel to the Perfect 10 v. Google case. In this blog we will strive to find parallels between Google’s and Pinterest’s usage of images and the potential implications. It is also important to note that there is no lawsuit against Pinterest at this time, but the nature of the Pinterest service raises many legal questions, which we discuss below.

Like Google’s image search service, Pinterest displays copyrighted images without the direct approval of the authors and, most of the time, without even their knowledge. But Pinterest is already looking for ways to avoid legal problems, and several changes have been made to its terms of service. In March of this year the website changed its terms of service in a pro-consumer way: it will no longer sell users’ images, but it now delegates all responsibility for copyright infringement to the user.

To establish direct copyright infringement, a plaintiff must prove two elements: (1) ownership of a valid copyright, and (2) violation of one of the exclusive rights granted under copyright. Given that Pinterest arguably infringes directly, in that it displays the full-size images hosted by third-party websites, it is important for Pinterest to analyze its service in terms of possible vicarious and contributory liability theories. But both “fair use” and “public domain” are gray areas, making it difficult to determine whether Pinterest is liable for infringing the rights of others.

The fair use doctrine permits use of copyrighted material for various purposes – news reporting, search engines, research, teaching – the main purpose being the stimulation of creativity for the enrichment of the general public. For now, we can view Pinterest as a search engine for images, since it is not yet promoting any specific images for profit.

If we analyze Pinterest from the perspective of the fair use factors, it has some favorable points, but others clearly fall into gray areas. For example, under the purpose and character of the use, Pinterest asks users to comment on an image while pinning it. The question arises whether the simple act of re-pinning and commenting on a work can be considered creative. We believe that this “act of commenting on the pin” does not add any transformative value to the picture. And there is a huge difference from other types of comments: when someone reviews a movie, the comment does not involve a full reproduction of that movie, but on Pinterest there is a full reproduction of the photograph.

In the Perfect 10 v. Google case, the court ruled that the scaled-down thumbnail images used in Google Image Search constituted fair use. Pinterest shows thumbnails of images in the boards that any user can create, but after clicking any of those thumbnails, a full version of the image is displayed. In this full reproduction, there are no clear references to the ownership or copyright of the images. Moreover, when the original image is deleted, the copy on Pinterest’s servers is not.

Regarding whether Pinterest’s use of photos is commercial or noncommercial: Pinterest’s business model does not include AdWords or AdSense, but it is building a huge database of user opinions about copyrighted images, and this falls into a gray area on two fronts: transformative work versus commercial use of that database. Pinterest’s goal of building a database of images annotated with user opinions is far narrower than Google’s goal of organizing the world’s information. And because the use of the images in this way could be interpreted as direct use, this could be a hard point for Pinterest’s lawyers to avoid.

In terms of the nature of the copyrighted work, it is easy to see photography as art, and we know that courts tend to give greater protection to works of art. This factor weighs heavily against Pinterest: since images can be considered artistic expression, this factor cuts against a finding of fair use.

With respect to the amount and substantiality of the portion used in relation to the copyrighted work as a whole: in Perfect 10 v. Google, the court also agreed that including an inline link is not the same as hosting the material yourself. In the case of Pinterest, images are copied in full, and the Pinterest image does not differ at all from the original. This, too, weighs against a finding of fair use.

Finally, we should consider the effect of the use upon the potential market for, or value of, the copyrighted work. It is not easy to establish the potential market for an image. Pinterest’s most plausible business model aims to find out what interests users and offer them the right product, but an image can have different artistic purposes. Is Pinterest impeding the artistic intention? The most likely answer is that Pinterest is not interfering with the original purposes of any photograph, but neither is it compensating for the use of the photo in a different context.

According to Pinterest terms of use, the user is responsible for what is posted:

You acknowledge and agree that, to the maximum extent permitted by law, the entire risk arising out of your access to and use of the site, application, services and site content remains with you.

This part of the terms of service links Pinterest’s possible legal problems to previously “convicted” websites dedicated to letting users share information, like Napster. Pinterest knows that its service can be used for copyright infringement, so it arguably falls into the category of “facilitating copyright infringement.”

Regarding vicarious infringement, in the case of Perfect 10 vs Google, the District Court states,

To prove vicarious infringement, P10 must show (1) that Google enjoys a direct financial benefit from the infringing activity of third-party websites that host and serve infringing copies of P10 photographs and (2) that Google has declined to exercise the right and ability to supervise or control the infringing activity.

The court goes on to compare the case with Napster. On the first element of vicarious infringement, the court concluded that Google clearly benefits financially from third parties’ display of P10’s photos through its use of AdSense. On the second element, the key point is “the ability to block infringers.” Napster’s architecture was a proprietary, closed-universe system, not an open, web-based service: it could control content, and once it removed infringing content from its own service, that content became inaccessible. In contrast, Google cannot control the whole Web; it can only remove links to infringers, while the infringing site itself remains accessible by other means. Nor can Google analyze every image for infringement. Lacking the “right and ability to control” the infringing activity of others, Google is not liable for vicarious infringement.

Returning to Pinterest: even though it earns no financial benefit from operating the website for now, Pinterest may generate revenue later. From P10 v. Google it can be deduced that such revenue would count as a direct financial benefit if the website contains infringing images. On the second element, like Google, Pinterest cannot analyze every image on its website for infringement; the number of images grows too quickly for Pinterest to identify each one. Pinterest, however, is a “closed-universe” system, as Napster was. It stores images and other metadata on its own servers, so it can handle any content on its website, and if Pinterest removes an image, the copy on its servers becomes inaccessible. On this point, Pinterest would have the responsibility to delete infringing content when required to block it. Hence, if Pinterest begins to earn financial benefits from infringing content, responsibility for vicarious infringement will follow as well.

The ease with which information can be copied over the World Wide Web makes it difficult to protect copyrighted content from infringement. The blurry nature of the contours of laws governing copyrighted content on the net poses a lot of challenges for companies whose business model revolves around user generated content.

The main difference between Google and Pinterest is their ability to control content. A service that has this ability is assumed to have the responsibility to control infringement. For this reason, businesses that depend on user-generated content may prefer to provide their service without storing the content, in order to avoid risk. Another strategy is to cooperate with copyright holders and share profits by advertising the stored content.

Finally, to think about the current and potential problems arising from copyright law’s interaction with Pinterest’s technology and society, we analyze these problems from the points of view of the different types of stakeholders in Pinterest’s world. Three groups of players form the Pinterest ecosystem: 1) content owners, who own the copyrights to the content; 2) Pinterest users, who use Pinterest’s service to create collections of images they like; and 3) the Pinterest team, who provide the service and run the business.

There are three main types of copyright owners in Pinterest's case: Pinterest users, professional photographers, and businesses such as e-commerce retailers. Pinterest users and businesses are generally willing to pin their own content, to show it to the world and to reap benefits such as acquiring more "Likes" or attracting potential customers to purchase products through the linked images. For professional photographers, however, the case is different. Since most professional photographers sell their work to sustain their business, Pinterest's service risks decreasing people's willingness to purchase that work. Unlike the other thumbnail cases we will discuss this week, the images on Pinterest's website are of much higher quality and resolution, and can sometimes be reused if users intend to do so.

Even though Pinterest has provided a solution for professional photographers who want to prevent users from pinning their work (inserting some HTML code Pinterest provides into their web pages), questions remain, since photographers cannot prevent users from taking screenshots of their work and then posting those images to a Pinterest board with the upload tools Pinterest provides. Besides, in the long term, we also wonder whether other legal and/or ethical problems will arise once Pinterest switches its business model and tries to profit from this content.
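The opt-out Pinterest offers site owners is a small piece of markup placed in a page's head. We have not verified the exact syntax against Pinterest's current documentation, but as we understand it the tag looks roughly like this (a sketch, not an authoritative reference):

```html
<!-- Hypothetical sketch of Pinterest's opt-out meta tag; the exact
     attribute names and values are defined by Pinterest's own docs. -->
<head>
  <meta name="pinterest" content="nopin" />
</head>
```

When the "Pin It" bookmarklet encounters such a tag, it declines to pin images from the page. As noted above, though, this only blocks the official pinning tools; it cannot stop a user from taking a screenshot and uploading the image directly.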

After all, copyrighted content plays a really important role not only in the Pinterest world but also in the real world, and maintaining a fair and sustainable ecosystem of sharing will be the next big challenge for this fast-growing company.

Digital Copyright in the Internet Age – Google, Publishers Settle Book-Digitization Dispute

Authors: Luis Aguilar, Ajeeta Dhole, Kate Hsiao, Matthias Jaime, Lisa Jervis


This week, publishers and Google reached a settlement in their long-running copyright infringement case.

Google Books, formerly known as Google Print, is a service from Google Inc. that searches works Google has scanned and stored in its database. Back in 2005, writers and publishers sued Google, claiming that digitizing the world's books infringed their copyrights. Google countered that scanning and indexing the copyrighted works without the copyright owners' consent should be seen as fair use.

After years of debate, Google and the publishers finally reached a settlement. The settlement allows U.S. publishers to choose whether to make their works available in Google Books or to remove them. For books sold on the Internet, the rights holders will get 67 percent of the take and Google the remainder. (The dispute with authors is still outstanding.)

Why It’s Important

The most immediate consequence of this settlement is that it will allow Google to start selling digitized books on Google Play and to display 20 percent of a book's text. The agreement will include works from some of the biggest names in publishing, including McGraw-Hill Companies, Pearson Education, Penguin Group, John Wiley & Sons and Simon & Schuster. For consumers and copyright holders, then, this result would seem mutually beneficial.

However, this settlement is an extension of a previous agreement rejected in federal court last year. In that agreement, the issue that led to government intervention concerned orphan works, books whose copyright holders cannot be found. Under the original settlement, Google claimed that orphan works, too, were available under fair use, and would have indexed them and made them available for sale online. Judge Chin, who oversaw the settlement, disagreed, stating, "the ASA would give Google a significant advantage over competitors, rewarding it for engaging in wholesale copying of copyrighted works without permission, while releasing claims well beyond those presented in the case."

With this new settlement, the issue of orphan works is still not resolved, nor are Google’s claims to access copyrighted books under fair use. Fair use is defined as the use of a copyrighted work for purposes such as criticism, comment, news reporting, teaching, scholarship, or research, and it is not an infringement of copyright. It remains to be seen if Google can successfully claim its actions comply with the court’s interpretation.

Point Counterpoint

Why the Google Settlement is Bad

Though it doesn't carry the weight of law or legal precedent, the overall message of this settlement is that Google is an 800-pound gorilla that can successfully leverage illegal tactics to make business deals. Perhaps even more important, its contention that the Library Project scanning was fair use will not be tested in court. And the fair use argument here is unconvincing: Section 107 paragraph (1)'s test of the purpose of copying is not met, since adding massive amounts of valuable copyrighted content to its search results clearly serves a business purpose (a contention buttressed by the part of the settlement deal allowing Google to sell the digitized works in the Play store). Neither is paragraph (3)'s metric of the amount of the work used met; a strong argument can be made that twenty percent of a book is a substantial portion. Paragraph (4), concerning effect on the market, also poses a problem for Google's argument, as providing that much content for free surely diminishes customer demand.

An actual court test of this argument would have been helpful for copyright holders seeking to maintain control over their content, control that is under constant threat from the fast pace of technological change. If decided properly, it would have established a legal environment that encourages content-sharing partnerships negotiated up front, rather than an approach that rewards infringement with a better negotiating position.

Why the Google Settlement Is Good

While the publisher-challenged "fair use" scope of the lawsuit may have been too broad, since it included every type of book, we cannot disregard the helpfulness of Google's scanning and indexing technology for one very specific fair use purpose: education. Without such indexing, when (for example) a student struggles with a specific difficult programming concept, he or she generally has only the class's assigned book to aid in understanding it. If the book doesn't help, finding other reference material is a tedious process, owing to the specificity of the concept, the lack of adequate search capability for reference material, and cost. 17 USC § 107 states "…the fair use of a copyrighted work, including such use by reproduction in copies…for purposes such as…scholarship, or research, is not an infringement of copyright." Google's scanning project provides a technology that not only makes it easier to find relevant material, thanks to more powerful indexing and the breadth of what's available, but also allows review of a reference work's introduction and index, which gives a much better indicator of the depth of its explanations and its intended audience. This is not to concede that Google's "fair use" defense is a valid one, since its scanning and indexing technology was created to bust into the e-book market like Kool-Aid Man into a late '70s living room. But it is a strong defense of the technology itself, which replaces woefully inadequate methods and helps students find relevant material across business, library and publisher borders.


While the settlement has broad implications in the overall message it conveys, it is important to recognize that it is limited to the works of the select publishers that are party to it. (The dispute with some authors, through the Authors Guild, is still ongoing.) It allows Google to continue digitizing the works of those publishers, gives the publishers the option of not making their works available to Google, and possibly establishes a business contract that allows both parties to benefit commercially from the sale of digitized books via the Google Play store. The Google Play store gives the publishers an established and reputable platform as an additional avenue to market and sell their works for improved sales and profits. The settlement does not give Google the legal right to monetize copyrighted works outside its purview. The publishers involved are heavyweights of the publishing world, and had they so chosen, they could have contested the case further, on an equal footing with the behemoth that is Google. Also, while Google has a head start in the digital publishing landscape due to its digitization efforts ongoing since 2005, this cannot be termed an unfair advantage, as this and similar settlements do not preclude any Google competitor from negotiating similar or better deals with the publishers.

However, as discussed earlier, the settlement also means a lost opportunity to legally test and answer some important questions around the digitization of copyrighted works: whether Google was legally justified under the "fair use" clause in its unauthorized reproduction and digital publishing of 20% of these copyrighted works, especially given that Google is a for-profit company and that, regardless of its stated intent of creating "a universal digital library with a searchable index for a wealth of new information", indexing the information contained in the books would clearly improve its search results and benefit it commercially. Nor does the settlement resolve this dispute for works not included under it, or provide legal answers on who holds the rights to "orphan works", whose copyright holders are unknown or cannot be found. We look to the pending Authors Guild v. Google litigation to provide more definitive legal answers on these issues.

Privacy Rule to Protect the Young

Week of 10/1 On-Call

Authors: Divya Anand, Wendy Xue, Sayantan Mukhopadhyay, Charles Wang, Jiun-Tang Huang

The New York Times article "U.S. Is Tightening Web Privacy Rule to Shield Young" reports that federal regulators are about to make big moves to protect children online. Children's advocates claim that major corporations, app developers and data miners appear to be collecting information about the online activities of millions of young Internet users without their parents' awareness. We examine this issue through three questions, drawing on week 6's IS205 readings on privacy and consumer protection approaches.

Are the companies following deceptive practices?

In considering whether a site is being deceptive, there are two main points to consider: (i) the perspective of "reasonableness" and (ii) "materiality", in terms of whether the practice affects a consumer's decision to use the product or service.

We know that McDonald's does not ask for parental consent when it invites children to enter email addresses. The greatest concern here is that children are providing the email addresses of their friends, which means many addresses are obtained without any form of consent from their owners. A lack of parental consent violates COPPA (the Children's Online Privacy Protection Act of 1998), which requires operators of children's websites to obtain parental consent before they collect personal information from children under 13. It is reasonable for children to believe that what they enter on a children's website carries no further implications. Children share their friends' email addresses on the McDonald's site because they believe this is simply how they share their online creations with friends. They have no access to privacy statements, nor do they know what will be done with the information they provide. If parents knew what McDonald's was doing, they surely would not want their children to access its website; but for lack of information, they cannot prevent their children from visiting it. Hence the second condition for deceptive practice is also fulfilled.

Given the two arguments above, it is clear that McDonald's is being deceptive. In the FTC's complaint against Google, we saw the FTC determine that Google's practice was deceptive because the company publicized Gmail users' private information on Google Buzz without their consent and did not disclose its privacy policy clearly and conspicuously. Similarly, McDonald's did not disclose that it requested children's email addresses and used that information for other purposes, including subscribing kids to a mailing list.

When dealing with children, who certainly do not understand the implications of their online actions, the FTC has to step in to ensure that parents' expectations about their children's online privacy are not ignored.

Are the companies following unfair practices?

We may also argue that the steps taken by McDonald's and some other companies to collect personal information about children under thirteen, without the consent of their parents, constitute an unfair practice. Children are most likely unable to make informed decisions about the products and services they choose.

When a site collects children's email addresses, it is impossible for children to be fully aware of, let alone understand, the fact that those addresses are used to track their online footprint and can expose them to targeted marketing campaigns. They are also unaware that the information could be accessed publicly, thereby potentially exposing them to pedophiles. For example, a photo uploaded to the site could give away where they live, how old they are, or which school they attend. Publicizing this type of information exposes these children and their families to possible crime near their locations. The potential for physical harm to the children is rather high.

Apart from asking children for their own details, the site asked them to give away information about their peers. This is highly unfair, because children may not be able to comprehend the impact of breaching the privacy of their social group. On top of that, again, no parental consent was requested!

Are consents from children meaningful at all?

When parental permission is not required for older children, it is questionable whether the consent they give over privacy disclosures is meaningful at all. In the FTC's complaint against Sears Holdings Management Corp. regarding Sears' online tracking software, the FTC argued that even with a clear and comprehensive privacy disclosure, like the one Sears had, consumers could still misunderstand the extent of the online activities being tracked; the FTC therefore deemed Sears' practice deceptive. Today, many popular websites for children use tracking technologies to analyze children's browsing behavior. In light of the Sears example, even if we assume the privacy statements of children's websites are disclosed to children, we must ask how many children will actually be able to understand the terms and conditions, or the implications of having their activities tracked. Additionally, children are prone to give consent easily and carelessly when there are incentives, such as money or store credit, for signing up for a service with their email address. It is troubling to think that some companies may exploit this trait to obtain private information from children for deceptive purposes.


In dealing with companies that breached consumers' privacy, as when Facebook released users' private information to third-party applications, and when Google publicized Gmail users' information on Google Buzz without their consent, the FTC has typically ordered the companies to: (1) designate an employee to oversee the implementation of a privacy protection program; (2) conduct periodic risk assessments for privacy breaches; and (3) explicitly and conspicuously ask for users' consent before using covered information for any purpose other than the one originally stated. Where the users are children, the FTC should scrutinize companies and ensure that children's online privacy is protected, likely with even stricter processes and procedures.

FTC and Unethical Privacy Practices

Early in the morning, you sit in front of your laptop and open a news website. Between the moment you click and the time the first story is displayed on your screen, hundreds of companies have already tracked your action and will present you with advertisements and suggestions based on your activity. In this digital age, the whole process takes a fraction of a second.

As users, we move through our Internet experiences unaware of the churning subterranean machines powering our web pages with their cookies and pixel trackers, their tracking code and databases. What companies are doing to gather all this information is nothing but glorified spyware activity. "What we called spyware in the past has become a standard business practice on the internet today." Google finds itself in the middle of just such a controversy, and the question arises whether the act is unintentional or whether companies are indeed using technology to sneak into users' private lives. Facebook, too, has been accused multiple times of using cookies to track users even after they log out of the service.

Never before in the history of mankind has so much private data been gathered about people for the sole purposes of advertising and gathering statistics.

In the past, privacy meant security. These days, it has taken on an entirely new meaning. Modern technologies affect privacy in whole new ways by making information collection cheap, searchable, sortable, and aggregatable. The ubiquity of personal information available on the Internet via social media sites like Facebook and Twitter has changed the value of such information. To better accommodate these changes, there should be mechanisms for people to control the authorization chain for information about themselves, in concert with institutional policies. In addition, lawmakers need to define clear legal boundaries around information privacy and security.

We cannot apply moral ethics to information itself; instead, institutions like the FTC exist to enforce policies that promote greater cooperation and penalties that discourage defection.

The regulations laid out by the FTC were formulated with the importance of user information security in mind, and for this reason companies must follow them strictly before creating or modifying any of their business practices. However, most of the Internet industry makes its profits by selling space on its websites to e-marketers and advertisers. Customizing advertisements to users' browsing preferences and keywords raises the probability that users will look at those advertisements. To learn users' browsing patterns, search engines need to track their cookies and collect their data. Certain companies take advantage of the fact that this tracking is invisible to the user unless the user changes certain browser settings. Per the FTC regulations on data security, providing sensitive user information to clients is not permissible without the user's consent. Thus, there is a trade-off between company profits and protecting user data.

For certain companies that do not follow the business practices laid out in the FTC policies on deception and data security, however, profits take precedence over the protection of user data, and hence there is a gap between practices and procedures. Companies thrive on profits, so this is not necessarily the company's fault; rather, the nature of the Internet industry is such that companies are compelled to choose between profits and user data protection.

Although the FTC works hard to "prevent fraudulent, deceptive and unfair business practices in the marketplace and to provide information to help consumers spot, stop, and avoid them", there have been many incidents of data breaches ever since electronic transactions became the norm of doing business. Credit card information is a common target of hackers: because this data includes digitized personal information and, more importantly, monetary information, it is highly exposed to potential attacks. Data breaches at card payment processors turn millions of consumers into victims of identity theft, and many of these breaches result in personal information becoming available for sale.

Identity thieves can open a credit card, take out a loan, or even get a job using your credit. There are several ways consumers can protect themselves. By monitoring bank and credit card statements, consumers can spot abnormal activity, and free credit reports are available for keeping track of one's credit. The "Paying It Safe" section of the FTC's "A Consumer's Guide to E-Payments" helps individuals learn safe online shopping practices.

Still, the root of the problem is companies that deceive consumers and prey on their lack of technological understanding. In recent years, the FTC has approved settlements with three major companies: MySpace, Facebook, and Google. These settlements were prompted by the deceptive practice of online service providers engaging in privacy practices that deviate from their written privacy policies. Through these actions, the FTC conveyed a clear message about the importance of alignment between what companies say they do and what they actually do. Neither Google nor Facebook nor any other company is under any obligation to provide detailed information about its privacy guidelines. But making the effort to provide such information is exactly the sort of forward thinking on transparency that the FTC is encouraging.

Despite the FTC's clear message that companies need to be responsible and accountable for deviations from their written guidelines, it is questionable how the message will be received by companies, especially after Google's record-setting settlement. Unfortunately, looking at the current FTC settlements, companies may conclude that their public privacy policies are better off vague and unclear. Moreover, Congress's reluctance to pass legislation in this arena, the great potential for harm if legislation is done the wrong way, and the rapidly evolving nature of privacy, human behavior, and consumer expectations all make it harder for companies to do the right thing while minimizing risk and supporting their business.

At the same time, there are dishonest companies intentionally causing privacy harms and making money off such abuses. While there are important policy grey areas to resolve, sometimes by bringing cases, going after willful, aggressive abusers whose malicious conduct causes actual privacy harms would be a much more appropriate use of the FTC's time and limited resources.