Consumer Reporting Agencies Pay $1 Billion for Violating the FCRA

The FCRA in the Human Resources Industry

As unemployment rates remain high, companies are increasingly relying on background checks to separate the wheat from the chaff. Consumer reporting agencies (CRAs) offer a quick and easy means of obtaining accurate, up-to-date information about a potential employee.

The Fair Credit Reporting Act (FCRA) has, for the past forty years, set regulatory policies that govern primarily who has access to your information and how that information may be used. It requires that the CRAs:

  1. Limit access to your information
  2. Give your consent before providing your information to the employer
  3. Investigate disputed information
  4. Correct/delete inaccurate information
  5. Delete outdated information
  6. Disclose your credit file and information upon request

Following a rise in errors by some of the biggest CRAs, the Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB) have tightened rules over the human resources industry. In the past two years alone, LexisNexis, HireRight, Sterling Infosystems and General Information Services have been found reporting inaccurate candidate information in violation of the FCRA:

The Real World Impact – King v. General Information Services

For the average citizen, what tangible harm can FCRA violations by CRAs cause in daily life? The recent case of King v. General Information Services Inc. illustrates the potential problems.

When Ms. King was 19, she was in her motor vehicle with her cousin and her cousin’s friend when a police officer stopped the vehicle. Ms. King’s cousin and his friend were charged with and convicted of motor vehicle theft arising from a series of car thefts from area automobile dealers. Ms. King had no knowledge of the alleged thefts, but because she was in the men’s presence at the time of the stop, she was charged with eleven counts of motor vehicle theft. Without the benefit of a lawyer, Ms. King pled guilty to one count of criminal conspiracy to commit theft by unlawful taking of movable property and was sentenced to three years’ probation.

A decade later, Ms. King applied for employment with the U.S. Postal Service. The Postal Service ordered a background check on Ms. King from GIS, which improperly disclosed the ten dismissed charges. Any preparer of background check consumer reports that maintained strict procedures designed to omit outdated adverse information and assure complete, up-to-date information would have known that it was no longer appropriate to report the outdated arrest record. GIS disclosed it anyway.

Gezahegne v. Whole Foods California Inc., U.S. District Court for the Northern District of California

Issue: On February 7, 2014, Esayas Gezahegne sued Whole Foods in the U.S. District Court for the Northern District of California, alleging that the company violated the FCRA by using invalid forms in its background verification process. The complaint also challenges the company’s application process, stating that Whole Foods requested routine background checks prior to obtaining consent.

Facts: The plaintiff had submitted an online application on or before April 7, 2011, that required him to sign a form labelled “consent,” releasing from liability the companies receiving or providing information about the candidate:

“I hereby authorize Whole Foods Market to thoroughly investigate my references, work record, education and other matters related to my suitability for employment and, further, authorize the references I have listed to disclose to the company any and all letters, reports, and other information related to my work records, without giving me prior notice of such disclosure. In addition, I hereby release the company, my former employers and all other persons, corporations, partnerships and associations from any and all claims, demands or liabilities arising out of or in any way related to such investigation or disclosure.”

The document also contained several other questionable paragraphs, such as an affirmation that the applicant has not knowingly withheld any information, an acknowledgment that the application for employment does not create an employment contract, and a statement that the applicant waives receipt of a copy of any public record. The plaintiff was asked to sign a different, seemingly valid consent form later in the process, but it was not signed until after the background check had already begun.

Damages: Gezahegne alleges violations of the FCRA for which damages could range between $100 and $1,000 per violation, plus punitive damages and standard remedies.

Aggressive FTC Legal Enforcement

Instant Checkmate – Liable for $525,000 for failing to maintain reasonable procedures to ensure that those using its reports had permissible purposes for accessing them, failing to follow reasonable procedures to assure that its reports were as accurate as possible, and failing to provide FCRA-mandated “User Notices” outlining several important consumer protections. The service allowed users to search public records for information about anyone, including a person’s current and previous addresses, arrest and conviction records, and birth, marriage and divorce records.

InfoTrack – Liable for $1,000,000 for failing to maintain reasonable procedures to limit the furnishing of reports to people with permissible purposes and failing to use reasonable procedures to assure maximum possible accuracy of consumer report information obtained from sex offender registry records; and failing to provide written notices to consumers of the fact that InfoTrack reported public record information to prospective employers, when that information was likely to adversely affect consumers’ ability to obtain employment.

HireRight – Liable for $2.6 million to candidates who were misrepresented in consumer reports. The company failed to maintain accurate records, to provide consumers with copies of their reports, and to thoroughly investigate disputed records, thereby violating the FCRA. On behalf of the FTC, the DOJ filed a complaint against HireRight requiring it to discontinue the illegal practices.

LexisNexis – Shortly after similar litigation, LexisNexis paid $13.5 million to settle a lawsuit involving 31,000 people whose consumer reports were sold to debt collectors, violating the FCRA and their consumer rights.

Sterling Infosystems – Has been found liable for FCRA violations and misrepresentation of candidates’ information that cost them jobs on several occasions.

Impact of FCRA violations on the HR Industry

Statistics on the HireRight website state that 53% of job applicants provide false information, that more than 75% of substance abusers are employed, that retail shrinkage due to employee theft totaled $15.9 billion in 2008, and that negligent hires cost the industry about $40 million. If these figures are not inflated, an employer would certainly be tempted to obtain a holistic background check on a prospective employee for as little as $15.

What employers fail to realize is that the information recorded in a CRA’s local databases isn’t always consistent at the state or national level. The result is that employers make wrong hiring decisions and deserving candidates are left jobless.



Understanding the “Hot-News” Doctrine



The “hot-news,” or INS, doctrine originates from the 1918 case of International News Service v. Associated Press (“INS”). International News Service had been forbidden from using American and UK cables to transmit news from the front in France. Instead, it copied the news from the AP’s wires and bribed AP employees to acquire the news, then published it without attributing it to the AP. As the reader will remember from Feist v. Rural, facts cannot be copyrighted; only a specific arrangement of facts fixed in a medium can be. So factual news, so long as its presentation is not fixed in a given medium, cannot be copyrighted.

The Supreme Court agreed in INS that facts cannot be copyrighted, reasoning that the “history of the day”[1] is not copyrightable. Nevertheless, finding for the AP, the Court held that “hot news” can be misappropriated, and that INS had misappropriated the AP’s product. It thereby enjoined the International News Service from republishing news copied from the AP “..for hours after publication by the plaintiff unless it gives express credit to the Associated Press.”[1] Exactly how long news items remain “hot news” was not specified by the Court, which left that to be determined case by case. The impetus behind the Court’s decision in INS was an argument against unfair competition: the AP spent resources acquiring the news, and for INS to acquire it without expending similar resources would amount to unfair competition.

The next major modification to the “hot-news” doctrine was the 1976 Copyright Act. The Act provides a two-part test to determine when it preempts a state law claim of misappropriation.[2 §38] If a court finds that the 1976 Copyright Act preempts a state law misappropriation claim, then the “hot-news” doctrine cannot be applied and the Copyright Act is controlling. If, however, a court finds that the Copyright Act does not preempt the claim, then the “hot-news” doctrine may apply. This is what happened in NBA v. Motorola, where the court found that “..a properly-narrowed INS “hot-news” misappropriation claim survives preemption.”[2 §39] The court then developed its own test of what exactly “hot news” is[3 §69] and applied it, finding that Motorola and STATS did not misappropriate the real-time basketball information.

Hot News Doctrine and the Internet

The Internet affords quick and easy distribution of content. This has enabled content producers to reach much larger audiences at near real-time speed. Unfortunately, for content producers, these same characteristics make it easy for anyone to redistribute content, and content producers can’t do much about it. With some websites, the value of the content lies not just in quality, but also in timeliness and exclusivity.

The 2012 case of Barclays Capital Inc. v., Inc. is one of the latest examples of the battle between news producers and news aggregators. Barclays (along with Merrill Lynch and Morgan Stanley) is a financial services firm that produces research on stocks and provides recommendations to its paying customers. The Firms go to extensive measures to produce these recommendations, as well as to protect them from unauthorized access. Customers pay for this service because it gives them information that no one else has, or at least gives it to them before anyone else., Inc. (“Fly”) is a website that collects and publishes financial news, rumors, and other information flowing from Wall Street via a subscription newsfeed. It describes itself as “a single source internet subscription news service…” that emphasizes quick and comprehensive access.[3] Although Barclays acknowledges the inevitability of some information leaking, it claims that the Fly is one of the most systematic unauthorized publishers.[3] This redistribution of content has resulted in major revenue loss and significant staff and budget cuts. Barclays filed suit for injunctive relief from the Fly’s redistribution of its research and recommendations.

The district court ruled in favor of Barclays, stating that federal copyright law did not preempt its “hot news” misappropriation claim. Applying the five-part “hot news” test, the court found that the Fly’s use of the information met every element of misappropriation. The Fly was ordered not to use these time-sensitive facts until they were no longer “hot,” a period determined to be thirty minutes to several hours after the Firms’ release of recommendations. This was a severe blow to the Internet’s news aggregators, and it garnered high-profile attention from organizations including Google, Twitter, and the Electronic Frontier Foundation. These organizations urged the court to consider the impact of the “hot news” doctrine on the First Amendment, claiming that misuse of the doctrine could stifle the extraordinary growth of free expression.[4] The Second Circuit Court of Appeals overturned the district court’s decision, stating that the five-part “hot news” test was unnecessarily applied. Applying the two-part preemption test, the Second Circuit found that the case satisfied both the “subject matter” and “general scope” prongs, and held that the Copyright Act does indeed preempt Barclays’ misappropriation claim.

Though this case was considered a victory for news aggregators, much uncertainty remains around the interplay between “hot news” and copyright. In Associated Press v. Meltwater (2013), a more recent case, the district court ruled in favor of the content producers;[5] it has yet to be seen how the appeals court will rule. Additionally, Dow Jones has filed a “hot news” misappropriation suit (2014) against Ransquawk.[6] So despite the victory for news aggregators in Barclays, there is still significant interest in using the “hot news” doctrine as a tool to protect the value of time-sensitive facts on the Internet. But in a world where everyone is a content producer and/or distributor, and the lifespan of “hot news” is quickly dwindling, is it really possible for business models relying on time-sensitive, exclusive access to information to survive?

[1] INS v. AP Opinion
[2] NBA v. Motorola Opinion
[3] Barclays Capital Inc. v., Inc.
[4] EFF Deeplinks: Hot News Doctrine on Life Support
[5] Associated Press v. Meltwater
[6] Dow Jones v. Ransquawk

By Andrew McConachie and Matthew Valente

Technology, privacy and the NSA: is it time for the Supreme Court to rule?

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

The Fourth Amendment – US Constitution


The Fourth Amendment has long been held as the citizens’ shield against government intrusion into their private affairs, and for much of its history it has been effective in doing what it was designed to do. Over the years, political and technological changes have created grey areas in Fourth Amendment law, some of which the legal system has been able to resolve. United States v. Jones, 132 S. Ct. 945 (2012) (Jones) and Smith v. Maryland, 442 U.S. 735 (1979) (Smith) were two of the defining cases in settling Fourth Amendment issues regarding the government’s use of technology.

This law has implications for many services that people use in everyday life: mobile phones, email, cloud storage, and so on. However, recent years have seen technology evolve at a very rapid pace, and legal, privacy and technology experts had been calling for clarity on how the law applies to these technologies even before the NSA leaks by Edward Snowden. Several frameworks have been suggested to resolve some of this ambiguity (see Urquhart, 2010 and Digital Due Process).

As unfortunate as the NSA leaks are, they may have created an opportunity for the legal system to finally resolve some of these Fourth Amendment issues. Since the leaks came out, several suits have been filed alleging that the U.S. Government violated constitutional rights. In December 2013, two cases decided at the District Court level sent alarming signals to users, businesses, and legal experts: both used similar rationales and references to reach contrasting conclusions. We explore these two cases below and discuss what might come next.

Klayman v. Obama

In the first case, Klayman v. Obama, No. 13-085, the plaintiff brought suit against the NSA, claiming that the government is collecting bulk metadata (phone numbers, time, date, and recipient of each call) from the telecommunications companies. The District Court relied heavily on distinguishing the circumstances of Smith, with consideration of the Supreme Court’s recent decision in Jones, which held that the warrantless installation of a GPS device on a person’s car was an unreasonable search.

In Smith, the Supreme Court held that collecting metadata via a pen register, without a warrant, was not a search because one does not have a reasonable expectation of privacy in data voluntarily given to a third party. The District Court distinguished Smith from the present case, however, by pointing to the extent of the data collection and emphasizing the exponential evolution of cell phone use, which was not a factor in Smith. This led the District Court to hold that the bulk metadata collection is indeed a search under the Fourth Amendment because the plaintiffs have a reasonable expectation of privacy.

Furthermore, the District Court argued that the plaintiff is likely to succeed on a claim that the bulk metadata collection is unreasonable because it fails the efficacy prong of the reasonableness test. The test requires analysis of (1) “the nature of the privacy interest allegedly compromised” by the search, (2) “the character of the intrusion imposed” by the government, and (3) “the nature and immediacy of the government’s concerns and the efficacy of the [search] in meeting them.” Bd. of Educ. v. Earls, 536 U.S. 822, 830-34 (2002). The District Court supported this claim by emphasizing the government’s lack of evidence that collecting this metadata actually thwarted any terrorist threat, or that it produced results more rapidly than conventional methods of investigation.

On January 3rd, the US government appealed.

ACLU vs. Clapper

A week after the NSA information was leaked by Snowden, the American Civil Liberties Union (ACLU) and its affiliates filed suit against Director of National Intelligence James Clapper and other government officials, claiming that the NSA’s phone metadata collection program violates their First and Fourth Amendment rights. The plaintiffs sought declaratory and injunctive relief. On December 28th, 2013, the district court dismissed all claims.

Under the Fourth Amendment, the ACLU claimed that the program invades its members’ privacy by accessing phone records maintained by their service provider, which constitutes an unreasonable search and seizure. The ACLU argued that bulk analysis of this information can reveal privileged information about users (e.g., religious and political affiliations).

The court dismissed this argument because: (1) the government can only query the database with legal justification and under rigorous minimization procedures, (2) a query only reveals information three “hops” away from the “seed”, and (3) the government cannot identify callers without using additional techniques.
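The “three hops” limit is easiest to picture as a breadth-first walk of a call graph. The sketch below is purely illustrative — the data, function name, and graph are invented for this post, not the NSA’s actual query logic — but it shows how a hop-limited query expands from a seed number:

```python
from collections import deque

def contact_chain(call_graph, seed, max_hops=3):
    """Return every number within max_hops of the seed, with hop distances.

    Hypothetical sketch of a three-hop "contact chaining" query.
    call_graph maps a phone number to the set of numbers it has
    exchanged calls with.
    """
    seen = {seed: 0}                      # number -> hop distance from seed
    queue = deque([seed])
    while queue:
        number = queue.popleft()
        if seen[number] == max_hops:
            continue                      # never expand past the hop limit
        for contact in call_graph.get(number, ()):
            if contact not in seen:
                seen[contact] = seen[number] + 1
                queue.append(contact)
    return seen

# Toy call graph: A<->B, B<->C, C<->D, D<->E.
graph = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C", "E"}}
reach = contact_chain(graph, "A")
# "E" sits four hops from "A", so a three-hop query never reaches it.
```

Even in this toy graph, the hop limit is the only thing keeping the fifth number out of the result; in a real call graph with hundreds of contacts per number, three hops from a single seed can sweep in an enormous set of records.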

The ACLU also argued that the government could conduct the three-hop analysis of phone records without building its own database. The court dismissed this argument as well, citing that the Supreme Court has “repeatedly refused to declare that only the ‘least intrusive’ search practicable can be reasonable under the Fourth Amendment.” City of Ontario, Cal. v. Quon, 130 S. Ct. 2619, 2632 (2010).

In the motion for a preliminary injunction, the plaintiffs sought to bar the government from collecting bulk data about them, to quarantine their data already held by the government, and to prohibit the government from querying phone data using any identifiers associated with them.
Citing Smith, the court ruled that the data was conveyed voluntarily by the plaintiffs to a third party, Verizon, which creates and maintains those records; the plaintiffs thus forfeited their right to privacy in this information. Furthermore, the court likened phone-data queries to fingerprint or DNA searches by law enforcement agencies, which are not valuable without additional steps to identify individuals (see Maryland v. King, 133 S. Ct. 1958). The court stated that collecting massive amounts of data unprotected by the Fourth Amendment does not transform the collection into a search under the Fourth Amendment.

In relying on Smith, the court disagreed with the ACLU’s reliance on Jones because the Supreme Court did not overrule Smith there. The court viewed Smith as controlling precedent in this case because: (1) even though people’s relationships with their phones have evolved, their relationships with their service providers have not; (2) the case concerns the use of phones only as telephones; and (3) the type of information captured by the program has not changed. The court thus concluded that the NSA program does not violate the Fourth Amendment.

On January 2nd, the ACLU appealed to the Second Circuit Court of Appeals.

Implications: Confusion!

The contrasting opinions in the two cases have created confusion among the public: Klayman was decided in favor of the plaintiffs against the government, while Clapper was decided in favor of the government.

The first decision ruled that a government-run program is “Orwellian” and probably “illegal”. What will the government do now? Can information acquired through this program be used as evidence in court against citizens? Should the government stop the program?

The second decision throws into question the privacy of all user data held by third parties, from phone records to files stored in the cloud. The confusion about user privacy is also of great concern to service providers, who may be treading a fine line when dealing with government requests for access to user data. How should service providers respond to future requests? The Snowden leaks show that NSA programs didn’t stop at phone data; how would the courts rule on those other programs?

Will the Supreme Court Intervene?

Since the Supreme Court generally reviews only 70-100 of the roughly 10,000 appeals requesting review each year, it is worth considering the factors that would make the present issue important enough for the Court to grant certiorari (agree to hear the case). According to Diana Hess and Louis Ganzler (1996/2006/2011), the Supreme Court is likely to grant certiorari in cases where: 1) there is a split between different Circuit Courts; 2) the issue is of great importance in terms of societal impact and legal significance and requires clarification; 3) the issue touches areas of interest to individual justices; or 4) the District Courts committed egregious legal errors.
Since the issue has not yet reached the Circuit Courts, there is no Circuit split; however, if the decisions reached by the two Circuit Courts contradict each other, it is more likely that the Supreme Court will hear the case. Moreover, the revelations of the NSA’s bulk metadata collection program have been elevated to a high level of public concern because of the potential civil rights violations, making Supreme Court review more likely. Furthermore, because the decision in Klayman limits the scope of the Supreme Court’s holding in Smith, it may be considered an egregious error by the District Court.


As the District Court in Klayman illustrated, many other District Courts have applied the same precedents to similar issues and arrived at different results. Given the extensive civil rights implications, the District Courts’ divergent decisions under the same standards, and the grave importance of the issue in terms of societal impact and legal significance, it is very likely the Supreme Court will grant review and decide these substantive issues. The Supreme Court should clarify the legal parameters of the Fourth Amendment to determine whether this bulk collection of metadata is legal. If it is not, the debate over the NSA’s bulk collection is resolved. If it is, the political institutions will be tasked with determining whether these policies are actually worth their potential civil rights costs. This important political debate, much like the debate over the Affordable Care Act, has not yet come to full fruition, and the full weight of the ramifications will not be weighed until the Supreme Court decides the substantive legal issue presented by the NSA’s bulk collection of metadata.



By Hassan Jannah & Christos Christodoulou

NSA Metadata Collection and the 4th Amendment



The advent of the 21st century brought with it various noticeable sociological changes. The most profound was the increased role of technology in our daily lives, especially our heightened dependence on the internet. All this continual progress points toward the internet’s inclusion in our most private environments: our families, houses, cars, and phones. This inclusion, aside from making our lives more comfortable and efficient, has become a peephole for Big Brother to carry out historically unparalleled surveillance.

The controversial Section 215 of the Foreign Intelligence Surveillance Act authorized the Foreign Intelligence Surveillance Court (FISC) to issue orders directing individuals to hand over “tangible things including books, records, papers, documents, and other items” if the FISC found that the government had “reasonable grounds to believe that the tangible objects sought are relevant” to an investigation related to protecting the United States against “international terrorism”. The provision has been the subject of much debate ever since Edward Snowden, a National Security Agency (NSA) contractor, blew the whistle on the NSA’s mass data collection activities.

The NSA ‘telephony metadata’ program was sanctioned via a secret order of the FISC, and requires Verizon to produce on an “ongoing daily basis … all call detail records or ‘telephony metadata’ created by Verizon for communications (i) between the United States and abroad; or (ii) wholly within the United States, including local telephone calls.”
The disclosure of this program spurred a nationwide debate on the modern day implications of the fourth amendment, and will possibly mould a context for what we will come to understand as privacy in the next decade.

Notwithstanding President Obama’s assurance that no content is recorded under the program, it is important to understand why this mass collection of metadata should concern Americans. The intuitive notion that metadata is somehow less sensitive than content stems from misunderstanding and naiveté about metadata. In the current state of technology, it is easier for governments to extract associations and patterns from metadata, whose processing can be automated, than from voice data, whose processing still requires human analysis for accuracy. Analyzing voice data, in fact, takes vastly more time than analyzing billions of rows of transactional data. The metadata the NSA has access to can potentially be used for extrajudicial or political ends that are irrelevant to national security.
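To see why automated metadata analysis is so cheap, consider a minimal sketch — with phone numbers and call records invented for illustration — of the kind of aggregation that needs no human analyst at all:

```python
from collections import Counter

# Each record is (caller, callee, weekday, hour): pure metadata, no content.
# These hypothetical records describe one subscriber's calls.
records = [
    ("555-0100", "555-0199", "Sun", 9),
    ("555-0100", "555-0199", "Sun", 9),
    ("555-0100", "555-0199", "Sun", 9),
    ("555-0100", "555-0142", "Tue", 14),
]

# Who does the subscriber call most often, and when? Two one-line
# aggregations answer both questions; no one ever listens to a call.
top_contact = Counter(callee for _, callee, _, _ in records).most_common(1)
top_slot = Counter((day, hour) for _, _, day, hour in records).most_common(1)
# A recurring Sunday-morning call to the same number is exactly the kind
# of pattern (say, attendance at a religious service) that metadata
# alone can suggest.
```

Scaled from four rows to billions, this is an ordinary database query, which is precisely what makes bulk metadata so much easier to exploit than recorded audio.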

Our analysis below does not offer an objective conclusion on the legality of the NSA program. Instead, it aims to give the reader a sound basis for understanding how such disclosures influence the evolution of modern perceptions of privacy in the context of the Fourth Amendment.

Legal Issues

The first determination to be made for Fourth Amendment coverage is whether a particular government interaction with a citizen constitutes a ‘search’. If it does, the next determination is whether the search is ‘reasonable’.

A government ‘search’ will be found if there is a violation of the “right of the people to be secure in their persons, houses, papers, and effects…”, a reflection of 18th-century common-law trespass.

In Katz v. United States, the Court concluded that “the Fourth Amendment protects people, not places,” and found a violation in the attachment of an eavesdropping device to a public telephone booth. Justice Harlan’s concurrence gave rise to the Katz test, which posits that a violation occurs when government officers violate a person’s “reasonable expectation of privacy.”

In the NSA’s case, it is necessary to determine, in light of past and current jurisprudence, whether the FISC order requiring telephone companies to hand over their customers’ call details to the NSA constitutes a violation of the Fourth Amendment.

Smith v Maryland (1979)

In their opinions on the NSA cases, Judge Richard J. Leon of the U.S. District Court for the District of Columbia and Judge William H. Pauley III of the U.S. District Court for the Southern District of New York reached opposing conclusions on the program and the Fourth Amendment.

Both of them based their opinions, Pauley in particular, on the 1979 case of Smith v Maryland. There, the Supreme Court held that the installation and subsequent use of a pen register (a device used to record the numbers called from a specific landline) does not constitute a search and therefore does not require a warrant. The argument is that since the pen register is installed in the phone company’s offices, not on the subject’s property, there is no violation of a “legitimate expectation of privacy”. Under Smith, by dialing a phone number a person is actually sending that information (the number itself) to the phone company, and by definition that information is no longer private.

In Judge Leon’s opinion, Smith cannot be applied to the context in which the NSA is conducting its metadata collection, even though the NSA appears to be getting the data directly from the phone companies (Verizon in this case):

“Because the Government can use daily metadata collection to engage in ‘repetitive surreptitious surveillance of a citizen’s private goings on,’ the program implicates the Fourth Amendment each time a government official monitors it… [T]he almost-Orwellian technology that enables the Government to store and analyze the phone metadata of every telephone user in the United States is unlike anything that could have been conceived in 1979.”
– Judge Richard J. Leon, of the US District Court for DC

On the other side Judge Pauley, argues that:

“Telephone users … typically know that they must convey numerical information to the telephone company; that the telephone company has facilities for recording this information; and that the telephone company does in fact record this information for a variety of legitimate business purposes.” Thus, Pauley wrote, when a person voluntarily gives information to a third party, “he forfeits his right to privacy in the information.”
– Judge William H. Pauley III, of the US District Court for
the Southern District of New York,

US v Jones

In US v Jones, Justice Scalia, delivering the opinion of the Court, made relevant observations regarding the duration of surveillance activities, notwithstanding their justification: “What of a 2-day monitoring of a suspected purveyor of stolen electronics? Or of a 6-month monitoring of a suspected terrorist? We may have to grapple with these “vexing problems” in some future case where a classic trespassory search is not involved and resort must be had to Katz analysis; but there is no reason for rushing forward to resolve them here.” The relevance to our case is that the FISC allows the NSA to store the records for five years, and it is hardly conceivable that the government will simply delete the data after investing billions of dollars in data centers to store and analyze it.

The other relevant outcome of the case is its repeated emphasis that the Katz test adds to the common law trespassory test, and does not repudiate it, when evaluating Fourth Amendment violations. The question then becomes: is there a reasonable expectation of privacy in the metadata associated with our phone calls (call location, duration, the numbers of both parties, the time and date of the call, and other unique identifiers), notwithstanding Pauley’s analysis of Smith v Maryland?

And if there is a reasonable expectation of privacy, and the NSA’s collection activity constitutes a ‘search’, can the government prove the reasonableness of this search, which includes storing the telephone data of every citizen for a minimum of five years?

In any event, and whichever analysis you deem more sound, it is quite certain that the disclosure of the NSA surveillance programs has opened up questions about privacy that must be answered before we are overtaken by technology yet again, and are left wanting because of a system of law too slow and inefficient to safeguard the interests of the very people it strives to protect from injustices.


The actions taken by the NSA, with the specific purpose of obtaining information that would enable the detection of potential threats within the population, have deep legal and social consequences that will shape new rules for government overreach under the banner of security. However, the line between legality and constitutionality is not clearly drawn in the sand. The NSA is exploring new ground opened by the technological progress of recent years, and the law has clearly not been able to keep up with the frenetic pace at which technology operates. It is important never to lose sight of the fact that, personal opinions aside, laws are still the rules by which our society functions and by which our government is constrained. Whether the NSA metadata collection is unconstitutional or not is still up for debate, and our legal system will have to deal with it. That system is, and will continue to be, playing catch-up with technology, but that doesn’t mean that informed opinions have to be exclusive to the courts.


By Ramit Malhotra and Pablo Arvizu

Not Open to the Public: United States v. Weev




The CFAA, the federal government’s key anti-hacking law, was originally enacted in 1986 to deter hackers from wrongfully obtaining confidential governmental and financial information, or from infecting “federal interest” computers with harmful viruses. It was passed to regulate only those computer crimes that were interstate in nature, particularly those involving large financial institutions and governmental organizations, but the statute has been amended several times to broaden its reach. A case that attracted the attention of many advocacy groups because of the ambiguity of CFAA section 1030 recently had an interesting outcome. Andrew “Weev” Auernheimer, a hacker accused of stealing the personal data of over 100,000 iPad users from AT&T’s website in 2010, is about to go free, his sentence overturned by a federal appeals court.


This is what happened: Auernheimer and his co-defendant, Spitler, found a security loophole on AT&T’s website. At the time, iPad owners could sign up for Internet access through AT&T, and in the sign-up process AT&T collected these users’ email addresses. To make the log-in process easier, AT&T designed the system so that when an iPad owner visited the AT&T website, the browser would automatically go to a specific URL associated with the device’s own ID number; when that URL was visited, the web server would open a pop-up window preloaded with the email address associated with that iPad. Weev and Spitler discovered this system and collected a large number of email addresses associated with particular iPad identification numbers. They reported the flaw to the Gawker website, with the collected data as evidence; security researchers reportedly often use this approach to warn the public about security vulnerabilities. In 2012, Auernheimer was tried and sentenced to 41 months in prison for identity fraud under 18 U.S.C. section 1028(a)(7) and conspiracy to gain unauthorized access to computers under section 1030, while Spitler received probation after pleading guilty to conspiracy to gain unauthorized access to computers and identity theft. Fortunately, Auernheimer had powerful legal support, with a law professor representing him pro bono on appeal, and the federal appeals court reversed the conviction.
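The enumeration technique at the heart of the case can be sketched in a few lines. The sketch below simulates it entirely locally: the URL-to-email lookup is replaced by a dictionary, and all ICC-IDs, addresses, and function names are invented for illustration, not taken from the case record.

```python
# Local simulation of the "account slurper" idea: AT&T's server mapped an
# iPad's ICC-ID (embedded in a URL) to the owner's email address, so walking
# a block of candidate IDs harvested (ID, email) pairs.
# All IDs, emails, and the lookup itself are hypothetical stand-ins.

SIMULATED_SERVER = {
    "89014104211234567890": "alice@example.com",
    "89014104211234567892": "bob@example.com",
    # most candidate IDs return nothing
}

def lookup(icc_id: str):
    """Stand-in for the HTTP GET the real script sent to the AT&T URL."""
    return SIMULATED_SERVER.get(icc_id)

def enumerate_ids(start: int, count: int) -> dict:
    """Try a contiguous block of candidate ICC-IDs and keep the hits."""
    harvested = {}
    for n in range(start, start + count):
        email = lookup(str(n))
        if email is not None:
            harvested[str(n)] = email
    return harvested
```

The point the defense stressed is visible even in this toy: nothing circumvents a password or technical barrier; the "attack" is just visiting predictable, publicly reachable URLs.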

This case raised important questions about the CFAA during the trial and appeal. According to Kerr (2013), it was poised to become an influential precedent on the meaning of “unauthorized access”. In addition, the case revealed complications with jurisdiction in such cases: as in much cybercrime, Auernheimer, Spitler, the AT&T servers, and the victims were all in different locations.

Unauthorized access under section 1030

Like the other cases in this week’s reading, United States v. Auernheimer turns on a CFAA violation and unauthorized access. The government claimed that Spitler’s program tricked and deceived the AT&T computer into giving up information, implicitly rendering the access unauthorized. However, unlike in the examples of Nosal and Swartz, the AT&T server links accessed by Weev and his co-conspirator were not secured, and the company appeared not to have gone to any lengths to prevent unauthorized access. Weev and Spitler simply accessed what was public, bringing information hidden by obscurity into the public eye; they did not misuse credentials or circumvent any barriers to entry. The First Circuit in EF Cultural Travel BV v. Zefer Corp. held that use of a scraper was “authorized” under § 1030 even though the company would have disliked it. Following United States v. Nosal, “exceeds authorized access” does not apply to violating restrictions on how information is used within an access-controlled setting. So although Weev misused the information he came across, in our opinion it is possible to recognize his conduct as “authorized access”: he had permission to access the URLs in the first place.

In the Nosal case, the database from which Nosal’s conspirators stole information was password-protected. Weev, by contrast, simply accessed publicly available URLs that did not even require a password, as mentioned earlier. This raises serious concerns about the ambiguity of the definition of “unauthorized access.” However, a footnote in the ruling suggests that, to be guilty under New Jersey law, Auernheimer or Spitler would have had to breach a code- or password-based barrier to gain access. This may influence the interpretation of the corresponding terms in CFAA section 1030.


In this particular example of CFAA violations, the venue of the illegal activity became an important feature, given that the victims were in New Jersey, the defendants in California and Arkansas, and the AT&T servers in Texas. In particular, this draws attention away from the technology at hand, which is interstate by nature, and focuses scrutiny on where the unauthorized access occurred. Geography is simpler in a case like Swartz’s, with only one person involved and the perpetrator and the victim in the same physical location. This raises the question of how venue should be decided in interstate cases. In his analysis, Orin Kerr notes that prosecution may be pursued in any of the states in which the violation occurred; in Weev’s case, however, New Jersey was relevant only to the victims, and the actual unauthorized access occurred elsewhere. Given the interstate nature of the internet and its transactions, this issue will likely recur in future cases.


Weev’s trial continues to raise questions about how the CFAA applies to non-criminals, such as ordinary citizens and security researchers. Previously, United States v. Lori Drew illustrated how the average citizen may violate standards of unauthorized access or exceeding authorization in day-to-day life, simply by providing false information in an online profile or by checking a family member’s email for them. Although that decision was reached so as to avoid prosecuting the ordinary citizen for failing to adhere to a site’s TOS, Weev’s trial again draws attention to the fact that a citizen may face significant penalties without realizing the severity of their crime. Would Aaron Swartz, who already had personal access to the JSTOR articles he pirated, have repeated the same actions had he known his sentence? This is especially relevant for security researchers or the occasional ethical hacker, who may exceed authorized access while searching for loopholes and weak points. In some instances, unauthorized access may occur as part of reverse engineering. The CFAA currently seems to focus on the nature of the violation and little on the intent.





– Noriko Misra, Kristine Yoshihara & Dheera Tallapragada


Spilling The Beans – Trade Secrets and the Industry

Taipei, December 2013

HTC Employees Arrested For Leaking Trade Secrets

Six employees of HTC Corp. were indicted for revealing trade secrets, falsifying expenses and accepting kickbacks from suppliers. Prominent among them was Thomas Chien, VP of Design for HTC, who leaked information about an unlaunched HTC smartphone UI design to a group of individuals in Beijing with whom he was planning to start smartphone design companies in Taiwan and China, rumored to be in cooperation with partners backed by the Chinese government.

The other five employees identified as accomplices were HTC research and development director Wu Chien-hung, HTC senior manager of design and innovation Huang Kuo-ching, senior manager of design and innovation Huang Hung-yi, manufacturing design department manager Hung Chung-yi, and employee Chen Shih-tsou.

A complaint was filed against Chien and two of the above employees with Taiwan’s Bureau of Investigation. Some of the employees got off with settlements after they showed remorse and were forgiven by HTC. Chien, however, reportedly showed no remorse and resorted to all sorts of arguments to try to get away. He was caught when HTC’s R&D Center was raided and he was found illegally downloading confidential content.

Which local laws come into play here?

Interestingly, the three prime accused, including Chien, were held under Article 13 of Taiwan’s Trade Secrets Act, which had been revised earlier that year. The article bans the theft or unauthorized reproduction, disclosure, and use of key corporate business secrets.

Does the leaked information constitute trade secrets, and how?

Information qualifies as a trade secret if it is part of an initiative toward which the company invested resources and directed research; if the company considers it confidential and important to maintaining its competitive position in the industry; and if the company proactively limits access to the people who need it, including by requiring employees to sign non-disclosure agreements to protect such proprietary information and intellectual property.

Information that consists of known industry standards or best practices, or of knowledge and skills that employees acquired in the routine of their everyday work, which the employer did not specifically expect or invest resources and supervision in, does not qualify as a trade secret, unless a confidential relationship exists between employer and employee in which secrecy of the information is implied. Information that is publicly disclosed also does not qualify.

In this case, the UI design and specs of a yet-to-be-released phone clearly qualify as trade secrets under the criteria outlined above. HTC’s statement reveals that it considered this information proprietary and regarded the leak as a breach of integrity, ethics, and respect for privacy and security.

Implications of the breach for the accused

Under Taiwanese law, the breach is punishable by imprisonment of up to 10 years and a fine between $0.1 million and $1.6 million. Where the monetary gain by the persons held exceeds $1.6 million, the fine is between 2 and 10 times the amount of the gain.

What would the accused have to prove to get off the hook?

They would have to prove any one or more of the following:

1) That the information they revealed did not constitute trade secrets.

2) That they were not bound by any non-disclosure, confidentiality or secrecy agreement.

3) That they did not indulge in misappropriation of any information in the first place.

Implications of the law on technology and society

When judging cases of this sort, the implications of the judgment for society and industry must be weighed responsibly. Laws related to trade secrets should be formulated so that they unfairly favor neither employer nor employee: strong enough to protect companies, yet not so restrictive as to hamper employee mobility or to render employees’ very strengths their weakness. Disambiguating between trade secrets and the skills and dexterity employees acquire on the job is extremely significant and material to the judgment in such cases, and should be treated with care. The Inevitable Disclosure Doctrine, neither fully accepted nor rejected by courts, can be controversial, inadequate, and error-prone as a basis for judgment.


Data Security and the Target Breach


Last December, Target announced a massive data breach involving millions of credit cards, email addresses, mailing addresses, phone numbers, and customer names. The FTC is currently investigating the breach.


This is one of the largest retail security breaches in history, with current estimates indicating up to 110M customers affected, and massive effects on Target. As a result of the breach, Target’s Chief Information Officer has resigned, and their holiday profits declined 46% year over year.

A brief timeline of events is given in the Senate Commerce Committee’s “Kill Chain” report on the Target breach, listed under Sources and Further Reading below.


Data Security Failures and Opportunities for Target to stop the attack

The intruders gained access to Target’s network through Fazio Mechanical Services, an HVAC and refrigeration company. This firm had remote access to Target’s network for specific purposes: electronic billing, contract submission, and project management. As in In the Matter of CardSystems, Target (1) did not use readily available security measures to limit access between computers on its network and between such computers and the Internet; and (2) failed to employ sufficient measures to detect unauthorized access to personal information or to conduct security investigations. This attack vector highlights the importance of ensuring that partner organizations that access a company’s data follow the same security policies and procedures (as recommended by the California Office of Privacy Protection under the security breach notice law). It is also evident that Target did not require its partner organizations to follow such policies and procedures.

One report suggests that default account names were used to gain access to the Target network from Fazio’s network. Target could have set up policies and guidelines for the IT systems within the network to prevent the breach.

Target also did not segregate critical network and data infrastructure. Customers’ personal data, which is individually identifiable, could have been kept isolated from the vendors’ billing access systems. There does not seem to have been any classification of information based on data sensitivity, another of the safeguards suggested by the California Office of Privacy Protection’s Recommended Practices under the Security Breach Notice law.
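The missing segmentation can be thought of as an access-policy matrix between network zones and data classes. The sketch below is a toy illustration; the zone names, data classes, and policy are our own invention, not Target’s actual architecture.

```python
# Toy access-policy check illustrating data classification plus network
# segmentation: a vendor-facing zone should never reach cardholder data.
# Zone names, data classes, and the policy matrix are hypothetical.

ALLOWED = {
    "vendor_billing": {"billing_records"},
    "corporate":      {"billing_records", "hr_records"},
    "pos_network":    {"cardholder_data"},
}

def may_access(zone: str, data_class: str) -> bool:
    """Deny by default; permit only pairs the policy explicitly lists."""
    return data_class in ALLOWED.get(zone, set())
```

Under such a deny-by-default matrix, credentials stolen from an HVAC vendor would stop at the billing systems instead of reaching point-of-sale data.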

Technological Measures Aren’t Enough

The California Office of Privacy Protection’s Recommended Practices under the Security Breach Notice law recommend conducting periodic penetration tests and reviews to ensure privacy and security are preserved. But in this case, Target repeatedly ignored warnings from various systems about the malware. The FTC Statement on Data Security notes that businesses need to protect against well-known, common technology threats and be sure that they can back up any claims about data security. Target succeeded on some fronts yet completely failed on others. On the one hand, six months prior to the attack, Target had installed a malware-detection tool, FireEye, that caught the intrusion and flagged Target’s security operations center. However, Target failed to act on these warnings, allowing the attackers to install malware and extract customer financial and personal information. It is apparent that although Target had taken technological measures for data security, it did not have adequate procedures, policies, or employee training in place to act on potential alerts.
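The procedural gap here (alerts fired but nobody acted) is essentially a missing escalation rule. A minimal sketch, with an invented severity scale and threshold:

```python
# Sketch of the escalation step that was apparently missing: any alert at or
# above a severity threshold must be surfaced to a human responder rather
# than silently dropped. The severity scale and threshold are assumptions.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int  # 1 (informational) .. 5 (critical)

def to_escalate(alerts, threshold: int = 3):
    """Return the alerts that require human follow-up."""
    return [a for a in alerts if a.severity >= threshold]
```

The rule itself is trivial; the lesson from the breach is that tooling like FireEye is useless unless some process like this guarantees a human response.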

Data retention is one area where Target appears to have done relatively well, as the financial information leaked was limited to “only” a few weeks’ worth of credit card usage (estimated 70M individuals as per latest reports). Had financial information been stored for a longer duration, the damage may have been even greater.

Subsequent Legislation and FTC Investigation

Target’s first notification to consumers came when it publicly admitted to the breach on December 19, 2013, only after a security researcher had broken the news to the media. Following this breach, along with others such as the one at Neiman Marcus, Senators Rockefeller, Pryor, Feinstein, and Nelson introduced legislation that creates tighter requirements for security implementations and customer notification, and increases FTC enforcement authority.

The FTC has announced that they are currently investigating the situation, and may ultimately find that Target behaved in an unfair or deceptive manner by not taking appropriate safeguards for their customers’ personal and financial information. The FTC may find Target to have failed to adequately “protect against well-known, common technology threats,” given the apparent lack of procedures for handling data breach alerts.

Sources and Further Reading

Business Week, Missed Alarms and 40 Million Stolen Credit Card Numbers: How Target Blew It

FTC, Prepared Statement on Data Breach on the Rise

Senate Committee on Commerce, Science and Transportation, A “Kill Chain” Analysis of the 2013 Target Data Breach

Diane Feinstein, Senators Introduce Bill to Protect Against Data Breaches

–  By Divya Menghani and Dan Tsai

Liability for Defective Information for Dummies

The modern digital economy is driven by the production and consumption of information.

Every day, individuals and organizations use informational content made available through a variety of “publishers” to perform tasks. Publishers of information make information content available to society and include publishers of books, magazines, mobile applications, television, websites, podcasts, and more. The availability of and access to this information has transformed society.

However, what happens when these publishers communicate defective information that leads to injury? Are these publishers somehow responsible for damages because they made it possible for the defective information to be used by the injured party?

Before we address how to think about liability for defective information, let’s start with an easier analogy by reviewing one form of liability when applied to defective products.

Sellers and Strict Product Liability

Products are created by manufacturers and made available to the public through sellers. Sellers may be distributors, suppliers, or retailers, all of whom contribute to making the product available to the public.

If a product defect causes physical harm to persons or property, both the manufacturer and the sellers can be held “strictly liable” for the damages. “Rather than focus on the behavior of the manufacturer (as in negligence), strict liability claims focus on the product itself. Under strict liability, the manufacturer is liable if the product is defective, even if the manufacturer was not negligent in making that product defective.”[1]

If manufacturers create physical products, then in the information world, authors create information content. Similarly, just as sellers make goods available to the public, publishers make information goods available by fixing the information content in a medium such as a book, magazine, or accessible web page.

Hypothetical Examples of Publishers Making Available Defective Information

Let’s draw parallels from how products can be defective to how information can be considered defective with some hypothetical examples.

Defects by omission: Imagine an online travel guide that provides informational content describing various locations and subjects, with the intent of helping the general public choose tourist travel destinations. One of the guides showcases the Hawaiian Islands. A person using the travel guide decides to visit Cakeha beach and partake in normal beach activities such as surfing. The travel guide did not include information detailing known dangerous ocean conditions at Cakeha beach, and the person is injured while trying to surf.

Defects by Incorrectness: Imagine a DVD-based science encyclopedia that contains articles written by independent users. One of the articles showcases different types of mushrooms and details whether each is poisonous or non-poisonous. However, there is a factual error in one of the mushroom articles. A person relying on that information eats a mushroom portrayed in the encyclopedia as safe to eat, gets sick, and incurs bodily injury.

Defects by Intentional False Portrayal: Imagine an online community with the capability to showcase personal information in online profiles. A male individual creates a fake online profile with details of a former female partner. The profile contains fabricated information along with the woman’s address. Before long, men whom she did not know were peppering her office with emails, phone calls, and personal visits, all in the expectation of sex.

Imagine a World with Strict Liability for Information Content

There is good reason why the courts have not classified defective information within the same liability constructs as physical products.

Imagine a world in which product liability applied to information. Since sellers of products can be held liable for defective products, publishers of defective information content created by authors would also be held liable. Publishers would therefore have to bear the extra burden of verifying authors’ content, or risk being dragged into lawsuits along with the authors. Such a world would most likely reduce the amount of published material available to the public and reduce its societal benefit.

Extend the notion of a publisher to every internet service that can host content created by authors. Strict liability for information content would destroy the very notion of the internet as a vehicle for distributing information content.

Publishers Generally not held Strictly Liable for Defective Information

The courts, in a wide variety of cases, have consistently ruled that publishers of content are generally not strictly liable for defective information.

The three hypothetical examples above actually stem from cases where courts ruled in favor of publishers.

  • Omission: Birmingham v. Fodor’s. Instead of an online travel guide, the travel guide was a physical book. The court ruled that “a publisher of a work of general circulation, that neither authors nor expressly guarantees the contents of its publication, has no duty to warn the reading public of the accuracy of the contents of its publication” [2]
  • Factual Incorrectness:  Winter v. Putnam. Instead of a DVD encyclopedia, the content was contained in a book. The court stated that “we conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher . . . to suggest that such a duty should be imposed.”[3]
  • False Portrayal: Barnes v. Yahoo. The online community is Yahoo’s member directory. The court held that under section 230(c)(1) of the Communications Decency Act, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This provision further reduces internet providers’ liability for defective content.[4]

Liability for defective information: a decision flow for information service providers

“So, are we liable when our content goes wrong and someone gets hurt by it?” a service provider may ask. Here are some basic principles you can apply to determine whether you are likely to face legal responsibility:

Q1: Who is the creator of your content? A service user or an employee from your company?

This is the first question you need to answer. In the early days of Internet development late last century, the content of websites was mostly created by the service providers themselves. They acted as “authors”, creating meaningful information for users. Nowadays, with social platforms booming, more and more content is generated by users, and website service providers act as “publishers”, delivering information to every user connected to their services. When users are hurt by defective information, the service provider’s role as “author” or “publisher” determines whether it bears liability.

If you are not the content creator and act only as the publisher, then you are protected under the reasoning of Cardozo v. True, 342 So. 2d 1053, and generally not liable for the harm, as the court described:

“The ideas hold a privileged position in our society. They are not equivalent to commercial products. Those who are in the business of distributing the ideas of other people perform a unique and essential function. To hold those who perform this essential function liable… restrict the flow of the ideas they distribute”

Some lawsuits belong to this category and have been dismissed by courts:

– Birmingham v. Fodor’s Travel Publications, Inc. [2]

– Beckman v. [6]

Q2: Could the information foreseeably cause harm under expected usage?

If you are the “author” of the defective information because the defective content was generated by your employees’ actions or contributions, then this is the next question to examine. If the answer is “No”, you will not be liable for the defective information.

Take Rosenberg v. Harwood (and Google) as an example: Ms. Rosenberg alleged that the Google Maps service negligently provided her with walking directions that caused her to be seriously injured in a car accident. The court dismissed the claims against Google for two reasons: 1) as an information service provider, Google has no duty for unexpected usages by its users, and 2) there was no foreseeability of harm caused by a third party.

It is not reasonable for courts to force you to verify every potential usage, and every consequence of that usage, by end users; such a requirement would create an unjustifiable burden on your service.

Some similar cases can be found as listed below:

Gibson v. Craigslist [7]: Craigslist sued for liability related to guns posted for sale on site

Barnes v. Yahoo [8]: Yahoo sued for allowing the posting of defamatory information by an ex-boyfriend

Harris v. Google [9]: Google sued for incorrectly listing the personal address of a business owner, thereby preventing customers from finding the business

Q3: As a service provider, do you have the duty to warrant the information under correct usage?

If your answer to the second question is yes, here is the third test. If you present information in a “professional or expert” fashion, whereby the reasonably perceived usage of your defective information leads to directly attributable harm, you are liable for the service you provided. Aetna v. Jeppesen provides a clear example: erroneous scales on navigational charts led to an aircraft accident. The charts were treated as products, and the information they contained had to be warranted by the service provider under correct usage. Information in this category should be treated like a normal product whose quality the provider has a duty to guarantee.
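The three questions above can be condensed into a single decision function. This is a sketch of the post’s reasoning for illustration only, not legal advice; the parameter names and return strings are our own simplifications.

```python
# The Q1-Q3 decision flow from the post, condensed into one function.
# A sketch of the reasoning, not a statement of the law.

def liability_risk(is_author: bool,
                   harm_foreseeable_under_expected_use: bool,
                   presented_as_expert_product: bool) -> str:
    # Q1: pure publishers of others' content are generally protected
    # (Cardozo v. True; CDA section 230 for online intermediaries).
    if not is_author:
        return "likely not liable (publisher of others' content)"
    # Q2: no duty for unforeseeable harm or unexpected usage
    # (Rosenberg v. Harwood).
    if not harm_foreseeable_under_expected_use:
        return "likely not liable (harm not foreseeable)"
    # Q3: information presented as a professional/expert product must be
    # warranted under correct usage (Aetna v. Jeppesen).
    if presented_as_expert_product:
        return "potentially liable (information treated as a product)"
    return "uncertain: fact-specific analysis required"
```

Real cases turn on far more facts than three booleans, of course; the value of the sketch is only in making the order of the questions explicit.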


 By Derek Kuo and Christopher Fan

“The biggest lie on the web”

Is that text big enough for you?



Terms of service agreements are in need of a redesign. In all our literate years (50 between us) we have never read an entire terms of service agreement. They are long, dry, uninteresting and — usually — formatted in ways that are uncomfortable to read. Furthermore, the more controversial, and therefore more interesting, terms can be hidden deep in the text, as in this example of the Gmail terms cited by Raul et al, which one MUST agree to before being permitted to use the product: “On the 193rd line, the terms note that Google may ‘‘reproduce, adapt, modify, translate, publish, publicly perform, publicly display, and distribute any content which [the user] submit[s],’’ with a ‘‘perpetual, irrevocable, worldwide [and] royalty-free’’ license.” We can see how inconspicuously placed terms are one major problem with current TOS design.

Real users don’t read the terms

Raul et al, with a palpable mix of terror and outrage, write of the Sears case that “this proposed settlement would first and foremost set a precedent that material is ineffectively disclosed if it is merely included in a standard length privacy policy.” This may be a radical realignment in regulatory terms, but to the end user it just sounds like common sense. The idea that any lay person actually reads the legalese in website privacy policies or software terms of service is commonly regarded as a joke.

Whether or not you have read an agreement before expressly consenting to it is almost always irrelevant in the eyes of the law. It may even be essentially impossible to read the terms of a software purchase before making the purchase. Take the “shrinkwrap” licenses discussed in the ProCD case, where the court says “notice on the outside, terms on the inside, and a right to return the software for a refund if the terms are unacceptable … may be a means of doing business valuable to buyers and sellers alike.”

The court approvingly cites Allan Farnsworth’s assertion that the standardized contracts that enable such arrangements “are essential to a system of mass production and distribution. Scarce and costly time and skill can be devoted to a class of transactions rather than the details of individual transactions.” The savings of “skilled time” is due to the expectation that nobody is actually going to read these contracts. Rather, consent is based mostly on the end user’s level of trust and assumptions that surely some informed lawyers have made sure the details are reasonably fair and acceptable.

The result is that the terms of a software license can be considered “disclosed” as long as they are included in a document that is never intended to be read. This may be legally sound, but in today’s data landscape it seems increasingly out of touch with reality.

Raul et al point out that the courts may intervene to invalidate the terms of unread, standardized contracts “where there is a disparity of relative bargaining power between the parties,” and the stronger party takes advantage of the weaker party’s ignorance to slip in some onerous provisions. Critics of the notice-and-consent framework online might argue that web users are faced with just such a power imbalance, one so extreme that it renders the notice-and-consent framework insufficient to protect them. In a blog post praising the Sears settlement, the Center for Democracy and Technology concludes that “the FTC has said that consumers are harmed by privacy invasions in and of themselves. Companies have no right to surreptitiously spy on consumers – even if they are willing to pay consumers for the privilege.”

The UX of ToS

Source: ToS;DR

The CDT warns that far more work needs to be done to increase the fairness of privacy disclosures and other ToS provisions. One problem we have recognized is that the “standardization” of these contracts refers only to the verbose and specific legal language carefully calculated to protect the service provider against lawsuits. There are no accompanying design standards for ToS pages to promote clarity, readability, and ease of identifying key provisions. Indeed, the typical user interface design of ToS pages that we are all familiar with — tiny text, cramped reading windows, pagination that emphasizes how massively long the documents are — suggests that ToS pages are designed not to be read.

In the Sears case, as the FTC’s David Vladeck told The New York Times in 2009, regulators came to the same conclusion — and they decided to do something about it. “Disclosures are now written by lawyers,” Vladeck said. “I don’t think they’re written principally to communicate information… in the face of these kinds of quote disclosures, I’m not sure that consent really reflects a volitional, knowing act.”

One interesting solution to this design deficiency is the activist Terms of Service; Didn’t Read project, which offers this mission statement: “‘I have read and agree to the Terms’ is the biggest lie on the web. We aim to fix that.”

ToS;DR Project Lead Hugo Roy says the system is “deeply broken” because the policy documents are excessively long, hard to read, and incomprehensibly complex because of all the third-party relationships they implicate. On top of that, he adds, they are often subject to frequent change without notice.

The solution advanced by ToS;DR puts a premium on at-a-glance digestibility, since the typical user experience on a ToS page is to quickly click through to the next page anyway. ToS;DR boils the license agreements and privacy policies of popular web services down to a handful of salient bullet points and assigns each point and the overall service a fairness score, highlighting positive and negative ratings with colorful “thumbs up” and “thumbs down” icons. These short summaries and colored icons support clear communication of the positive and negative terms that are most relevant to end users.

We have laid out the issues that deter consumers from reading and understanding the legal contracts they enter into when purchasing or installing products or software. The increasing connectedness of the modern world will only increase the complexity of service terms, and new innovations will continue to create novel threats to data privacy. The FTC owes the public a long-overdue overhaul of ToS formatting and design guidelines. It should take a page out of the FDA’s playbook, which recently invested in a redesign of nutrition labels on food products, a change that values an informed consumer.

— by Ian MacFarland and Shaun Giudici