Privacy Rule to Protect the Young

Week of 10/1 On-Call

Authors: Divya Anand, Wendy Xue, Sayantan Mukhopadhyay, Charles Wang, Jiun-Tang Huang

The New York Times article “U.S. Is Tightening Web Privacy Rule to Shield Young” reports that federal regulators are about to make big moves to protect children online. Children’s advocates claim that major corporations, app developers and data miners appear to be collecting information about the online activities of millions of young Internet users without their parents’ awareness. We examine this issue through three questions, drawing on week 6’s IS205 readings on privacy and consumer protection.

Are the companies engaging in deceptive practices?

In considering whether the HappyMeal.com site is being deceptive, there are two main points to consider: (i) ‘reasonableness’ from the consumer’s perspective, and (ii) ‘materiality’, i.e., whether the practice affects the consumer’s decision to use the product or service.

McDonald’s does not ask for parental consent when it asks children to enter email addresses. The greatest concern is that children are providing the email addresses of their friends, which means many addresses are collected without any form of consent from their owners. The lack of parental consent violates COPPA (the Children’s Online Privacy Protection Act of 1998), which requires operators of children’s websites to obtain parental consent before they collect personal information from children under 13. It is reasonable for children to believe that what they enter on a children’s website carries no further consequences; they share their friends’ email addresses on the McDonald’s site because they believe this is simply how they share their online creations with friends. They have no access to privacy statements, nor do they know what will be done with the information they provide. The practice is also material: if parents knew what McDonald’s was doing, they would almost certainly not want their children to use the site, but for lack of information they cannot prevent their children from visiting it. Hence the second condition for a deceptive practice is also fulfilled.

Given the two arguments above, it is clear that McDonald’s is being deceptive. In the FTC’s complaint against Google, we saw the FTC determine that Google’s practice was deceptive because the company publicized Gmail users’ private information on Google Buzz without their consent and did not disclose its privacy policy clearly and conspicuously. Similarly, McDonald’s did not disclose that it requested children’s email addresses and used that information for other purposes, including subscribing kids to a mailing list.

When dealing with children, who do not yet understand the implications of their online actions, the FTC has to step in to ensure that parents’ expectations of their children’s online privacy are not ignored.

Are the companies engaging in unfair practices?

We may also argue that the steps taken by McDonald’s and some other companies to collect personal information about children under thirteen without the consent of their parents constitute an unfair practice. Children are most likely unable to make informed decisions about the products and services they choose.

When HappyMeal.com collects children’s email addresses, the children cannot be fully aware of, let alone understand, the fact that those addresses are used to track their online footprint and could expose them to targeted marketing campaigns. They are also unaware that the information could be accessed publicly, potentially exposing them to pedophiles. For example, a photo uploaded to HappyMeal.com could give away where a child lives, how old they are, or which school they attend. Publicizing this type of information exposes these children and their families to crime near their locations; the potential for physical harm to the children is considerable.

Apart from asking children for their own details, HappyMeal.com asked them to give away information about their peers. This is highly unfair because children may not be able to comprehend the impact of breaching the privacy of their social group. On top of that, once again, no parental consent was requested.

Is consent from children meaningful at all?

When parental permission is not required for older children, it is questionable whether the consent they give over privacy disclosures is meaningful. In the FTC’s complaint against Sears Holdings Management Corp. regarding Sears’ online tracking software, the FTC argued that even with a clear and comprehensive privacy disclosure, like the one Sears had, consumers could still misunderstand the extent of the online activities being tracked by Sears; the FTC therefore deemed Sears’ practice deceptive. Today, many popular children’s websites, such as Disney.go.com, use tracking technologies to analyze children’s browsing behavior. In light of the Sears example, even if we assume the privacy statements of children’s websites are disclosed to children, we must ask how many of them can actually understand the terms and conditions, and how many understand the implications of having their activities tracked. Children are also more prone to give consent easily and carelessly when there are incentives, such as money or store credit for signing up for a service with their email address. It is troubling to think that some companies may exploit this tendency and obtain private information from children for deceptive purposes.

Conclusion

In dealing with companies that breached consumers’ privacy, as in the case where Facebook released users’ private information to third-party applications and the case where Google publicized Gmail users’ information on Google Buzz without consent, the FTC has often ordered the companies to: (1) designate an employee to oversee the implementation of a privacy protection program; (2) conduct periodic risk assessments for privacy breaches; and (3) explicitly and conspicuously ask for users’ consent before using covered information for purposes other than those originally stated. Where children are the users, the FTC should scrutinize companies and ensure that children’s online privacy is protected, likely with even stricter processes and procedures.

FTC and Unethical Privacy Practices

Early in the morning, you sit down at your laptop and open a news website. Between the moment you click on the site and the time the first story appears on your screen, hundreds of companies may already have tracked your action and lined up advertisements and suggestions based on your activity. In this digital age, the whole process takes a fraction of a second.

As users, we move through our Internet experiences unaware of the churning subterranean machines powering our web pages with their cookies and pixel trackers, their tracking code and databases. What companies are doing to gather all this information is nothing short of glorified spyware: “What we called spyware in the past has become a standard business practice on the internet today.” Google finds itself in the middle of the same controversy, and the question arises whether this tracking is unintentional or whether companies are deliberately using technology to intrude on user privacy. Facebook, too, has been accused multiple times of using cookies to track users even after they log out of the service.
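
To make the mechanism concrete, here is a minimal sketch of how a third-party tracking pixel might work on the server side. It is an illustration under our own assumptions (the /pixel.gif endpoint, the "uid" cookie name, Flask as the web framework, and a bare print statement standing in for a real logging pipeline), not the implementation of any actual ad network:

    # Hypothetical tracking-pixel endpoint (illustrative sketch only).
    import uuid
    from flask import Flask, request, make_response

    app = Flask(__name__)

    @app.route("/pixel.gif")
    def pixel():
        # Reuse the visitor's identifier cookie if one exists; otherwise mint one.
        uid = request.cookies.get("uid") or uuid.uuid4().hex

        # The Referer header reveals which page embedded the pixel, so a single
        # log line is enough to tie this visitor to this page view.
        print(uid, request.headers.get("Referer"), request.headers.get("User-Agent"))

        # No image body is really needed; the request itself carries the data.
        resp = make_response("", 204)
        # A long-lived cookie lets every site that embeds this pixel report the
        # same visitor back under the same identifier.
        resp.set_cookie("uid", uid, max_age=60 * 60 * 24 * 365)
        return resp

A publisher includes the pixel with a single image tag pointing at the tracker's domain; because the identifier cookie lives on that third-party domain, every participating site reports the same visitor back to the same place, which is what makes a cross-site profile possible within the fraction of a second described above.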

Never before in the history of mankind has so much private data been gathered about people for the sole purposes of advertising and gathering statistics.

In the past, privacy largely meant security. These days, it has taken on an entirely new meaning. Modern technologies affect privacy in whole new ways by making information collection cheap, searchable, sortable, and aggregatable. The ubiquity of personal information available on the internet via social media sites like Facebook and Twitter has changed the value of such information. To accommodate these changes, there should be mechanisms in place for people to control the authorization chain for information about themselves, in concert with institutional policies. In addition, lawmakers need to define clear legal boundaries around information privacy and security.

We cannot apply moral ethics to information itself; however, institutions like the FTC exist to enforce policies that promote greater cooperation and penalties that deter defection.

The FTC’s regulations were formulated with the importance of user information security in mind, and companies therefore need to follow them strictly before creating or modifying any of their business practices. However, most of the internet industry makes its profits by selling space on its websites to e-marketers and advertisers. Customizing advertisements to users’ browsing preferences and keywords makes users more likely to look at those advertisements, and in order to learn users’ browsing patterns, search engines need to track their cookies and collect their data. Certain companies take advantage of the fact that this tracking is not visible to the user unless the user changes certain browser settings. Under the FTC’s rules on data security, providing sensitive user information to clients is not permissible without the consent of the user. Thus, there is a trade-off between company profits and protecting user data.

However, for companies that do not follow the business practices laid out in the FTC’s policies on deception and data security, profits take precedence over protecting user data, and hence there is a gap between stated procedures and actual practices. Companies thrive on profits, so this is not necessarily any single company’s fault; rather, the nature of the Internet industry compels companies to choose between profits and user data protection.

Although the FTC works hard to “prevent fraudulent, deceptive and unfair business practices in the marketplace and to provide information to help consumers spot, stop, and avoid them”, there have been many data breaches since electronic transactions became the norm of doing business (http://online.wsj.com/article/SB10001424052702303816504577313411294908868.html). Credit card information is a common target for hackers: because it combines digitized personal information with, more importantly, monetary information, it is highly exposed to attack. Data breaches at card payment processors turn millions of consumers into victims of identity theft, and many of these breaches end with personal information being offered for sale (http://bits.blogs.nytimes.com/2011/05/03/card-data-is-stolen-and-sold/).

Identity thieves can open a credit card, get a loan, or even get a job using your credit. There are several ways consumers can protect themselves. By monitoring bank and credit card statements, consumers can spot abnormal activity, and free credit reports are available for keeping track of one’s credit. The “Paying it Safe” section of the FTC’s “A Consumer’s Guide to E-Payments” helps individuals become aware of safe online shopping practices. (http://www.ftc.gov/bcp/edu/pubs/consumer/tech/tec01.shtm) (http://www.ftc.gov/bcp/edu/pubs/consumer/idtheft/idt05.shtm)

Still, the root of the problem is companies that deceive consumers and prey on their lack of technological understanding. In recent years, the FTC has approved settlements with three major social media companies: MySpace, Facebook, and Google. The settlements were prompted by online service providers engaging in privacy practices that deviated from their written privacy policies. With these actions, the FTC conveyed a clear message about the importance of transparency and of consistency between what companies say they do and what they actually do. Neither Google nor Facebook nor any other company is under an obligation to provide detailed information about its privacy guidelines; however, making the effort to provide such information is exactly the sort of forward thinking on transparency that the FTC is encouraging.

Despite the FTC’s clear message that companies need to be responsible and accountable for deviations from their written guidelines, it is questionable how that message will be perceived by companies, especially after Google’s settlement, the most expensive to date. Unfortunately, looking at the current FTC settlements, companies may conclude that their public privacy policies are better off vague and unclear. Moreover, Congress’s reluctance to pass legislation in this arena, the great potential for harm if legislation is written badly, and the rapidly evolving nature of privacy, coupled with changing human behavior and consumer expectations, make it harder for companies to do the right thing while minimizing risk and supporting their business.

At the same time, there are dishonest companies that intentionally cause privacy harms and make money off such abuses. While there are important policy grey areas to resolve, sometimes by bringing cases, going after willful, aggressive abusers who act with malicious intent and cause actual privacy harms would be a much more appropriate use of the FTC’s time and limited resources.

Unintended Consequences of the OECD Privacy Protection Guidelines

The OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data were created in 1980 to prevent economic barriers to international business involving personal information. The proliferation of disparate data privacy policies threatened to hinder the efficiency with which nations interact, as each tries to respect its own domestic laws regarding data privacy. More than 30 years later we are still struggling to implement these guidelines, which have proved to have various unintended consequences.

First, the guidelines’ intention to improve economic exchange can be seen in the recent passage of the Personal Data Protection Bill in Singapore. The law “is expected to strengthen Singapore’s reputation as a business hub, especially for transactions with European countries, which require more stringent data protection measures.” However, it also illustrates how opening up to one market can close access to others: if Singapore’s standards are too restrictive, countries with less restrictive standards may find themselves shut out of doing business with Singapore.

This is indeed occurring with India in its relationship with the EU. India is pushing for data secure status because growth in its outsourcing sector is being hampered. Some countries, like India, do not have the same resources as those in the OECD to implement the required guidelines, or to do so in a reasonable amount of time. So the very framework created to prevent hindrances to economic trade is creating its own barriers, as countries perform a balancing act with nations at different stages of, and with different capacities for, compliance. India is further complicating matters by using its leverage in ongoing Free Trade talks to gain data secure status, blocking economic exchanges unrelated to data privacy.

The Singapore law also reflects the problem of weak self-regulation, as it only proposes forming an enforcement committee; part four of the guidelines merely encourages self-regulation. Adding an explicit requirement for a regulatory framework would increase trust among nations and provide a stronger incentive for domestic companies to protect data. Even a minor act of negligence can have serious consequences in this age of digital information. This was recently shown in the UK, when the Scottish Borders Council was caught violating the Security Safeguards Principle by failing to monitor a third-party company to which it had outsourced its data [Link].

More dire consequences extend beyond the implementation of the guidelines to the guidelines themselves. For instance, the British Bankers’ Association has found itself at odds with the Security Safeguards Principle when trying to comply with the Individual Participation Principle. In doing so it has set a precedent for denying rights afforded by the guidelines: in this case, declining to respond to data subject requests by email, on the justifiable ground that email is not a secure communication medium.

A more obvious example of how the guidelines are set up for abuse is the carte blanche offered to the law when specifying exceptions to the Use Limitation Principle. We have been seeing the effects of this since the passage of the FISA Amendments Act, which is up for renewal this year. The Act, among other things, “allows the government to electronically eavesdrop on Americans’ phone calls and e-mails without a probable-cause warrant.” This exception permits violations of the Openness and Collection Limitation Principles. Although it is understandable for the law to override the guidelines in certain contexts, without any requirement that this power be narrowly tailored, the goals of the guidelines are too easily compromised.

Perhaps it is time to revisit the guidelines with remedying these consequences in mind.

Feral Copyright Bots Strike Again

Authors: Colin MacArthur, David Greis, Scott Martin, Peter Nguyen, Deb Linton

Hot on the heels of Sunday’s Hugo Awards debacle, YouTube’s automated copyright takedown bot blocked Michelle Obama’s Democratic National Convention speech shortly after it aired on September 4, 2012.

According to a WIRED.com post, YouTube users attempting to view the speech instead were served this boilerplate copyright infringement notice:

This video contains content from WMG, SME, Associated Press (AP), UMG, Dow Jones, New York Times Digital, The Harry Fox Agency, Inc. (HFA), Warner Chappell, UMPG Publishing and EMI Music Publishing, one or more of whom have blocked it in your country on copyright grounds. Sorry about that.

In response to the growing number of DMCA takedown requests, sites like YouTube and Ustream have created systems, or “bots,” that automatically identify and block copyrighted material. For the purposes of this blog post, we consider the following hypothetical question: What if Congress passed a law mandating the use of such a system by content platforms like YouTube? If the law were interpreted in the same way as the laws in the cases we covered this week, would it be upheld if challenged?
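
To give the hypothetical some shape, the sketch below shows a deliberately simplified matching system of our own construction. Production systems such as YouTube’s Content ID match perceptual audio and video fingerprints that survive re-encoding; this toy version merely hashes fixed-size chunks of an uploaded file and blocks the upload when too many chunks match an index built from claimed reference works:

    # Toy copyright-matching "bot" (illustrative only; not how Content ID works).
    import hashlib

    CHUNK_SIZE = 64 * 1024  # compare files in 64 KB chunks

    def fingerprints(data):
        """Hash every fixed-length chunk of a file."""
        return {hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
                for i in range(0, len(data), CHUNK_SIZE)}

    def build_reference_index(reference_works):
        """Map each chunk hash to the title of the claimed work it came from."""
        index = {}
        for title, data in reference_works.items():
            for fp in fingerprints(data):
                index[fp] = title
        return index

    def should_block(upload, index, threshold=0.2):
        """Block an upload when too large a share of its chunks match claimed works."""
        fps = fingerprints(upload)
        if not fps:
            return False
        matches = sum(1 for fp in fps if fp in index)
        return matches / len(fps) >= threshold

The threshold is the policy knob: set it low enough and a convention broadcast that shows short clips of nominated works, or a speech preceded by licensed music, gets blocked automatically, which is exactly the kind of erroneous takedown at issue here.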

The Supreme Court divides laws impacting First Amendment freedoms into two categories: content-neutral and content-based. A law is content-neutral if it was not adopted because of “[agreement or] disagreement with the message it conveys.” Ward v. Rock Against Racism, 491 U.S. 781, 790 (1989). Even though a bot created by our hypothetical legislation would necessarily consider the content of videos in order to determine whether they violated copyright, it would not censor based on whether the government agreed or disagreed with that content. It is therefore reasonable to conclude that the proposed legislation would be considered “content-neutral” in the eyes of the Court. To be deemed constitutional, our hypothetical law would then need to satisfy the four criteria for content-neutral legislation outlined in Universal City Studios v. Reimerdes and Ward v. Rock Against Racism. The rest of this blog post discusses how our hypothetical legislation would or would not satisfy these criteria.

Does the legislation further a legitimate government interest? And is this interest unrelated to the suppression of free speech?

Previous cases (e.g., Ward v. Rock Against Racism and Universal City Studios v. Reimerdes) have accepted that copyright protection is a legitimate government interest. The hypothetical use of a government-mandated copyright bot would further the government’s interest in copyright protection. Moreover, it is likely uncontroversial to assert that this legitimate government interest is unrelated to the suppression of free speech.

Would the incidental restriction of First Amendment freedoms resulting from this law be no greater than is essential to further the government’s legitimate interest?

In the context of our hypothetical legislation, ‘incidental restrictions’ could potentially be measured as incidents of erroneous censorship similar to the episode outlined in the Wired article above. To determine whether the restrictions would be “no greater than is essential,” it is therefore reasonable to ask whether alternative means exist that would result in fewer such episodes. Preliminarily, one might think that, given the task of trawling the seemingly bottomless ocean of data on the Internet, such work is only really feasible if done by a computer. As long as incidents of erroneous censorship by bots stay within a reasonable statistical margin of error, the restrictions would arguably satisfy the Court’s rules for content-neutral restrictions.

Would the government’s purpose be achieved less effectively without our hypothetical law?

Any system which does not automatically identify copyright-violating content would reach few violators. In other words, because anything but a bot would censor only a fraction of all copyright-infringing material, an alternative method would be less effective. Thus, this hypothetical law would arguably meet this particular criterion.

First Amendment Rights in Interactive Media: Targeted Advertising and Privacy Concerns

Hulu.com, an online video provider, is currently facing a lawsuit over personalized advertisements based on user data. The case is relevant to First Amendment rights in interactive media and government controls on information, as vague laws enacted for one purpose may have unintended effects later. One could even argue that parts of intellectual property law are themselves unconstitutional insofar as they abridge freedom of speech, especially in today’s new media, where enforcement is performed by inflexible automated agents (a recent example is the live broadcast of the Hugo Awards being shut down by copyright infringement software midway through the ceremony because it showed clips of the TV nominees).

It is important to understand that the laws put in place will affect the future of technology and human activities associated with it. The Hulu lawsuit raises questions about how much users are and should be aware of the inner workings of corporations, and how much (or how little) the government should intervene on behalf of its people. Furthermore, problems arise from laws governing technology with respect to innovation; it is important to ensure that the Internet is as decentralized as possible, allowing content providers freedom of distribution, and allowing users to make educated choices about what information enters their homes.

If we strive towards an open and decentralized Internet, we need to consider what constitutes “open access”, and who is ultimately responsible for upholding it. Furthermore, if the Internet is considered fundamental to the protection and proliferation of “speakers”, does that mean the government has a responsibility to manage the value chain from content creation, to content distribution, to content consumption? We should also consider the opinions of users, and how their choices can be better reflected. Some Hulu users may enjoy the benefits of targeted advertising, while others may prefer to keep their data anonymous or inaccessible.

ACLU v. Mukasey was concerned with the possibility that vague, overbroad laws targeted at a small population (the adult entertainment industry) would have potential applicability across a much broader swath of organizations, and the court sought the solution that was most effective with the fewest restrictions. The lawsuit filed against Hulu shows the opposite effect: a 1988 law intended to protect a specific interest (the privacy of video rental records) is now affecting a modern online content distribution system. A law written for a prior medium is constraining Hulu’s ability to evolve its business model and the services it provides to its clients (both viewers and advertisers).

Specificity in law can help to avoid confusion in such cases. In ACLU v. Mukasey, we saw that one of COPA’s core problems was that it was not “narrowly tailored”. We can certainly find similarly vague language in the Video Privacy Protection Act that allows it to apply to Hulu now:

“(4) the term ‘video tape service provider’ means any person, engaged in the business, in or affecting interstate or foreign commerce, of rental, sale, or delivery of prerecorded video cassette tapes or similar audio visual materials, or any person or other entity to whom a disclosure is made under subparagraph (D) or (E) of subsection (b)(2), but only with respect to the information contained in the disclosure.”

It also raises many questions about the role of the law itself. Hulu’s users technically agree to the “Terms of Service”, including the collection of behavioral data, but when was the last time any of us read all of the text before clicking “Agree” to use an online service? We can’t recall. If users should be aware, how far should the government go to protect them? If the court found that the wide availability of filters was sufficient in the case of the Child Online Protection Act (COPA), would it suffice for Hulu to provide clear ways to opt out?

While opting out may be a less restrictive solution, the Hulu case still raises the question of whether a law that is decades old (almost ancient history by today’s technological standards!) applies to technology that is generations newer. If the answer is yes, it could snowball into similar legal issues for other Internet companies, much as the courts perceived the potential impact of COPA. Timeframes also matter given the pace of technology development and the current multi-year appeal process, as the speed of change can significantly undermine a law’s assumptions. We need look no further than COPA’s 11-year lifespan: it was passed in 1998, went through years of appeals without ever taking effect, and was completely struck down in 2009.

Needless to say, too much regulation could easily have disastrous effects on innovation and product development. If companies and other legal entities are penalized for violating laws written for a past era, the user experience could become severely limited, and the ability to develop new products or start new companies more restricted. Ultimately this affects the quality of interactive media as a whole; the question is whether and when consumers are willing to pay the price for mandated protection.