Facebook adds two privacy tools…the latest

In the latest development in the heated privacy controversy surrounding Facebook, InformationWeek reports that two new features have been added to improve privacy. Both tools focus on preventing unknown machines from logging on to your Facebook account.

See here: http://www.informationweek.com/news/storage/data_protection/showArticle.jhtml?articleID=224800027&subSection=Privacy

However, this seems to miss the point entirely, addressing none of the privacy concerns related to users’ personal information and interests. In the US, four senators have asked the FTC to develop guidelines governing how social networking sites can handle user information, and I seriously doubt this latest move will earn Zuckerberg many brownie points. Facebook really needs to start taking these privacy concerns seriously, or it may find its users flocking to Diaspora* in the fall… (eh, maybe not).

Revenge of the Nerds Against Facebook

Another article appeared in the NY Times yesterday dealing with Facebook and privacy. Four students at NYU are fed up with Facebook and other centralized networks that flout user privacy. They are building Diaspora, touted as a personally controlled, do-it-all, distributed social network that will be open-sourced. Based on the response (and funding) they’ve gotten, it seems many others are also desperate for social networking they can fully control, without the tradeoff of surrendering personal information to large networks and central warehouses.

The Evolution of Privacy on Facebook

Facebook has done a poor job of helping its users control their privacy. What started as a service where most user activity was private has gradually evolved into one where almost everything users do is public. What makes this worse from a consumer protection perspective is that, in most cases, Facebook has simply changed the default access levels itself, without permission from users.

Here is a great infographic that illustrates the evolution of privacy on Facebook over the years. Privacy controls on Facebook are a mess; even the most technically advanced users can’t always understand what the settings mean. Facebook needs something like this to help users decide how their activity is shared with the world. But it’s unlikely that Facebook will make it any easier for users to control their privacy, given that doing so does not make business sense.

Is consumers’ privacy protected by consumer protection policies?

Alex Kantchelian, Dhawal Mujumdar & Sean Carey

FTC Policies on Deception and Unfairness

These two papers outline the FTC’s policies on cracking down on unfairness toward and deception of consumers. The policies are drawn from several court cases that shaped consumer protection; before these statements, the FTC had issued no single definitive statement on either concept.


The FTC policy statement on Unfairness


The FTC is responding to a letter from Senators Danforth and Ford concerning one aspect of the FTC’s jurisdiction over “unfair or deceptive acts or practices.” The Senate subcommittee is planning to hold hearings on the concept of “unfairness” as applied to consumer transactions.

The FTC states that the concept of consumer unfairness is not immediately obvious, and this uncertainty is troublesome for some businesses and members of the legal profession. The statement attempts to delineate a concrete framework for future application of the FTC’s unfairness authority. Drawing on court rulings, the FTC has distilled unfair acts or practices affecting commerce into three criteria: whether the practice injures consumers, whether it violates established public policy, and whether it is unethical or unscrupulous.

Consumer Injury

The Commission is concerned with substantial harms, such as monetary harm and unwarranted health and safety risks; emotional effects tend not to “make the cut” as evidence of injury. The injury must also not be outweighed by any offsetting consumer or competitive benefits that the sales practice produces; for example, a producer can justify not informing consumers of something if doing so saves them money. However, if sellers adopt practices that unjustifiably hinder free market decisions, those practices can be considered unfair. This includes overt coercion and the exercise of undue influence over highly susceptible purchasers.

Violation of public policy

Violation of public policy is used by the FTC as a means of providing additional evidence on the degree of consumer injury caused by specific practices; the S&H court, by contrast, treated it as a separate consideration. The FTC thinks it is important to examine outside statutory policies and established judicial principles for assistance in determining whether a given practice actually harms consumers.

Unethical or unscrupulous conduct

Unethical or unscrupulous conduct is included to ensure that the agency can reach all the purposes of the underlying statute forbidding “unfair” acts or practices. The FTC considers this criterion largely duplicative, because truly unethical or unscrupulous conduct will almost always injure consumers or violate public policy as well.

Summary of FTC policy statement on deception

Section 5 of the FTC Act declares unfair or deceptive acts or practices unlawful, and Section 12 specifically prohibits false ads. There is, however, no single definitive statement of the Commission’s authority over deceptive acts.

Summary:

The FTC does not have any single, definitive statement of its authority over deceptive acts. It does, however, have an outline of the basis for a deception case. First, there must be a misrepresentation, omission, or practice that is likely to mislead the consumer: false oral or written representations, misleading price claims, sales of hazardous or systematically defective products or services without adequate disclosures, and similar practices. Second, the FTC examines the practice from the perspective of a consumer acting reasonably in the circumstances. Third, the FTC asks whether the representation, omission, or practice is a material one. Most deception involves written or oral misrepresentations or the omission of material information, and it generally occurs in conduct associated with a sales transaction. Advertisements are also considered when dealing with a case of deception, and the Commission has found deception where a sales representative misrepresented the purpose of the initial contact with customers.

Part 2, There must be a representation, omission, or practice that is likely to mislead the consumer.

Most deception involves written or oral misrepresentations, or omissions of material information. The Commission looks for both express and implied claims, the latter determined through an examination of the representation itself. In some cases, consumers can be presumed to reach false beliefs about products or services because of omissions. The Commission can sometimes identify these implied claims on its own, but at other times it may require evidence of consumers’ expectations.

Part 3, The act or practice must be considered from the perspective of the reasonable consumer.

Marketing and point-of-sale practices that can mislead consumers, such as bait-and-switch schemes, are also deceptive. When a product is sold, there is an implied representation that the product is fit for the purpose for which it is sold; if it is not, the sale is considered deceptive. Additionally, the FTC gives special consideration to the needs of specific audiences, for example vulnerable groups such as the terminally ill, the elderly, and young children. The FTC takes into consideration how the consumer will interpret claims in advertisements and written material, and it will avoid cases involving “obviously exaggerated or puffing representations” that consumers would not take seriously. The Commission also notes that there is little incentive to deceive consumers about products that are inexpensive or easy to evaluate, such as consumables (toilet paper, soap, etc.), and it will examine such practices closely before issuing a complaint based on deception. The FTC takes into account the entire advertisement, transaction, or course of dealing and how the consumer is likely to respond; it considers the entire “mosaic” in addition to materiality.

Part 4, The representation, omission, or practice must be material.

The third major element the FTC considers is the materiality of the representation. The FTC treats information as “material” if it affects the consumer’s choice of, or conduct regarding, a product; material information can concern purpose, safety, efficacy, or cost. Where materiality cannot be presumed, the Commission will seek evidence that the representation or omission is important to consumers.

Conclusion:

The Commission will find an act or practice deceptive if there is a misrepresentation, omission, or similar practice that is likely to mislead consumers and cause them harm. Although the Commission does not generally require extrinsic evidence of deception, in certain situations such evidence might be necessary.

Sears Holdings Management Corporation Case

Sometimes you wonder whether commissions like the Federal Trade Commission exist in name only. But when you look at the recent case involving Sears Holdings Management Corporation, you realize their importance. The principal mission of the Federal Trade Commission (FTC) is “consumer protection” and the prevention of “anti-competitive” business practices, and in this case it stuck precisely to that core mission and once again proved its worth.

Sears Holdings Management Corporation (“respondent” or “SHMC”) is a subsidiary of Sears Holdings Corporation. SHMC handles marketing operations for the Sears Roebuck and Kmart retail stores, and operates the sears.com and kmart.com retail internet websites.
From on or about April 2007 through January 2008, SHMC disseminated via the internet a software application for consumers to download and install on their computers. This application was created, developed, and managed for SHMC by a third party in connection with SHMC’s “My SHC Community” market research program. Once installed, the application ran in the background at all times on consumers’ computers and transmitted tracked information, including nearly all of the internet behavior occurring on those computers, to servers maintained on behalf of SHMC. The information collected and transmitted included all web browsing, secure sessions, checking of online accounts, and use of web-based email and instant messaging services.
If you are angered and aghast at this level of encroachment into consumers’ privacy, hold on to your seat; it’s just the beginning. SHMC did not disclose all the details about the application and what it would collect in its “click-wrap” license or its privacy policies. Fifteen out of every hundred visitors to the sears.com and kmart.com websites were presented with a “My SHC Community” pop-up box. This pop-up box described the purpose and benefits of joining “My SHC Community,” but it made no mention of the software application (“the application”). Likewise, the general “Privacy Policy” statement accessible via a hyperlink in the pop-up box did not mention the application. Furthermore, the pop-up message invited consumers to enter their email address to receive a follow-up email from SHMC with more information. Invitation messages were subsequently emailed to those consumers who supplied their email address. These messages described what consumers would receive in exchange for becoming a member of the “My SHC Community,” and consumers who wished to proceed were asked to click the “Join Today” button at the bottom of the message.
After clicking the “Join Today” button in the email, consumers were directed to a landing page that restated many of the representations about the potential interactions between members and the “community.” However, the landing page did not mention anything about the application. It contained one more “Join Today” button, and consumers who clicked it were directed to a registration page. To complete the registration, consumers needed to enter their name, address, age, and email address. Below the fields for entering this information, the registration page presented a “Privacy Statement and User License Agreement” (PSULA) in a scroll box that displayed ten lines of a multi-page document at a time.
A description of the software application that would be installed began on approximately the 75th line down in the scroll box, meaning a consumer had to page through roughly seven screens of text to reach it. The description covered the internet-usage information the application would collect and the various activities it would monitor. Even so, the PSULA remained ambiguous about what the application would actually do. For example, it stated that the application would monitor the collected information to better understand consumers and their households, but it did not explain what SHMC meant by monitoring: would it be done by automated programs or manually? Nor did it identify the specific information that would be monitored. The PSULA also mentioned that the application might examine the header information of consumers’ instant messages and emails, and it described how the information the application collected would be transmitted to SHMC’s servers, how it might be used, and how it would be maintained. Lastly, it stated that SHMC reserved the right to continue using the information collected. At the end, consumers were asked to accept these terms and conditions; those who accepted were directed to an installation page with downloading and installation instructions, which itself gave no further information about the application. Once installed, the application operated and transmitted information substantially as described in the PSULA.
The tracked information included not only the websites consumers visited and the links they clicked, but also the text of secure pages, such as online banking statements and online drug prescription records, and select header fields that could show the sender, recipient, subject, and size of web-based email messages.
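To make concrete why even “select header fields” are sensitive, consider a minimal sketch in Python (the message, addresses, and subject below are fabricated for illustration): extracting nothing but sender, recipient, subject, and size, as the complaint describes, already reveals who is communicating with whom and about what.

    import email.parser

    # Fabricated web-based email message; the point is how much the
    # header fields alone reveal, without ever reading the body.
    raw_message = (
        "From: patient@example.com\n"
        "To: pharmacy@example.org\n"
        "Subject: Refill for prescription #4821\n"
        "\n"
        "(the body never needs to be inspected)\n"
    )

    # Parse only the headers, ignoring the message body entirely.
    headers = email.parser.Parser().parsestr(raw_message, headersonly=True)
    for field in ("From", "To", "Subject"):
        print(f"{field}: {headers[field]}")
    print(f"Size: {len(raw_message)} bytes")  # the application also reported size

Here a prescription number and a pharmacy are visible from the headers alone, which is exactly the kind of leakage at issue.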
We believe this level of encroachment into consumers’ privacy was not only blatant but shocking. SHMC failed to disclose adequately that the application, once installed, would monitor nearly all internet behavior and activity on consumers’ computers. This failure to disclose material facts was nothing short of a deceptive practice, as defined in the FTC’s policy statement on deception.

Understanding privacy under FTC and OECD consumer protection policies


Precisely and exhaustively defining the concept of privacy is a challenging problem. For starters, Merriam-Webster defines one’s right to privacy as the “freedom from unauthorized intrusion.” How inclusive is this definition?
As suggested, we often contrast privacy with being spied on: someone collecting and possibly disclosing data about us without our knowledge or consent. The FTC policy on unfairness would a priori seem naturally suited to the task. To be unfair, the privacy breach has to be a practice that injures the consumer. Can we establish injury in a general privacy breach case? Unfortunately, the requirements do not look very promising. First, the privacy breach must have a substantial effect, namely lead to monetary or physical harm for the consumer: “more subjective types of harm are excluded” [FTC on unfairness]. There is usually no directly observable monetary or physical harm when a privacy breach occurs, with the exception of a few cases that tend to receive massive media coverage, such as the murder of Rebecca Schaeffer, where a stalker obtained the actress’s home address through the California DMV records. Second, the net value of the privacy breach has to be considered: possible good outcomes are weighed against the gravity of the injury, so trading privacy for cash has a good chance of actually playing in your favor before the FTC (and your department-store loyalty card does just that). Third, the privacy breach has to be one the consumer could not reasonably have avoided. This is obviously the case with an information-hiding manufacturer [Sony BMG rootkit incident], but it does not need to be [Sears Holdings case] for the practice to result in a huge privacy disaster.
The FTC statement on unfairness is thus not so well suited to privacy protection purposes. What about the statement on deception? Oddly, it turns out that one can approach the problem from a somewhat idiosyncratic angle: alleging that a product creates misleading privacy expectations [Sears Holdings case]. What is surprising is that privacy is treated like any other product feature, so we are never really talking about privacy as such, but rather about misrepresentation and deceptive practices. Moreover, the analysis of the likelihood of deception implicitly relies on the unstated privacy expectations of reasonable consumers. The problem is that even reasonable consumers may not have enough technical knowledge to understand privacy issues in today’s highly complex world of software, so the very foundations of reasonable expectations, and the analysis of effects on the targeted audience, are deeply weakened.
Privacy, understood as the right to be left alone, is at most moderately well served by the FTC’s consumer protection policies. Unfortunately, in an information-intensive world, a lot of “non-intrusive” data processing also naturally falls within our understanding of privacy. For example, one’s ability to inspect one’s stored personal data on relevant systems, or one’s right to have one’s personal information well secured from malevolent outsiders, are pretty basic privacy requirements that are not covered by our leading definition.
Interestingly, the OECD has pointed to some of those issues in its Guidelines on the Protection of Privacy and Transborder Flows of Personal Data. In the tussle between the free flow of information, which is economically beneficial, and privacy protection for the consumer, the OECD suggests eight principles to be enforced by its members: collection limitation; data quality, restricting collection to data relevant to the purpose for which it is collected; purpose specification before the time of collection; use limitation; security safeguards protecting the collected data; openness about the purposes and nature of the collected data; individual participation, to avoid the tyranny of a blind administration; and finally accountability.
In France, for instance, the CNIL (the National Commission of Computerized Systems and Liberties) has implemented these recommendations since 1978 (thus ahead of the OECD’s guidelines), albeit not without criticism, ranging from the quality of the decisions reached, which are often in favor of governmental actions, to its painfully long processes, a consequence of the overwhelming number of requests and cases submitted to this relatively small administrative organ.

Twitter’s Geolocation Feature

There has been some news over the last few days about Twitter finally turning on its geolocation feature, which lets you see a map overlaying individual tweets, together with place names and the location of each tweet. The feature has been live via the Twitter API since last fall (so it’s not “breaking” news), but this week it was finally switched on for the website itself. Facebook is also expected to turn on geolocation in the near future. The Twitter service is opt-in (what a Google Buzz-like story it would have been otherwise!), and there are a number of really cool and useful things that twitter+location could bring (another way to arrange impromptu meetups with friends! deals from the store on the corner!). Still, some people (here, here, and here are a few) are raising privacy concerns. There are currently no tools for controlling who actually sees your location (uh oh), and scenarios like tweeting while you’re on vacation and getting burglarized, stalkers gaining another tool, or employers going all Big Brother on their employees could happen. Some food for thought as we talk more about technology and privacy issues in lecture.
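As a rough sketch of what is technically at stake: once geotagging is on, any application reading tweets from the API can see a precise coordinate pair attached to each one. The payload below is illustrative only; the field names are loosely modeled on the geotagged JSON the Twitter API returned at the time, and are my assumption rather than a quote from the API documentation.

    import json

    # Illustrative geotagged tweet payload; field names are assumptions
    # loosely modeled on the Twitter API of the era, values are made up.
    raw = ('{"id": 12345, "text": "Heading out for coffee",'
           ' "geo": {"type": "Point", "coordinates": [37.8716, -122.2727]},'
           ' "place": {"full_name": "Berkeley, CA"}}')

    tweet = json.loads(raw)

    # Geotagging is opt-in, so these fields are simply absent on most tweets.
    if tweet.get("geo"):
        lat, lon = tweet["geo"]["coordinates"]
        place = (tweet.get("place") or {}).get("full_name", "unknown")
        print(f"Tweet {tweet['id']} sent from {place} ({lat:.4f}, {lon:.4f})")
    else:
        print(f"Tweet {tweet['id']} has no location attached")

Four decimal places of latitude and longitude pin a tweet down to roughly ten meters, which is why the lack of per-tweet visibility controls matters.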

Do Not Call/No Trespassing

After reading the OPG v. Diebold case, I couldn’t help feeling bad for the poor Diebold employees whose home addresses and cell phone numbers were published in that email archive. If publishing the archive falls under the fair use doctrine, does that mean there is no issue with publishing their personal contact information as well? Or is that information considered factual, something that can be discovered? Is a cell phone number considered private information? What if a reader of the archive uses those home addresses to find and harass the employees? Is any remediation available to those Diebold employees? This just seemed like an interesting intersection of fair use, facts, and privacy to me.

Facts and Facebook

by Sean Marimpietri, Ian McDowell, and Julian Nunez

This week’s readings focused on the distinction between facts and creative works and explored the legal implications of using facts without consent. In this post, we will explore the application of these principles to current developments in the realm of social networking, taking Facebook as our subject.

Control of Content

The internet makes the once onerous process of content creation and distribution significantly easier. Popular blogging platforms like Blogger and WordPress make it easy for users to create content and make it available to anyone in the world, while social networks like Facebook and “microblogging” services like Twitter let individuals express themselves with minimal effort.

Presumably, much of the original work a user produces through a social networking site or blogging platform is copyrighted by the user; the Berne Convention (to which the US subscribes) dictates that copyright is automatic once a work becomes “fixed in some material form,” a condition an electronic medium seems to satisfy. Though the average Facebook user may never reflect on the legal ownership of the things they post on their “Wall,” the law suggests that they do in fact own the copyright to any such posts that would be considered works of their own authorship.

Services like Facebook have Terms of Service (TOS) that require users to grant them a license to reproduce the copyrighted work users post on those sites; the services could not function without some such license. The judgment in ProCD v. Zeidenberg established that the “click to accept” license model is binding:

A buyer accepts goods under sec. 2-606(1)(b) when, after an opportunity to inspect, he fails to make an effective rejection under sec. 2-602(1). ProCD extended an opportunity to reject if a buyer should find the license terms unsatisfactory

As in the ProCD case, Facebook users are presented with terms of service that they must accept before using the product; presumably, this means the Facebook TOS is legally sound. For our discussion, the section of the TOS relating to copyrighted content reads:

For content that is covered by intellectual property rights, like photos and videos (“IP content”), you specifically give us the following permission, subject to your privacy and application settings: you grant us a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (“IP License”). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

There are a couple of interesting points to unpack from this statement:

1) When you post something on Facebook, you grant Facebook permission to use it however it likes, and you allow it to extend that same permission to anyone else, at its discretion.

2) You can (and likely do) effectively lose control over a lot of your content once you share it with others. This is still better than the change to the TOS Facebook proposed in February of last year, which would effectively have had you granting a license in perpetuity.

3) The first clause draws a distinction between “IP content” and other content. In effect, Facebook is asserting that some of the information you post will not be covered by copyright. The exposition in Feist v. Rural gives us some idea of what these items might be, namely facts: according to the case, facts “do not owe their origin to an act of authorship,” they are not original, and thus they are not copyrightable. The Facebook policy page on privacy helps connect the dots:

Certain categories of information such as your name, profile photo, list of friends and pages you are a fan of, gender, geographic region, and networks you belong to are considered publicly available, and therefore do not have privacy settings.

Facebook has explicitly designated some of your content as being outside of your control, and presumably outside of your copyright. The term “publicly available” seems to be Facebook’s analog of what the cases we read labeled “facts.”

Facebook’s Use of Facts

Despite our present wide familiarity with Facebook, for the purpose of this discussion it is useful to take a step back and look afresh at this free, web-based service. Users disclose a few different categories of facts in the course of using Facebook. First, there is the required registration data, which includes name, email, gender, and birth date. Second, users may optionally turn over additional personal information, such as previous schools attended, employers, relationship status, and more.

In addition to these two categories, one can look at the types of facts at play through the language of Facebook’s own policy. As mentioned in the previous section, Facebook treats some of the information a user turns over as not constituting “IP content” and thus not covered by copyright. Going by the definition of facts presented in Feist v. Rural, we would assume that such content does not owe its origin to an act of authorship and would not be considered a creative work.

However, Facebook’s privacy policy indicates that it considers a user’s profile photos to be in the category of information that is “publicly available,” yet the terms of service explicitly call out “photos” as falling under the heading of IP content. The treatment is inconsistent: either Facebook is not giving users the promised control over their intellectual property, or it is arguing that profile pictures are a special class of pictures not subject to copyright. How does it make sense for a user’s profile photos to be in one category of information and an uploaded photo album in another? In what ways are Facebook’s categories of “IP content” and “publicly available” information irreconcilable?

There are several cases in which this discrepancy makes little sense. A carefully composed profile photo of artistic value would seem a natural fit for copyright protection. Similarly, some users upload profile photos with such frequency that the collection of their profile photos as a whole could function as a photographic-documentary account of their lives over the years. In both cases, would it not make more sense for a user’s profile photos to be considered “IP content”?

There are other corners of Facebook where the copyright question comes into play. Facebook affords users the ability to post short text updates answering the question “What’s on your mind?” These “status updates” serve as a prime example of text that could alternately be considered fact or creative work. An update such as “Bob is at the zoo” seems unambiguously factual, while an update such as “This poem’s for my mom / I like her a lot / she’s better than Tom” would seem to meet the criteria for authorship. One can easily imagine hybrid examples that might be subject to copyright as a whole yet contain facts that are not, much in the manner of a telephone book. Facebook’s stance on how it may use such work is not explicit, and it is unclear what rights it claims to the information found in updates.

Privacy Implications

Given that at least some of the information Facebook users turn over is treated as “factual” and distributed widely and without restriction, it is clear that users do not have a high degree of control over all the data they provide. What do these two conditions imply from the perspective of users’ privacy?

People willingly provide information to Facebook for different purposes. They share some facts with the world in order to be easily contacted by their friends (such as name, email, school, and hometown). They post other information to be shared only with their friends (pictures, videos, status updates, relationship status, and so on). Finally, some users provide information in order to be rewarded or publicly recognized (e.g., in Cities I’ve Visited, FarmVille, and other games). They provide this information for a specific purpose and share it with a particular audience. Nevertheless, the information is not protected from other uses, and it can produce unintended results when aggregated or data-mined.

Public information on Facebook and other sites can serve as a vector for privacy invasion, and some Facebook users have already been affected by facts and data shared on social networks. For example, people having affairs or flirting on the web have been detected by their partners. Data from social networks has been used as evidence in divorce cases and job dismissals. Companies and individuals have also developed methods and techniques that use information from social networks such as Facebook to detect patterns of behavior; these methods have been used to infer, for instance, someone’s sexual orientation or cheating behavior.
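To see how little machinery such inference requires, here is a minimal sketch of the core idea, assuming only that friend lists and some friends’ self-disclosed attributes are publicly visible; all names and data below are hypothetical. A simple majority vote over friends’ public attributes is often enough to guess an attribute the user never disclosed (the published studies use proper statistical classifiers, but the intuition is the same).

    from collections import Counter

    # Hypothetical public data: friend lists, plus a sensitive attribute
    # that only *some* people chose to disclose on their own profiles.
    friends = {"alice": ["bob", "carol", "dave"],
               "bob": ["alice", "carol"]}
    disclosed = {"bob": "A", "carol": "A", "dave": "B"}

    def infer(person):
        """Guess an undisclosed attribute by majority vote over the
        disclosed attributes of the person's friends."""
        votes = Counter(disclosed[f] for f in friends[person] if f in disclosed)
        return votes.most_common(1)[0][0] if votes else None

    # Alice never disclosed the attribute, yet her friends' public
    # profiles are enough to make a plausible guess about her.
    print(infer("alice"))  # -> "A"

Note that Alice’s own privacy settings never enter into the computation; the leak comes entirely from data her friends made public.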

What kind of protection exists against these privacy issues? Apparently, not much. According to Facebook policy:

[Facebook] cannot control the actions of other users with whom you share your information. We cannot guarantee that only authorized persons will view your information. We cannot ensure that information you share on Facebook will not become publicly available. We are not responsible for third party circumvention of any privacy settings or security measures on Facebook.

Also, as noted in this post, users do not have many judicial means to prevent these kinds of actions: facts are not subject to copyright, and users are bound by the TOS of a website once they click “I Accept.”

Moreover, even people who do not willingly provide information, or who never sign up for a website and accept its terms, are still subject to privacy invasion. After the Boring case, facts such as addresses and images of a house are considered to be in the public domain, and companies such as Google and Facebook, or individuals, can aggregate this data for their own purposes. After all, even Facebook founder Mark Zuckerberg has suffered an invasion of privacy.

Facebook is just one example of the changes in the scope of information people disclose and the manner in which they create works. At the same time, the methods for aggregating and processing disclosed data are growing more powerful. The intersection of these trends portends serious ramifications for privacy and copyright. While the legal cases we covered suggest how existing law might be extended to address these changes, more specific laws (or new rulings) may be needed to establish healthy legal norms for internet services.