Facebook adds two privacy tools…the latest

Most recently in the heated privacy controversy that surrounds Facebook, InformationWeek reports that two new features have been added to improve privacy. Both tools focus on preventing unknown machines from logging in to your Facebook account.

See here: http://www.informationweek.com/news/storage/data_protection/showArticle.jhtml?articleID=224800027&subSection=Privacy

However, this seems to miss the point entirely, addressing none of the privacy concerns related to users' personal information and interests. In the US, four Senators have asked the FTC to develop guidelines governing the way social networking sites can handle user information, and I seriously doubt this latest move will win Zuckerberg many brownie points. Facebook really needs to start taking these privacy concerns seriously, or it may find its users flocking to Diaspora* in the fall… (eh, maybe not).

UC student notification of possible identity theft

Some of you may have received an email yesterday (4/5/10) from our ASUC President Miguel Daal (subject: Vote April 6 – 8 ASUC Elections, April 24 Boat Cruise & Credit/Debit):

————————————————————–
Credit/Debit Card Fraud:
A wave of credit/debit card fraud has hit UC students in the last two
weeks. Carefully check your recent credit and debit card statements,
and encourage your friends to do the same. If you are a victim, it
is *very important* that you file a police report with the City of
Berkeley PD so that the source of the credit card number leak can
be found. A report takes 5 minutes: call (510) 981-5900 to talk to
an officer.
————————————————————–

Is TJX history repeating itself, albeit on a smaller scale? Banks have seen many UC students report stolen credit card numbers, identity theft, and the like, and are trying to identify the source of the leak. Tracing the leak back to its source is obviously difficult, and I wonder what the culprit's fate will be. I'm also curious whether the information was stored in clear text.

Make sure to check that you are not a victim!

It's also interesting to consider the effect of this email notification. At a basic level, awareness of the problem is no longer limited to the small subgroup of student victims; the entire student population has now been notified. I wonder if any groups will band together and protest or boycott whatever company/store/website/party (to be identified) lost their personal information.

ControlScan and the FTC: We Don’t Need Your Stinkin’ Badges

"If you think technology can solve your security problems, then you don't understand the problems and you don't understand the technology." – Bruce Schneier

One of the reasons security is so tough is that it's next to impossible to make guarantees. People incorrectly think security is a binary property – Linux is secure, Windows isn't, etc. You can show a system isn't secure by finding a vulnerability. But to guarantee that a system is "secure", without further explanation, is meaningless. Unless you have an unlimited budget, 100 percent security is not an attainable or even desirable goal.

Imagine you’re a busy executive. You’ve got a budget, a timeline, and your job to keep. Your boss/shareholders/board is concerned about the security of your systems. Someone is trying to sell you a magical piece of technology that gives you “guaranteed” security. You may be tempted to buy. And you would be wasting your money.

This kind of pitch is extremely common: walk through the RSA or Black Hat conference exhibition halls and you'll find hundreds of companies marketing comprehensive security "solutions" that rely on deceptive claims and glossy packaging.

This is why so many people in the security community rail against "security badge" services for web sites. Companies will sell you the right to display an image that tells your customers your site is "secure". McAfee Secure, TrustGuard, and Shopper Safe are just a few of the competitors. Most of these companies rely on automation and simple security scanning technologies. Many security researchers have found these services to be inadequate, and there are even reports of attackers specifically targeting sites that relied on them.
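As a rough illustration of how shallow such automated checks can be, here is a minimal sketch of the kind of scan a badge-style service might run. The URL, header list, and checks are hypothetical illustrations (not any vendor's actual scanner) and assume Python's requests library.

```python
# Hypothetical sketch of a shallow, automated "badge" scan: it only looks for
# a few HTTP response headers, which says very little about real application
# security (it cannot detect SQL injection, broken authentication, etc.).
import requests

EXPECTED_HEADERS = [
    "Strict-Transport-Security",  # HTTPS enforced for future visits
    "X-Frame-Options",            # basic clickjacking protection
    "Content-Security-Policy",    # restricts where scripts may load from
]

def shallow_scan(url: str) -> dict:
    """Report which of a handful of security headers the site sends."""
    response = requests.get(url, timeout=10)
    return {name: (name in response.headers) for name in EXPECTED_HEADERS}

if __name__ == "__main__":
    for header, present in shallow_scan("https://example.com").items():
        print(f"{header}: {'present' if present else 'missing'}")
```

Passing a check like this proves very little about a site's actual security posture, yet a badge built on it can imply to shoppers that the site is "secure".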

ControlScan, one of these web site scanning services, recently reached a settlement with the FTC agreeing that it misled customers about the frequency and effectiveness of its scans. ControlScan offered many badges with “little or no verification” of actual security practices. And while the badge showed the current date, the scans were often weekly or less frequent.

It will be interesting to follow the ramifications of this agreement to see how it impacts other security vendors. Many security companies rely on vague claims about the effectiveness of their products because it’s hard to sell incremental risk protection. Most buyers want immediate and complete solutions. When do bold claims become deception? Whose responsibility is it to verify the efficacy of a security technology? The FTC has made a stand against an obvious shyster, but will the “snake oil salesmen” of the security industry be shut down?

Information Security and the Law

Alex Smolen, Krishna Janakiraman, Satish Polisetti, Daniel Perry

TJ Maxx, a retail apparel company, failed to secure its customers' private information. In-store networks transmitted sensitive data like credit card numbers and social security numbers to corporate networks in plain text, had inadequate authentication and authorization controls, and had no intrusion detection or prevention mechanisms. As a result, at least two security breaches occurred in 2005 and 2006 that caused millions of dollars of losses in money and time to customers and banks. In response to these breaches, the FTC deemed TJ Maxx's lack of security an "unfair" business practice and reached an agreement with TJ Maxx to prevent future breaches. This agreement instructs TJ Maxx to use encryption before storing data, put dedicated employees in charge of an information security program, use better password systems, and submit to regular third-party audits. TJ Maxx agreed to this order and was not penalized in any other way.
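To make the "use encryption before storing data" remediation concrete, here is a minimal sketch of field-level encryption at rest. The field names, key handling, and the use of Python's third-party cryptography package are assumptions for illustration, not a description of TJ Maxx's actual systems.

```python
# Minimal sketch: encrypt sensitive fields before they are stored or sent over
# the network, so a database dump or packet capture exposes only ciphertext.
# Requires the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

# In practice the key would live in a key-management system, not in code.
key = Fernet.generate_key()
fernet = Fernet(key)

def protect_record(card_number: str, ssn: str) -> dict:
    """Return a record whose sensitive fields are encrypted."""
    return {
        "card_number": fernet.encrypt(card_number.encode()),
        "ssn": fernet.encrypt(ssn.encode()),
    }

record = protect_record("4111111111111111", "123-45-6789")
print(record["card_number"])                   # ciphertext, safe to store
print(fernet.decrypt(record["ssn"]).decode())  # plaintext only with the key
```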

The TJ Maxx security breaches caused significant monetary loss to consumers and businesses. What law or set of laws can be used to hold TJ Maxx and similar companies liable? Why didn’t TJ Maxx have good security systems in the first place? There are several laws that relate to information security, but it is difficult for businesses to understand how to follow them and there is not always adequate definition and enforcement.

An example of a law related to information security is California Senate Bill 1386, which has subsequently been adopted in similar form by most other states. In 2003, California SB 1386 amended civil codes 1798.29, 1798.82, and 1798.84 and introduced new privacy regulations requiring any organization that does business in California and stores unencrypted personal information to notify any California resident whose personal information was acquired by an unauthorized third party. Protected personal information includes social security numbers, financial information, and medical and health insurance information. Notification can be by mail or electronic, but must be given as quickly as possible. While there are no specified monetary penalties for violating these codes, an injured person can file a civil suit against a company. The law does provide a punitive mechanism for companies that fail to secure their information systems, but it focuses on breaches. If a company is unaware of a breach, or believes that no one will be able to win a suit claiming injury against it, then it may choose not to send a notification. Alternatively, a company could decide that sending notifications is less expensive than implementing an effective information security program.

The California Office of Privacy Protection released recommended practices related to these new civil codes in May 2008. The practices detail ways for an organization to manage an information security program centered on restricting internal access to personal information and notifying individuals or groups if there is a security breach of this information. The practices include allowing employees access to personal information on a 'need to know' basis, notifying individuals of a breach within ten days by first-class mail or email, using encryption standards, and reviewing security standards annually. Implementing all of the recommendations will almost undoubtedly be extremely costly for any business, and the fact that they are not legally binding (i.e., no safe harbor) provides little assurance. The recommendations are also fairly vague, and there is no regulatory action enforcing compliance. They are simply "recommendations".
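As a small illustration of the 'need to know' practice, here is a sketch of role-based access to personal information. The roles, fields, and policy table are hypothetical and are not taken from the California recommendations themselves.

```python
# Illustrative 'need to know' check: each role may see only the personal
# information fields its job actually requires.
ALLOWED_FIELDS = {
    "billing_clerk": {"name", "card_number"},
    "pharmacist": {"name", "prescriptions"},
    "marketing": {"name"},  # no financial or medical data
}

def can_view(role: str, field: str) -> bool:
    """Return True only if the role has a documented need to see the field."""
    return field in ALLOWED_FIELDS.get(role, set())

assert can_view("pharmacist", "prescriptions")
assert not can_view("marketing", "card_number")
```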

The FTC consent agreement with TJ Maxx, as well as SB 1386 and the associated recommendations, demonstrate legal approaches to information security enforcement. The goal of these mechanisms is to ensure privacy and prevent future security breaches. However, each of these approaches is problematic, and it is very hard to imagine a legal framework that can ensure information security. In his paper "The State of Information Security Law", Smedinghoff notes that there is no single statute that obligates a company to secure its information. Instead, there is a hodgepodge of federal, state, and international laws that pertain to information security. Furthermore, these laws are segregated between public and private companies as well as by industry – finance, health care, e-commerce, etc. Even if a company wants to secure its information, it does not have a clear set of legal obligations to work towards. Companies may be subject to several different information security laws based on their industry, the data they store, and the states they do business in.

Another major challenge for information security law is the ambiguity of the concept of "information security". As Smedinghoff notes, security is relative, and the terms used in some of the statutes are often hard-to-pin-down phrases like "reasonable (or) appropriate security". This leaves businesses with little guidance as to what is required for legal compliance. A welcome trend is the emergence of laws that focus on the treatment of specific information, like social security numbers or payment card information, as well as on specific standards related to security controls such as data retention or authentication.

From a company’s point of view, it is difficult to determine “what is applicable to us” when it comes to information security law. This could be an economic strain, especially for small companies that can’t afford legal or information security expertise. There is also an increasing trend towards outsourcing information systems to the “cloud”. This represents another unclear legal area – the obligations and liabilities of the company and the cloud provider. We have seen in other areas like privacy and copyright that technology moves extremely rapidly compared to the law. New technology presents new information security threats, which the law may not address.

There is no omnibus US information security legislation for businesses – in fact, it is almost the exact opposite. There are a variety of different laws related to different sectors and types of data. Even for large organizations, implementing an effective information security program to address these laws is challenging and costly. Information security is a big expense for many organizations, and yet overall security controls are still often bad. TJ Maxx is not alone – see Heartland, CardSystems, and Hannaford Brothers for other examples of large-scale breaches. As Smedinghoff states: "A key problem, however, is that the nature of the legal obligation to address security is often poorly understood by those levels in management charged with the responsibility, by the technical experts who must implement it, and by the lawyers who must ensure compliance. Yet, it is perhaps one of the most critical issues companies will face." It seems that increased clarity, or potentially unification, of information security law would improve the state of information security by giving businesses a clear objective, and if this objective included appropriate information security policy and controls, it would ultimately help the consumer.

Is consumers' privacy protected by consumer protection policies?

Alex Kantchelian, Dhawal Mujumdar & Sean Carey

FTC Policies on Deception and Unfairness

These two papers outline the FTC's policies for cracking down on consumer unfairness and deception. The policies were distilled from several court cases that influenced consumer protection; until these statements, the FTC had issued no single statement on consumer unfairness and deception.


The FTC policy statement on Unfairness


The FTC is responding to a letter from Senators Danforth and Ford concerning one aspect of the FTC's jurisdiction over "unfair or deceptive acts or practices." The Senate subcommittee is planning to hold hearings on the concept of "unfairness" as applied to consumer transactions.

The FTC states that the concept of consumer unfairness is not immediately obvious, and this uncertainty is troublesome for some businesses and members of the legal profession. The statement attempts to delineate a concrete framework for future application of the FTC's unfairness authority. Drawing on court rulings, the FTC has boiled unfair acts or practices affecting commerce down to three criteria: consumer injury, violation of established public policy, and unethical or unscrupulous conduct.

Consumer Injury

The Commission is concerned with substantial harms, such as monetary harm and unwarranted health and safety risks. Emotional effects tend not to 'make the cut' as evidence of injury. The injury must not be outweighed by any offsetting consumer or competitive benefits that the sales practice also produces; for example, a producer can justify not informing the consumer if doing so saves the consumer money. However, if sellers adopt practices that unjustifiably hinder free market decisions, that can be considered unfair. This includes overt coercion and exercising undue influence over highly susceptible purchasers.

Violation of public policy

Violation of public policy is used by the FTC as a means of providing additional evidence on the degree of consumer injury caused by specific practices. The S&H court considered it a separate criterion. The FTC thinks it is important to examine outside statutory policies and established judicial principles for assistance in making this determination.

Unethical or unscrupulous conduct

Unethical or unscrupulous conduct was included so the Commission could be certain of reaching all the purposes of the underlying statute that forbids "unfair" acts or practices. The FTC has concluded, though, that this criterion is largely duplicative, because truly unethical or unscrupulous conduct will almost always injure consumers or violate public policy as well.

Summary of FTC policy statement on deception

Section 5 of the FTC Act declares unfair or deceptive acts or practices unlawful. Section 12 specifically prohibits false ads. There is no single definitive statement of the Commission's authority on deceptive acts.

Summary:

The FTC does not have any single, definitive statement of its authority on deceptive acts. However, it has an outline for the basis of a deception case. First, there must be a representation, omission, or practice that is likely to mislead the consumer, such as false oral or written representations, misleading price claims, or sales of hazardous or systematically defective products or services without adequate disclosures. Second, the FTC examines the practice from the perspective of a consumer acting reasonably in the circumstances. Third, the FTC asks whether the representation, omission, or practice is a material one. Most deception involves written or oral misrepresentations or omission of material information, and it generally occurs in conduct associated with a sales transaction. Advertisements are also considered when dealing with a case of deception. The Commission has also found deception where a sales representative misrepresented the purpose of the initial contact with customers.

Part 2, There Must be a Representation, Omission or Practice that is likely to mislead the consumer.

Most deception involves written or oral misrepresentation, or omissions of material information. The Commission looks for both express and implied claims, the latter determined through an examination of the representation itself. In some cases, consumers can be presumed to reach false beliefs about products or services because of omissions. The Commission can sometimes reach these claims on its own, but at other times it may require evidence of consumers' expectations.

Part 3, The act or practice must be considered from the perspective of the reasonable consumer.

Marketing and point-of-sale practices that can mislead consumers, such as bait-and-switch tactics, are also deceptive. When a product is sold, there is an implied representation that the product is fit for the purpose for which it is sold; if it is not, the sale is considered deceptive. Additionally, the FTC gives special consideration to the needs of specific audiences, for example vulnerable groups such as the terminally ill, the elderly, and young children. The FTC takes into consideration how the consumer will interpret claims in advertisements and written material. It will avoid cases involving 'obviously exaggerated or puffing representations' that consumers would not take seriously. The Commission also notes that it sees little incentive to deceive consumers about products that are inexpensive or easy to evaluate, such as consumables (toilet paper, soap, etc.), and it will look at such practices closely before issuing a complaint based on deception. The FTC takes into account the entire advertisement, transaction, or course of dealing and how the consumer is likely to respond, considering the entire "mosaic" in addition to materiality.

Part 4, The representation, omission or practice must be material

The third major element the FTC considers is the materiality of the representation. The FTC considers information "material" if it affects the consumer's choice or conduct; material information can concern purpose, safety, efficacy, or cost. If the Commission cannot presume that a representation or omission is material, it will seek evidence that the omitted information is important to consumers.

Conclusion:

The Commission finds acts or practices deceptive when there is a misrepresentation, omission, or other such practice that could harm consumers. Although the Commission does not generally require extrinsic evidence of materiality, in certain situations such evidence might be necessary.

Sears Holdings Management Corporation Case

Sometimes you wonder whether commissions like the Federal Trade Commission exist in name only. But when you look at the recent case involving Sears Holdings Management Corporation, you realize their importance. The principal mission of the Federal Trade Commission (FTC) is consumer protection and the prevention of anti-competitive business practices, and in this case it stuck precisely to that core mission and once again proved its worth.

Sears Holdings Management Corporation ("respondent" or "SHMC") is a subsidiary of Sears Holdings Corporation. SHMC handles marketing operations for the Sears Roebuck and Kmart retail stores, and operates the sears.com and kmart.com retail internet websites.
From on or about April 2007 through January 2008, SHMC disseminated via the internet a software application for consumers to download and install onto their computers. This application was created, developed, and managed for SHMC by a third party in connection with SHMC's "My SHC Community" market research program. Once installed, the application runs in the background at all times on consumers' computers and transmits tracked information, including nearly all of the internet behavior that occurs on those computers, to servers maintained on behalf of SHMC. The information collected and transmitted included web browsing, secure sessions, online account activity, and use of web-based email and instant messaging services.
If you are angered and aghast at this level of encroachment into consumers' privacy, hold on to your seat; it's just the beginning. SHMC did not mention all the details about the application and what it was going to collect in its "click-wrap" license or its privacy policies. Fifteen out of every hundred visitors to the sears.com and kmart.com websites were presented with a "My SHC Community" pop-up box. This pop-up box described the purpose and benefits of joining "My SHC Community", but it made no mention of the software application ("the application"). Likewise, the general "Privacy Policy" statement accessed via the hyperlink in the pop-up box did not mention the application. The pop-up box also invited consumers to enter their email address to receive a follow-up email from SHMC with more information. Invitation messages were subsequently emailed to those consumers who supplied their email address; these messages described what consumers would receive in exchange for becoming a member of "My SHC Community". Consumers who wished to proceed were asked to click the "Join Today" button at the bottom of the message.
After clicking the "Join Today" button in the email, consumers were directed to a landing page that restated many of the representations about the potential interactions between members and the "community". However, the landing page did not mention the application. There was another "Join Today" button on the landing page, and consumers who clicked it were directed to a registration page. To complete the registration, consumers needed to enter their name, address, age, and email address. Below the fields for entering this information, the registration page presented a "Privacy Statement and User License Agreement" (PSULA) in a scroll box that displayed ten lines of the multi-page document at a time.
A description of the software application to be installed began on approximately the 75th line down in the scroll box, meaning a consumer had to page through roughly seven screens to reach it. The description covered the internet usage information and the various activities the application was going to monitor. Even so, the PSULA remained ambiguous about what the application would actually do. For example, it stated that the application would monitor the collected information to better understand consumers and their households, but it did not explain what SHMC meant by monitoring. Was the monitoring done by automated programs or manually? The PSULA did not name the specific information to be monitored. It also mentioned that the application might examine the header information of consumers' instant and email messages. The PSULA described how the information collected by the application would be transmitted to SHMC's servers, how it might be used, and how it would be maintained, and it stated that SHMC reserved the right to continue to use the information collected. At the end, consumers were asked to accept these terms and conditions, and those who accepted were directed to an installation page with downloading and installation instructions for the application. The installation page gave no further information about the application. Once installed, the application operated and transmitted information substantially as described in the PSULA.
The tracked information included not only the websites consumers visited and the links they clicked, but also the text of secure pages, such as online banking statements and online drug prescription records, and select header fields that could reveal the sender, recipient, subject, and size of web-based email messages.
We believe the level of encroachment into consumers' privacy was not only blatant but also shocking. SHMC failed to disclose adequately that the software application, once installed, would monitor nearly all internet behavior and activity on consumers' computers. This failure to disclose was a deceptive practice as discussed in the FTC's policy statement on deception.

Understanding privacy under FTC and OECD consumer protection policies


Precisely and exhaustively defining the concept of privacy is a challenging problem. For starters, Merriam-Webster defines one's right to privacy as "freedom from unauthorized intrusion". How inclusive is this definition?
As suggested, we often contrast privacy with being spied on – someone collecting and possibly disclosing data about us without our knowledge or consent. The FTC policy on unfairness would a priori seem naturally suited to the task. To be unfair, the privacy breach has to be a practice that injures the consumer. Can we establish injury in a general privacy breach case? Unfortunately, the requirements do not look promising. First, the privacy breach must have a substantial effect, namely lead to monetary or physical harm for the consumer: "more subjective types of harm are excluded" [FTC on unfairness]. There is usually no directly observable monetary or physical harm when a privacy breach occurs, with the exception of a few cases that receive massive media coverage, such as the murder of Rebecca Schaeffer, where a stalker obtained the actress's home address through California DMV records. Second, the net value of the privacy breach has to be considered: possible good outcomes can offset the gravity of the injury. So trading privacy for cash has a good chance of being counted as a net benefit before the FTC (and your department store loyalty card does just that). Third, the injury has to be one the consumer could not reasonably avoid. This is obviously the case with a manufacturer that hides information [Sony BMG rootkit incident], but it need not be the case [Sears Holdings case] for a huge privacy disaster to result.
The FTC statement on unfairness is thus not so well suited to privacy protection. What about the statement on deception? Oddly, it turns out that one can approach the problem from a somewhat idiosyncratic angle: alleging misleading privacy expectations regarding a given product [Sears Holdings case]. What is surprising is that privacy is treated like any other product feature, so that we are never really talking about privacy so much as about misrepresentation and deceptive practices. Moreover, the analysis of the likelihood of deception implicitly relies on the unstated privacy expectations of reasonable consumers. The problem is that even reasonable consumers may not have enough technical knowledge to understand privacy issues in today's highly complex world of software, so the very foundations of reasonable expectations, and the analysis of effects on the targeted audience, are deeply weakened.
Privacy, understood as the right to be left alone, is at most moderately well served by the FTC's consumer protection policies. Unfortunately, in an information-intensive world, a lot of "non-intrusive" data processing also naturally falls within our understanding of privacy. For example, one's ability to inspect one's stored personal data on relevant systems, or one's right to have one's personal information secured against malevolent outsiders, are pretty basic privacy requirements that are not covered by our leading definition.
Interestingly, the OECD has pointed to some of these issues in its Guidelines on the Protection of Privacy and Transborder Flows of Personal Data. In the tussle between the free flow of information, which is economically beneficial, and privacy for the protection of the consumer, the OECD suggests seven principles to be enforced by its members: collection limitation; data quality, restricting collection to data relevant to the purpose for which it is collected; purpose specification before the time of collection; security safeguards to protect collected data; openness, making the purposes and nature of the collected data readily available; individual participation, to avoid the tyranny of a blind administration; and finally accountability.
In France, for instance, the CNIL (the National Commission on Informatics and Liberty) has implemented these recommendations since 1978 (thus ahead of the OECD's guidelines), albeit not without criticism, ranging from the quality of the decisions reached, which are often in favor of governmental action, to its painfully slow processes caused by the overwhelming number of requests and cases submitted to this relatively small administrative body.