Location, Location, Location: Trends tracking offline consumer geolocation data

[Video: Philz Coffee marketing with Euclid (intro video)]

Imagine picking up your morning coffee at Philz before class and noticing that the usual 8:45 a.m. rush has not kept you waiting the last few weeks like it did at the beginning of the semester. You also happily notice that Philz has started offering a new berry protein kale snack, which is perfect for your post-yoga pick-me-up. It’s like Philz can read your mind, and you feel slightly guilty for Starbucks-cheating last week when they offer you a half-off skinny latte for being such a loyal customer. With such efficient crowd control, specialized merchandise, and seemingly random loyalty generators, why would anyone go anywhere else?

Though that story is entirely fiction, the fact that Philz likely knows how often you visit, how long you wait in line, and which other coffee shops or public locations you’ve been to in the last several months is not. Feel relieved that you’ve never purchased from Philz? Well, if you’ve walked by a store, they know that too.

Philz Coffee is one of many commercial businesses purchasing plug-and-play sensors from data analytics companies to track customers’ wifi-enabled smartphones and collect customer data for marketing and business analytics. This post will focus primarily on Euclid Analytics, though many others offer similar platforms, such as Turnstyle Solutions and iBeacon (see previous blog post). It will examine the privacy implications specific to this use of geolocation data and explore the need for privacy frameworks in this type of consumer data collection.

How does Euclid’s technology work, and what data is collected?

Euclid Analytics offers a plug-and-play sensor, data processing, and analytics platform that allows businesses to purchase and install a small sensor and instantly begin collecting data from customers who are carrying a wifi-enabled mobile device. If the wi-fi capability on a device is turned on, even if it is not connected to a hotspot, the Euclid sensors collect the device’s location, manufacturer code[1], and MAC address; the MAC address is scrambled using a one-way hashing algorithm and then transported over a Secure Sockets Layer (SSL) connection to be stored at Amazon Web Services (AWS).[2] The one-way hash generates the same scrambled number for each MAC address, so repeat visits and cross-store triangulation can be tied to the same device, even if it is not formally linked to an individual identity. The sensor is able to track smartphones within 24,000 square feet[3] and can pinpoint a customer’s location within a 10-foot radius.[4]
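As a rough illustration of how a one-way hash turns a MAC address into a stable pseudonymous identifier, consider the minimal Python sketch below. It is not Euclid’s actual algorithm; the salt, hash function, and MAC values are assumptions made for the example.

```python
import hashlib

# Assumed deployment-wide salt; a real system would keep this value secret.
SALT = "example-deployment-salt"

def pseudonymize_mac(mac_address: str) -> str:
    """Return a stable, non-reversible identifier for a MAC address.

    The same MAC always yields the same digest, so repeat visits can be
    linked without storing the raw hardware address.
    """
    normalized = mac_address.strip().lower()
    return hashlib.sha256((SALT + normalized).encode("utf-8")).hexdigest()

# Two sightings of the same phone, possibly at different stores on different
# days, produce the same pseudonymous ID and can therefore be linked.
monday_visit = pseudonymize_mac("AC:DE:48:00:11:22")
friday_visit = pseudonymize_mac("ac:de:48:00:11:22")
assert monday_visit == friday_visit
```

Because the mapping is deterministic, anyone who knows the salt can test candidate MAC addresses against a stored digest, which is one reason “hashed” should not be read as “anonymous.”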

The range and precision of Euclid devices may pick up consumers who are never patrons of a Euclid-powered store, and perhaps even pick up signals from nearby patrons of other stores. As Euclid’s business base expands, the company is able to match your location not only to multiple Philz Coffee locations, but to any other store that utilizes Euclid technology. This technology is useful to determine not only where a customer has been, but how much time they spent in line, in the bathroom, or browsing a certain section.

Why could this information be sensitive?

Recently in London, a marketing firm announced the deployment of trash cans that track the unique hardware identifier of every wi-fi enabled smartphone that passes by.[5] Location data can reveal very private information and put users at physical risk. Identified by MAC address, a user’s location history can be reconstructed precisely. While the Mobile Location Analytics (MLA) companies try to persuade the public that the transmitted data are aggregated and cannot be tied to an individual or device, the data themselves reveal certain attributes of the customer. For example, a device that enters the women’s restroom most likely belongs to a woman. Moreover, the combination of a MAC address and any unencrypted traffic that leaks out of a device can form a powerful database for nefarious purposes.

Privacy policies? Notification? Opting out?

Last October, to address public concerns about privacy, Euclid announced its adoption of the MLA Code of Conduct[6], which was cooperatively drafted by seven location analytics companies, government officials, and public policy advocates.[7] As a self-regulatory and enforceable standard, the MLA Code requires that participating companies[8]:

– Provide a detailed privacy notice on their website describing the information they collect.

– Promptly de-identify or de-personalize the MAC addresses they collect.

– Ensure that MLA data is not used for adverse purposes (like employment or healthcare treatment eligibility, for instance).

– Retain MLA data for a limited time only.

– Provide customers with the opportunity to opt out of mobile location tracking.

To analyze the MLA Code’s capacity to support privacy, we refer to the privacy framework in [9] and find that, while the MLA Code is a good start, it has certain design flaws that could be improved.

First, the collection of location information is not clearly restricted to appropriate contexts. Although the MLA Code requires that the data not be used for adverse purposes, the scope of adverse purposes is narrowly defined as “employment eligibility, promotion, or retention; credit eligibility; health care treatment eligibility; and insurance eligibility, pricing, or terms.” This scope is too narrow to cover the large potential for misuse, and it implies that as long as MLA data are used for business analytics, retailers are free to access and use the data in any form. While Euclid provides Euclid Express, a free service that allows any individual to easily install a sensor and access the data, Euclid does not specify how it ensures that such usage qualifies as an appropriate context. We think the definition of context should be supplemented with more specific variables; for example, certain physical places like bathrooms, hospitals, and hotels may be inappropriate contexts in which to collect customers’ data, even for business analytics, as the sketch below illustrates.
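One way to operationalize such a context restriction would be to drop observations taken in designated sensitive zones before they ever reach storage or analysis. The record format and zone names below are invented for illustration; this is a sketch of the idea, not part of the MLA Code or any vendor’s product.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    hashed_mac: str   # pseudonymous device identifier
    zone: str         # e.g. "entrance", "register", "restroom"
    timestamp: float

# Zones this hypothetical deployment treats as inappropriate contexts for collection.
SENSITIVE_ZONES = {"restroom", "pharmacy_counter", "fitting_room"}

def filter_sensitive(observations):
    """Discard observations made in sensitive zones before they are stored or analyzed."""
    return [obs for obs in observations if obs.zone not in SENSITIVE_ZONES]
```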

Second, user control and consent are weak under the MLA Code. Although the Code requires MLA companies and their clients to inform customers about the collection and use of MLA data, it makes an exception for data that are not unique to an individual or device. The Code also operationalizes consent as an opt-out, which in our inspection is a relatively difficult step for a customer to take. Without active notice from retailers, we assume most customers are not even aware of MLA technology, and under that circumstance, how can we expect them to take the additional effort to submit an opt-out request? We went through Euclid’s opt-out process[10] and found that, after submitting a request, it takes a surprisingly long seven days to delete the applicant’s MAC address from the database.
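Mechanically, honoring an opt-out presumably means hashing the submitted MAC address with the same scheme used at collection time and purging the matching records. The sketch below assumes that design and a simple in-memory record store; it is a hypothetical illustration, not Euclid’s implementation, and it says nothing about why the real process takes seven days.

```python
import hashlib

# Must match the salt used at collection time (assumed value for this sketch).
SALT = "example-deployment-salt"

def process_opt_out(mac_address: str, records: dict) -> int:
    """Delete all stored visit records for an opted-out device.

    `records` maps a hashed-MAC identifier to a list of visit entries;
    the function returns the number of entries removed.
    """
    normalized = mac_address.strip().lower()
    device_id = hashlib.sha256((SALT + normalized).encode("utf-8")).hexdigest()
    return len(records.pop(device_id, []))
```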

Finally, secondary use of the data is permitted by the MLA Code. The fifth principle allows MLA companies to provide MLA data to unaffiliated third parties as long as the third party’s use of the data is consistent with the principles of the Code. Because such secondary use does not require the user’s consent, it can leave privacy in a vulnerable state.

Privacy frameworks and businesses

Identification of gaps in the current privacy framework is a topic of research in many organizations, especially the government. According to a study by the US Government Accountability Office in September 2013[11], there is no overarching federal privacy law governing the collection and sale of personal information, including by information resellers like Euclid Analytics. Instead there are a number of laws tailored to specific situations and purposes. One example cited is the Fair Credit Reporting Act, which limits the use of personal information collected or used to help determine eligibility for such things as credit or employment, but which does not apply to marketing. Others apply to healthcare providers, financial institutions, or the online collection of information about children.

Although private sector companies argue that the current statutory framework for privacy should suffice, the study found gaps did exist and the current framework did not incorporate the Fair Information Practice Principles.

A majority of businesses take the view that an overarching privacy framework would inhibit innovation and efficiency while reducing consumer benefits such as relevant advertising and beneficial services. The private sector prefers self-regulation over additional legislation. In their view, additional regulation would be especially hard on small businesses and start-up companies, as it would raise compliance costs and thus hinder innovation and the economy in general.

We believe a comprehensive privacy framework for businesses that is ‘sector agnostic’ would be welcome. Much of the self-regulation of privately held businesses is arbitrary in nature, and a good many of them fail to provide adequate privacy protections. Given the plug-and-play nature of many of these MLA platforms, some businesses may not realize there are privacy implications involved in using this type of analytics. It is easy to put privacy best practices on the back burner when faced with business challenges. The argument about decreased commerce due to increased regulation is also suspect. While it is true that targeted marketing has increased conversion rates of ‘potential buyers’ to ‘buyers’, privacy groups have argued that increased privacy protection has actually increased consumer participation. Encryption technologies that consumers know about promote the confidence to engage in transactions using those technologies.

Having said that, a careful study of the impact of such regulation on businesses, especially small businesses, should be undertaken so as not to over-regulate and limit the information available to them. A carefully prepared list of acceptable and restricted types of personal information that can be collected should be used, and regulation should apply only to restricted personal information.

Recommendations for future action

Any future privacy framework has to be compliant with the Fair Information Practice Principles.

-A person should be provided sufficient notice about information practices before data are collected. This would prevent companies such as Euclid from gathering data from a passerby who is completely unaware of the sensors and their purpose.

-Users should have easy access to their personal information. A mechanism has to be provided to monitor how the information is being used and to contest any data they think is erroneous or undesirable.

Due to the relatively large number of companies engaged in retail analytics or information reselling, it is highly impractical for a person to track information across all these databases and exercise his or her rights. Therefore, a single system that is tied to an individual and houses the information collected about that individual would be ideal. Any person who wishes to track his or her personal information could access the system, see what data have been collected, be it from a visit to Philz or the gym, and the uses to which they are being put. Although it might require significant initial investment, such a system would address the privacy concerns and provide a complete picture of a person’s information footprint. If businesses would like to avoid governmental regulation, the onus is on them to implement such a sector-wide system. If businesses are unwilling due to financial or administrative overhead, the government should step in. A very rough sketch of what such a lookup might look like follows.
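In the sketch below, every name, field, and record shape is hypothetical; no sector-wide registry of this sort currently exists, and the design is only meant to make the proposal concrete.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FootprintEntry:
    business: str        # e.g. "Philz Coffee", "a gym"
    data_type: str       # e.g. "visit history", "dwell time"
    purpose: str         # the stated use of the data
    retention_days: int  # how long the business keeps it

def personal_footprint(person_id: str,
                       registries: List[Dict[str, List[FootprintEntry]]]) -> List[FootprintEntry]:
    """Aggregate, across participating businesses, what has been collected about one person."""
    entries: List[FootprintEntry] = []
    for registry in registries:
        entries.extend(registry.get(person_id, []))
    return entries
```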

By Sophia Lay, Elaine Sedenberg, and Rahul Verma


[1] Sadowski, Jathan. 2013. “In-store Tracking Companies Try to Self-regulate Privacy.” Slate, July 23. http://www.slate.com/blogs/future_tense/2013/07/23/privacy_self_regulation_and_consumer_tracking_euclid_and_the_future_of_privacy.html.

[2] “Easy to Implement and Scale: Euclid Analytics Is the Easiest Way to Measure Visits, Engagement, and Loyalty to Your Store.” 2013. Euclid. http://euclidanalytics.com/product/technology/.

[3] Ibid

[4] Clifford, Stephanie, and Quentin Hardy. 2013. “Attention, Shoppers: Store Is Tracking Your Cell.” New York Times, July 14. http://www.nytimes.com/2013/07/15/business/attention-shopper-stores-are-tracking-your-cell.html?pagewanted=all&_r=0.

[5] Goodin, Dan. 2013. “No, this isn’t a scene from Minority Report. This trash can is stalking you.” Ars Technica. http://arstechnica.com/security/2013/08/no-this-isnt-a-scene-from-minority-report-this-trash-can-is-stalking-you/

[6] Future of Privacy Forum, 2013, “Mobile Location Analytics Code of Conduct”, http://www.futureofprivacy.org/wp-content/uploads/10.22.13-FINAL-MLA-Code.pdf

[7] “Senator Schumer and tech companies announce important agreement to protect consumer privacy” Euclid, 2013, http://blog.euclidelements.com/2013/10/senator-schumer-and-tech-companies.html

[8] “What is the MLA Code of Conduct?” 2013. Future of Privacy Forum. http://smartstoreprivacy.org/mobile-location-analytics-opt-out/about-mobile-location-analytics-technology/#whatis

[9] Nick Doty, Deirdre K. Mulligan and Eric Wilde. (2010). Privacy Issues of the W3C Geolocation API. UC Berkeley: School of Information. Report 2010-038.

[10] Opt-out, 2014, Euclid, https://signup.euclidelements.com/optout

[11] Information Resellers: Consumer Privacy Framework Needs to Reflect Changes in Technology and the Marketplace. GAO-13-663. Published September 25, 2013; publicly released November 15, 2013.

Minors & Their Social Media Blunders: Why Forgiveness is the Best Policy

While Privacy Rights for California Minors in the Digital World aims to empower teenagers with the knowledge and tools to control their online personas, the legislation is limited in that it provides no additional mechanisms for control beyond the delete buttons that the most popular social media pages already feature. The impossibility of legislating the comprehensive removal of online content posted by or about minors puts the onus on the institutions that deal with minors to take measures toward forgetfulness, so that individuals are not punished for youthful indiscretions.

Many recent examples of minors posting questionable content on social media sites have been well publicized. In some cases, the consequences are immediate and severe. Last year, a Texas teenager’s admittedly sarcastic joke about “shooting up a kindergarten,” made on Facebook during an argument over a computer game, left him facing ten years of jail time on federal terrorism charges. In a less extreme example, a teenager’s live-tweeting of unflattering comments about her fellow attendees at a college admissions session led the university to blacklist her application. Even high schools are getting into the business of surveilling the online lives of teenagers, with a growing number of school districts paying online firms to track student activity and flag content indicative of cyber-bullying, drug use, or other undesirable behavior. In other cases, the consequences come long after content is posted, and the individual may in fact never find out that it was their childhood social media persona that cost them a job or an educational opportunity. In a Kaplan phone survey, a third of college admissions officers said that they visit applicants’ social media pages to learn more about them, and nearly all of them had discovered information that negatively impacted an applicant’s chance of admission. And we can only imagine the election campaign dirt-digging that will occur when the first generation to grow up with social media starts running for office.

A number of complex issues are at play here.

First, do minors between the ages of 13 and 18 understand the ramifications of posting something online? We can reasonably assume that most of them haven’t imagined the worst-case scenario that might follow posting embarrassing or sensitive content. This is not to say they are not managing their online reputations at all. A recent Pew Internet study makes it clear that many teens have deleted content that they’ve posted online, and nearly 20% of those surveyed claimed to regret having posted something online. However, once content is online, there is nothing stopping someone who comes across it from copying it, effectively removing it from the control of the original poster.

The Privacy Rights for California Minors in the Digital World legislation outlines the rights of minors to delete anything they themselves have posted, but, because of the nature of digital content, it has limitations with respect to others’ postings that may still affect a user’s online reputation.

Internet users have limited control over the content they post, and almost no control over content posted about them by others. As Peter Fleischer explains in Foggy Thinking About the Right to Oblivion, the cry of “privacy” is often in direct conflict with the U.S. concept of freedom of speech. It would be dangerous to go down the road of allowing individuals to request that information be removed just because it has something to do with them. The notion of creating a mechanism that allows content to auto-expire after a given amount of time also doesn’t solve the ownership problem: even if auto-expiration were technically possible, content could still simply be copied by another user and the setting for auto-expiration removed.

The technical and financial obstacles to making certain content harder to find in search engines, or to suing anyone who owns or publishes content about you, necessitate other solutions to this thorny problem. Minors are particularly vulnerable: being at the prime age for identity experimentation, they’re likely to misapprehend the potential of a single post to change their lives a few years down the road, when they’re trying to get into college or apply for their first jobs. And there is a subset of teens that is even more vulnerable, according to Nathan Jurgenson writing for the Snapchat blog:

“It is deeply important to recognize the harm that permanent media can bring—and that this harm is not evenly distributed. Those with non-normative identities or who are otherwise socially vulnerable have much more at stake being more likely to encounter the potential damages past data can cause by way of shaming and stigma. When social media companies make privacy mistakes it is often folks who are not straight, white, and male who pay the biggest price.”

While it’s conceivable that teens could afford online reputation management services at $20 for a year’s subscription, it remains to be seen whether these services are effective. Teens are probably not very likely to successfully get sensitive content about them hidden in search engine results.

We should expect institutions that have dealings with minors to use the utmost restraint in gaining access to data about them, especially if it’s not immediately relevant to the relationship minors have with those institutions.

Also unsettling is what “delete” actually means. Even if content appears to be removed from the public eye, it is likely stored on a server somewhere, perhaps still accessible to some employees of the social media platform. Legislation in Europe has mandated that it be possible to request all the data Facebook has on a particular individual, which has revealed a staggering quantity of information stored about users; for some, the process yields a PDF document of more than 800 pages.

To complicate things further, it’s not unthinkable that even what a minor might maturely and thoughtfully decide not to post might somehow come back to haunt her. Facebook has admitted to storing data on what users type out but then opt to delete, known to Facebook employees as self-censoring. It’s challenging to come up with useful policies and practices if what’s actually being kept flouts all reasonable expectations about behavior — “if I never post it, it doesn’t exist, right?” — and isn’t disclosed in Facebook’s privacy policy.

These issues show that, well-intended legislation aside, it is impossible to provide social forgiveness for minors via solely technical channels. Data Prevention and the Panoptic Society: The Social Benefits of Forgetfulness presents compelling arguments for why forgetfulness is a society-wide, rather than just an individual, benefit. There is a strong parallel between the social benefits of forgetfulness for juvenile crimes, discussed in that paper, and forgetfulness for juvenile social media mistakes. Without a doubt, minors are going to do things that seem stupid and that, reflecting on them years later, they will regret. Companies and colleges that indiscriminately scrutinize the social media profiles of prospective students or employees will find embarrassing material on nearly anyone with an active social media life, and removing those candidates from consideration will mean missing out on people with a lot of potential. Rather than performing this scrutiny within a policy vacuum, organizations should have guidelines, perhaps influenced by national recommendations, on what types of content, and from what time range, are relevant to their decision-making, with an eye toward taking a compassionate and reasonable perspective on content posted by minors.

By Tiffany Barkley and Robyn Perry

The FTC vs. Unfair and Deceptive Practices in the Internet Era

On January 15, 2014, Apple settled a complaint with the FTC and agreed to return at least $32.5 million to consumers for in-app purchases that children made without a parent’s consent [1]. The in-app purchase is one of the disruptive innovations Apple introduced in its search for new revenue streams; the feature allows users to make purchases related to content within an application. For instance, the popular application ‘Dragon Story’ allows users to buy ‘Gold’ within the game using real money. In order to initiate an in-app purchase, users are required to enter their password to authenticate the transaction.

The issue before the FTC involved applications targeted at children that enabled in-app purchases without requiring the password to be re-entered each time (Apple has a practice of caching the password for 15 minutes). The result is that, after the password was entered once, children were effectively able to continue using real money for in-app purchases without parental consent. Furthermore, applications identified as suitable for children had in-app purchases designed in such a way that it would be difficult for a child to assess whether they were spending real or virtual money. The FTC argued that this practice was a ‘material’ misrepresentation causing injury, that the injury was substantial and not outweighed by any countervailing benefits to consumers or competition, and that it was an injury consumers themselves could not reasonably have avoided, hence unfair [2]. In one instance, a child spent over $2,600 on ‘Tap Pet Hotel’, with no way for a parent to have known those charges could occur [1].
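To make the mechanism concrete, the sketch below shows how a 15-minute authentication grace window can let follow-on purchases go through without a new password prompt. It is a simplified illustration of the behavior described in the complaint, not Apple’s actual implementation; the class and method names are invented.

```python
import time

GRACE_PERIOD_SECONDS = 15 * 60  # the 15-minute caching window described in the complaint

class PurchaseSession:
    """Toy model of a store session that caches a successful password entry."""

    def __init__(self):
        self._last_auth = None

    def authenticate(self, password_ok: bool) -> None:
        """Record the time of a successful password entry."""
        if password_ok:
            self._last_auth = time.time()

    def authorize_purchase(self) -> bool:
        """Allow a purchase without a new password prompt inside the grace window."""
        return (self._last_auth is not None
                and time.time() - self._last_auth < GRACE_PERIOD_SECONDS)

# After a parent enters the password once, any purchase attempted within the next
# 15 minutes -- including one made by a child handed the device -- goes through.
session = PurchaseSession()
session.authenticate(password_ok=True)
print(session.authorize_purchase())  # True, with no further prompt
```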

These facts lead one to ask: how did this situation arise? Apple is reputed to be a customer-friendly company and, admittedly, had full knowledge of all of the features that drew the complaint from the FTC (all such applications are examined and approved for their content before entering the Apple App Store). So what accounts for such a widespread, unfair practice occurring for such an extended period of time without interruption?

In our opinion, in-app purchases are just one technological innovation to come along and show that the FTC’s enforcement mechanism is unable to keep up with the pace of technology in the internet era [3].

This week’s readings cover several examples illustrating the incredible lag between an illegal practice and the FTC’s ability to force an end to such practices. (As a procedural note, one should know that when the FTC pursues organizations that violate its policies on ‘deceptive’ or ‘unfair’ practices, a ‘complaint’ is issued by the FTC commission detailing what violations have occurred.)

In its complaint issued against Facebook in July of 2012, the commissioners highlight eight different counts of violation, which include misleading users about the terms of its privacy policy, falsely representing site features, providing information to third-party advertisers despite pledges to the contrary, and verifying Facebook applications as secure despite having performed no such verification. In its complaint against Google’s ‘Buzz’ product in 2010, the FTC cites the company for using registered Gmail users’ information to populate the new social networking service without seeking consumers’ consent. But in each of these cases, the FTC’s complaint was issued long after the violations had occurred; in the Facebook complaint, for instance, the FTC mentions violations that began as early as 2007.

This delay seems partly due to the rules that direct the FTC’s investigation and enforcement actions. For instance, even after all of the proper steps have been taken, including an investigation and a consensus among the commissioners that wrongdoing has occurred, the commission’s findings only become binding 60 days after their issuance, unless the order is stayed by a court. Furthermore, a respondent who ignores an FTC commission order is not criminally liable. As the FTC states on its website: “If a respondent violates a final order, it is liable for a civil penalty of up to $16,000 for each violation. The penalty is assessed by a district court in a suit brought to enforce the Commission’s order.” Even in a situation of clear wrongdoing in which the FTC has ruled, a court must still intervene to provide injunctive relief and assess damages (keep in mind that each of these actions will take days, and possibly years). And what is $16,000 in an era of multi-billion-dollar companies, where a single day can drive millions of dollars in online purchases?

Set up in 1914, the FTC’s role was to prevent unfair methods of competition in commerce as part of the battle to “bust the trusts.” To protect consumers from unfair practices and deception, it has been trying to write rules telling businesses how to deal with their customers on an industry-by-industry basis. But continuously evolving technologies such as mobile devices, online surveillance systems, automated healthcare systems, and networked and connected cars, houses, and locks bring new potential threats to the privacy and security of consumers. Today new threats emerge faster than ever, and hence pose a great challenge for the FTC to effectively regulate these industries in a timely manner.

Even policies that apply in the case of online services such as Facebook and Google may not be relevant for connected devices and the internet of things. Furthermore, it seems that the delays that have been built into the FTC’s enforcement actions mean even if new rules could be conceived of in time, they would still take months to take effect [3]. Months may not have mattered in the era of television, but in the era of click-commerce, such a delay makes current forms of FTC policing completely ineffective.

As new, connected technologies integrate more and more into our daily lives, there is an increased threat of attack on consumers, especially in the area of privacy. These threats deserve to be faced with the same rigor and urgency with which the government tackles them in the physical realm. In a technological society, justice must be imbued with the ability to keep pace with new marketplaces and the threats they pose to consumers.

Can the FTC, as it currently exists, possibly keep up with the pace of new kinds of deceptive practices in evolving technologies? Can its enforcement and investigative framework be updated to respond swiftly to illegal practices? Or does another body, operating under a different framework, need to be conceived of to face these challenges? Such are the questions we will need to answer if our society is to be able to buy, trade, and live safely in the internet era.

[1] http://www.theverge.com/2014/1/15/5311364/apple-settles-with-ftc-over-in-app-purchases-tim-cook-says

[2] http://www.ftc.gov/sites/default/files/documents/cases/140115applecmpt.pdf

[3] http://www.ftc.gov/about-ftc/what-we-do/enforcement-authority

Readings:

http://blogs.ischool.berkeley.edu/i205s11/files/2011/04/FTC-POLICY-STATEMENT-ON-UNFAIRNESS-1.pdf

http://blogs.ischool.berkeley.edu/i205s11/files/2011/04/FTC-POLICY-STATEMENT-ON-DECEPTION-1.pdf

http://www.ftc.gov/sites/default/files/documents/cases/2012/08/120810facebookcmpt.pdf

http://www.ftc.gov/sites/default/files/documents/cases/2011/03/110330googlebuzzagreeorder.pdf

http://www.ftc.gov/sites/default/files/documents/cases/2011/03/110330googlebuzzcmpt.pdf

http://www.ftc.gov/sites/default/files/documents/cases/2011/03/110330googlebuzzexhibit.pdf


Interconnectivity and Lack of Transparency

iBeacon is a new location-based technology that Apple recently announced for iOS. Instead of relying on GPS data, iBeacon uses a Bluetooth low energy signal to identify nearby iPhone, iPad and iPod Touch devices with a high degree of precision [1]. Several retail stores, including Apple and Macy’s, have started using the technology to alert customers to special deals on merchandise in their vicinity through beacons located throughout the store [2]. This new way to track iPhone users inside buildings raises serious concerns about consumer privacy and about how well users understand the possible ramifications of setting this micro-location tracker to “on” or “off”. The Consumer Privacy Bill of Rights, released last February, with its principles of Individual Control, Transparency, Respect for Context, and Focused Collection, not only dusts off familiar concerns about how, when, and to what extent companies like Apple, Facebook, and Google might collect personal data, but also, most remarkably for our purposes, raises questions about the roles, rights, and responsibilities that third-party companies have in the use of consumers’ personal data.
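To make the mechanism concrete: a beacon broadcasts only an identifier (a UUID plus “major” and “minor” values), and it is the retailer’s app that maps the identifier to a department and an offer, and that can log the sighting. The layout, UUID, and offers in the sketch below are invented for illustration; this is not Apple’s or any retailer’s actual configuration.

```python
# Hypothetical mapping from beacon identifiers to in-store zones and promotions.
BEACON_ZONES = {
    # (uuid, major, minor) -> (department, promotion)
    ("f7826da6-4fa2-4e98-8024-bc5b71e0893e", 1, 10): ("cosmetics", "15% off fragrances today"),
    ("f7826da6-4fa2-4e98-8024-bc5b71e0893e", 1, 22): ("menswear", "Buy one tie, get one free"),
}

def handle_beacon_sighting(uuid: str, major: int, minor: int):
    """Return the promotion to show when the phone ranges a known beacon.

    The same sighting also tells the retailer which department the customer
    is standing in, and when -- which is where the privacy concern arises.
    """
    return BEACON_ZONES.get((uuid.lower(), major, minor))

print(handle_beacon_sighting("F7826DA6-4FA2-4E98-8024-BC5B71E0893E", 1, 10))
# -> ('cosmetics', '15% off fragrances today')
```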

In the case of iBeacon, Apple clearly states that allowing third-party apps and websites to access your device’s location puts you under the third party’s privacy policy and terms of use, rendering Apple neither responsible nor liable for how data are collected or shared. But apps often connect their data to other third-party apps, making it even more difficult for consumers to understand what kind of data they are sharing, and with whom. The privacy policy for a shopping assistant app called inMarket, for example, warns users that, by downloading the app, they allow third parties to collect many types of data about them: “They may use Tracking Technologies in connection with our Service, which may include the collection of information about your online activities over time and across third party sites or services. Their privacy policies, not ours, govern their practices.” [3] The assiduous inMarket user might notice that the company considers Unique Device Identifiers to be “non-personally identifying information” and asserts that “If we de-identify data about you, it is not treated as Personal Information by us, and we may share it with others freely.” While inMarket may distinguish between personal and non-personal data in its privacy policy, it undermines the Consumer Privacy Bill of Rights’ Transparency principle by making this information nearly impossible to find (especially on the small screen of a mobile device) and so vague that users are left with a less than meaningful understanding of the privacy risks and options associated with their use of the product.

Issues of companies leaving their users in the dark about what data collection contexts they are in were raised again in late 2013, when Google rolled out a new feature for its advertising system called “Shared Endorsements,” which can put users’ names, photos, and Google+ profiles next to advertisements for companies they have endorsed, for each user’s social network to see. [4] Such a personal recommendation system for advertising raises several questions about the Consumer Privacy Bill of Rights, namely its Individual Control, Respect for Context, Transparency, and Focused Collection principles. Google has apparently taken a cue from Facebook, which, at the time of the Shared Endorsements rollout, had just been sued for $20 million for not allowing users to opt out of its similar system, “sponsored stories,” and has given users an option to opt out of the service. Still, the implications of Google’s implementation of personal recommendation advertising differ from those of other social networks. As one news source notes, for example, the victories for consumer privacy gained by the Shared Endorsements opt-out option “may not extend far beyond that,” as, technically, users are opting out of having their photo shown in advertisements, not necessarily in other Google products. [5] In fact, it is due to the diversity, interconnectedness, and sheer number of Google products that the company is in more of a position than Facebook, for example, to redefine and potentially violate the Consumer Privacy Bill of Rights’ principles of Respect for Context and Focused Collection, which qualify the appropriate context for, and amount of, data that users can expect companies to collect, use, and disclose.

References:

[1] http://support.apple.com/kb/HT6048
[2] http://wallstcheatsheet.com/stocks/is-apples-ibeacon-technology-a-threat-to-consumer-privacy.html/?a=viewall
[3] http://www.inmarket.com/privacy_policy.html
[4] http://www.businessweek.com/articles/2013-10-11/google-plans-to-include-users-faces-in-ads
[5] http://abcnews.go.com/Technology/google-privacy-terms-conditions-opt/story?id=20563128
[6] http://www.dailydot.com/lifestyle/google-new-ads-profile-face-opt-out/

An Observation of the Observers: Unmanned Aircraft and Privacy

Last November, Senator Ed Markey of Massachusetts introduced the Drone Aircraft Privacy and Transparency Act, which would amend the FAA Modernization and Reform Act of 2012. The earlier act supplied a set of parameters for the operation of unmanned aircraft in US airspace; in light of US airspace opening to commercial unmanned aircraft systems (UAS) in 2015, the amendment would establish additional federal standards to preserve individual privacy and keep the public informed about the development of UAS. [1][10] Among the stipulations proposed in the Drone Aircraft Privacy and Transparency Act are explicit disclosure of data collection practices, requirements for the public accessibility of UAS operation information, procedures for obtaining warrants for search with concurrent prohibitions on the sharing of information, and regulatory oversight from the appropriate branches of the federal government. The legislation submitted by Senator Markey serves to promote greater transparency in unmanned aircraft surveillance, but the actual privacy rule adopted by the FAA is less directly engaged with privacy issues.

At present, the Federal Aviation Administration (FAA) entrusts the enforcement of privacy policies to the operators at UAS test sites. [2] The agency contends that managing extensive privacy protections is outside its regulatory purview, and given the many nuanced, and at times contradictory, concepts of privacy, the agency’s decision to abstain from unilaterally overseeing delineations of privacy necessitates clearer boundaries on the notion. Daniel Solove discusses some of these myriad interpretations of privacy that have been adopted and debated. Solove’s “Conceptualizing Privacy” reveals a web of interrelated constructs such as confidentiality, accessibility, and secrecy. [3] Florida v. Riley, 488 U.S. 445 (1989), cited by Solove and directly relevant to the UAS secrecy issues, displayed the Court’s ill-defined notion of infringement.[4] In that ruling, the Court held that surveillance from the air was not subject to Fourth Amendment protections because navigable airspace is considered a public vantage point. Over two decades later, the sophistication of observation technology and the ability of craft to stay aloft for 16 to 24 hours weaken the Court’s assertion of contextual equivalence.[5]

Unmanned aerial vehicles do not clearly offer choice in the sense of Nissenbaum’s premise of “informed consent”: detailed information about surveillance activities will not be made available to the public via an FAA database of test-site operators. [6][2] This is also evident in the FAA’s current policies, where (a) users are not directly told what the privacy requirements mean, and (b) the effect of existing social roles is not part of the overarching contextual inquiry. For domestic drones, the constraints on streams of information are a combined function of technical possibility and the various information flows in different realms (sensor data, cloud computing, archived data, etc.). Another concern with automated data collection is the temporal nature of the data: if real-time data are archived, they can later be used for discriminatory targeting, especially if the data collected by intelligent drones are combined with advanced correlatives and fed to a data mining platform. [11] Circumspection and cooperation on the part of users might be useful values when they are integrated into an umbrella data privacy framework. For example, as the Court observed in Katz v. United States, 389 U.S. 347, 351 (1967): “What a person knowingly exposes to the public, even in his own home or office, is not a subject of Fourth Amendment protection”.[3] These observations must be re-evaluated in the new light of changing social norms and a pervasive technology environment in order to codify effective privacy laws for a surveillance state.

Given the complexity of the aforementioned privacy concerns and next year’s advancement of commercial unmanned aircraft systems in the US, it is vital that an approach to surveillance be tailored to monitor the activities of drones and drone operators. The far-reaching extent of the current capabilities of the medium must be taken into account. The core principles of privacy could be partially addressed by applying the FTC Fair Information Practice Principles (FIPPs) [7], in particular the guidelines on Notice/Awareness, Choice/Consent, and Access/Participation. Recent FAA actions, such as positioning UAS test sites as public institutions, are foundational steps toward potent privacy laws.[8] Finally, the Drone Aircraft Privacy and Transparency Act introduced by Senator Markey refers to the OECD’s Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, citing the principles listed in Part Two of the guidelines, which cover the national application of collection limitation, data quality, purpose specification, and use limitation for personal data. [9] Recommended practices as enumerated in the FIPPs and the OECD guidelines are bolstered when written into legislation, and it is in the interest of government to address privacy-compromising practices with rigorous regulation and oversight.


References:
[1] http://www.markey.senate.gov/documents/MRW13745.pdf
[2] http://www.faa.gov/about/initiatives/uas/media/UAS_privacy_requirements.pdf
[3] Daniel J. Solove, Conceptualizing Privacy, California Law Review, Vol. 90
[4] http://scholar.google.com/scholar_case?case=15702097135289839333&hl=en&as_sdt=6&as_vis=1&oi=scholarr
[5] https://www.eff.org/deeplinks/2012/12/newly-released-drone-records-reveal-extensive-military-flights-us
[6] Helen Nissenbaum, “A Contextual Approach to Privacy Online,” Daedalus 140 (4), Fall 2011
[7] http://www.dhs.gov/xlibrary/assets/privacy/privacy_policyguide_2008-01.pdf (DHS adoption)
[8] http://www.faa.gov/news/updates/?newsId=75399
[9] http://www.oecd.org/internet/interneteconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm
[10] https://www.eff.org/deeplinks/2013/12/faa-creates-thin-privacy-guidelines-nations-first-domestic-drone-test-sites
[11] https://www.aclu.org/files/assets/protectingprivacyfromaerialsurveillance.pdf

First Amendment Rights and the Ban on FCC’s Net Neutrality

With the public’s increasing reliance on the Internet as a communication channel, there has long been debate over whether the government should treat Internet Service Providers (ISPs) as common carriers, as it does telecommunications services. This would treat the Internet as a public utility and thus have it regulated by the FCC to ensure fair pricing and access. In 2010, the FCC adopted a rule that required ISPs to treat all Internet traffic equally, also known as “net neutrality”. However, on January 14, a federal court struck down the FCC’s net neutrality rules. The basis for the decision is that the FCC cannot regulate ISPs as common carriers because the FCC does not currently classify ISPs as such. This ruling furthers the prevailing view of ISPs as private entities. But if ISPs are considered private businesses, how do First Amendment free expression rights come into play on the Internet? Do ISPs, as private businesses offering a paid service, have the right to suppress free speech or block specific content and sites from users?

Similar questions about free speech on private property have come up in the past in cases like Lloyd v. Tanner. In Lloyd v. Tanner, the Court ruled that private entities are permitted to regulate speech on their own private property as long as the property is not providing a public service that a government would otherwise have provided. Here the Court ruled in favor of protecting private property rights under the Fifth Amendment. In cases where expressive conduct occurs on private property that carries a public nature (like shopping plazas), the Court is always treading a middle ground to balance First Amendment rights and Fifth Amendment rights. The test to apply from Lloyd v. Tanner in this case is whether or not ISPs provide a service to the public for general purposes. The tricky issue here is that ISPs provide access to a wide variety of content, including both private and public services. E-commerce websites provide a service that is very similar to a shopping mall: the site has a specially designated purpose of inviting users to come and purchase products. On the other hand, governmental websites provide important public services such as submitting tax forms online and signing up for healthcare plans. If ISPs chose to slow traffic to these sites or block their contents, this would greatly hinder the government’s functions. Because ISPs provide access to a variety of content that is both private and public, and because of the public’s increasing reliance on the Internet as a communication channel, ISPs should be classified as a telecommunications service and regulated by the FCC to ensure open access and fair prices. Under the Lloyd v. Tanner test, ISPs provide a significant enough public service to be classified as common carriers and be required to uphold First Amendment rights.

In Verizon’s appellate brief, it argues that the Open Internet Order deprives broadband network owners like Verizon of their First Amendment rights “by stripping them of control over the transmission of speech on their networks.” In this latest ruling, however, the Court did not rule on Verizon’s First Amendment challenge because the disposition of the case (based on the FCC’s lack of authority to impose net neutrality policies on information service providers) rendered it unnecessary. Therefore, it remains an open academic question whether information service providers like Verizon have a First Amendment right to promote and publish content and to exercise editorial discretion, and whether authorities like the FCC meet the tests to restrict that right should they choose to. In its appellate brief, Verizon maintains that in restricting broadband network providers’ manner of transmitting speech, the Open Internet Order must be subjected to the O’Brien test; that is, the restriction must be content-neutral, related to a substantial government interest, and narrowly tailored. Verizon argues that the FCC fails to meet its burden. In its respondent’s brief, the FCC argues that broadband providers are not entitled to First Amendment protections because they are merely a “conduit of speech” for others.

Sources:

http://online.wsj.com/news/articles/SB10001424052702304049704579320500441593462?mg=reno64-wsj&url=http%3A%2F%2Fonline.wsj.com%2Farticle%2FSB10001424052702304049704579320500441593462.html

http://www.latimes.com/business/la-fi-fcc-net-neutrality-20140115,0,7350475.story#axzz2qTjbZ4hy

http://en.wikipedia.org/wiki/Net_neutrality_in_the_United_States


23andMe and First Amendment

23andMe is a personal genetics company that has been providing customers with information about ancestry and inherited traits based on their DNA samples since 2006. It discontinued part of its service as of November 22, 2013, after receiving a warning letter from the US Food and Drug Administration (FDA). In this letter [1], the FDA stated that the Personal Genome Service (PGS) 23andMe offers is considered a device, and expressed concern that the use of 23andMe analysis results may put customers at risk. The FDA ordered 23andMe to “discontinue marketing the PGS” and inform the agency of its action plan. Two weeks after receiving the warning letter, 23andMe released a statement [2] on its website announcing that it will not provide health-related results to new customers who purchased kits on or after the date it received the FDA warning letter.

There are a few interesting aspects of this situation that could be addressed in a court case. The first is whether the data 23andMe collected is protected under the First Amendment. The second is the FDA’s attempt to regulate indirect information. And the third is whether there is compelling data to support the FDA’s claim that people “may begin to self-manage their treatments through dose changes or even abandon certain therapies depending on the outcome of the assessment.” [1]

In relation to the First Amendment question, Sorrell v. IMS Health (2011) [3] has several similarities to the 23andMe situation. Sorrell ruled that, after data had been legally collected by a private entity (prescription data collected by pharmacies in the Sorrell case), the data was protected speech under the First Amendment. Therefore the private entity had the right to communicate that data to its customers unless the government could prove a compelling interest in stopping that communication. It is important to point out that the burden is on the government to prove that there is a compelling reason for the private entity’s right to free speech to be restricted. The similarities between the Sorrell case and 23andMe’s situation are striking. 23andMe legally collects and analyzes genetic data. Once the data is compiled, it should be considered speech and protected by the First Amendment.

In relation to the FDA’s attempts to regulate indirect information, the court in United States v. Caronia (2012) [4] recently ruled on a related situation. It found that a pharmaceutical salesman was allowed to discuss off-label uses of the drugs he was selling with potential clients, which was against FDA regulations at the time, because his speech about the alternate uses of the drugs was protected by the First Amendment. The court found that the communication of this medical information, which had not been directly approved by the FDA, was still protected speech. We can apply the same logic to the medical information being provided by 23andMe to its customers.

Under the First Amendment, the government may only regulate speech “when it has a compelling reason and does so in a content-neutral way.” In the case of 23andMe, Green and Farahany [5] argue that “the FDA’s precautionary approach may pose a greater threat to consumer health than the harms that it seeks to prevent.” After reviewing several surveys of people who received similar health-related data, they found that only an insignificant fraction of participants engaged in potentially dangerous behaviors, such as consuming prescription drugs without medical consultation, after receiving this information. Nor did they find any remarkable increase in participants’ levels of distress or anxiety. These findings suggest that the FDA does not have sufficient data to support its claim about the potential risk of patients misusing their health data.

[1] http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/2013/ucm376296.htm

[2] http://mediacenter.23andme.com/press-releases/23andme-inc-provides-update-on-fda-regulatory-review/

[3] http://www.supremecourt.gov/opinions/10pdf/10-779.pdf

[4] http://www.reedsmith.com/files/uploads/DrugDeviceLawBlog/Caronia.pdf

[5] http://www.nature.com/news/regulation-the-fda-is-overcautious-on-consumer-genomics-1.14527

Snowden, Journalists and the First Amendment

The 2013 disclosure of secret NSA documents to the media by Edward Snowden has renewed questions about First Amendment protections for reporting on leaked information. As explained in the Huffington Post, journalists are protected when publishing stories based on leaked documents, but the source of the leak is not. For someone like Snowden, who illegally obtained and shared classified information with the press, anonymity is often the best form of protection. The statutes and court decisions that protect journalists’ confidential sources say that investigators must “exhaust all alternative means” of finding a leaker before forcing a journalist to name a source. Journalists are also (generally) ethically obligated to protect their sources, and would thus have to resist investigators’ demands. So those statutes make sense in saving investigators and journalists some time and hassle. Since Snowden is not an anonymous source, that aspect doesn’t quite apply, but journalists are still protected in publishing the leaked material.

One of the landmark cases protecting the freedom of the press to publish classified information is New York Times Co. v. United States (1971). President Richard Nixon had exercised executive authority to stop the New York Times from printing the classified Pentagon Papers, and the question came to the court of whether or not the government’s interest in protecting the classified information superseded First Amendment protections of the free press. The court ruled in favor of the New York Times, though there was not consensus among the justices on the reasoning behind it, and many concurring and dissenting opinions were published. Some justices felt that the case was decidable from the absolute superiority of the First Amendment to government interests of security, while others felt that “an enlightened citizenry” is the only balancing power against relatively unchecked executive powers.

While the New York Times v. United States decision weighed First Amendment protections for the freedom of the press against national security concerns, it did not delve into how the information in question was obtained by the press. In Bartnicki v. Vopper (2001), the question arose whether First Amendment protection is afforded to speech that discloses the contents of an illegally intercepted communication. In that case, a third party intercepted a cell phone conversation between the president of a teachers union and Bartnicki, a negotiator for that union. A recording of the conversation was delivered to local radio stations, and was played on air by Vopper, a commentator critical of the union negotiations. The court decided that First Amendment protection did extend to Vopper’s playing of the recording even though it was illegally obtained by a third party, concluding that “a stranger’s illegal conduct does not suffice to remove the First Amendment shield from speech about a matter of public concern.” The fact that the disclosed information was of public interest was critical in both the New York Times case and the Bartnicki case.

Even with this case history protecting journalists’ disclosure of classified information, there are always concerns about whether the current administration will try to abridge freedom of the press in some way. According to the First Amendment Center, President Obama issued a statement saying that journalists should not be prosecuted for “doing their job,” and that questions about the balance between protecting classified material and ensuring a free press are entirely appropriate. This was in response to questions raised about whether it is appropriate for a journalist who publishes leaked information to be “treated as a potential criminal.” The question was related to two cases: one in which the Associated Press had its phone records secretly subpoenaed, and another in which prosecutors obtained a search warrant for the private emails of Rosen, a Fox News reporter, as they sought to identify his source for a story on North Korea. In the Rosen case, an official was indicted for revealing classified information, but Rosen was not charged with anything. While these incidents are troubling, perhaps they should be expected, as the debate between freedom of the press and the secrecy of national security information is still being argued.