Apple vs. Facebook: Whose Right Is Your Data?
by Dan Ortiz | March 12, 2021


Photo by Markus Spiske from Pexels

Apple and Facebook are squaring off in the public arena over user privacy. With iOS 14.5, across all devices, app tracking will transition from opt-out to opt-in, and developers will be required to justify each third-party tracking request (App Tracking Transparency, User Privacy, App Privacy). As much as we worry that an app may spy on us through our camera or sell our location data, this permission is meant to mitigate the concern that an app is following us throughout our digital experience and logging our interactions with other apps. Apple’s goal is to better inform users about the information each app collects and to give users more control over their data. The aim is not to end user tracking or personalized advertisements, but to increase transparency and obtain users’ consent first. People who prefer highly targeted ads can accept the tracking request; those who find it creepy and swear Facebook is listening in on their conversations can deny it. Everyone gets what they want.

In response to the upcoming iOS update, Facebook launched a very loud, very public campaign against the new policy, claiming it will financially damage small businesses by limiting the effectiveness of personalized advertisements. At the core of this disagreement is who owns the data. Facebook phrases it like this: “Apple’s policy could limit your ability to use your own data to show personalized ads to people who are likely to be interested in your business.” Clearly, Apple views control of personal data as the right of the individual user, while Facebook believes it controls that right.

Facebook argues that giving users the ability to say no to cross-application tracking will hurt small businesses’ ability to serve personalized ads to potential customers, thus increasing their marketing costs. Facebook has taken out full-page ads and launched a campaign (Speak Up For Small Business). Even though iOS holds only about 17% of the global market, it accounts for roughly 33% of the US population, and the average income of an iOS user tends to be 40% higher than that of an Android user. iOS users are a significant market in the USA and control a significant share of its disposable income.


Photo by Anton from Pexels

However, Facebook’s argument, setting aside concerns about how it calculated the impact on small businesses, is disingenuous. Its campaign portrays this iOS update as the death of personalized ads and the death of the small business. In reality, small businesses can still target advertisements using all the data that has been uploaded to Facebook from our phones directly (first-party data). Small businesses can still use information about us: our hometown, our interests, our groups, all associated with our Facebook profile. What is changing is Facebook’s ability to track iOS users across multiple applications and browsers on the device itself. It is disingenuous to claim that user-generated data, on applications not owned by Facebook, is the property of another unrelated small business.

The landscape of privacy in the digital age is shifting. Apple’s policy of championing individual choice when it comes to sharing personal data, although still a notice-and-consent model, is a step in the right direction. It informs users and asks for consent directly, rather than burying it in a long user agreement, which aligns with the GDPR’s requirements for lawful consent requests. The collection and misuse of user data is a growing concern and a topic of increasing debate, and landmark legislation like CalOPPA and GDPR is steadily redefining the privacy rights of the individual. Instead of embracing this changing landscape, Facebook chose to stand in opposition to Apple’s app-tracking feature rather than convince us, the users, why we should allow Facebook to track us across the internet.

This conflict has exposed the real question consumers will face when iOS is updated. When the request to track pops up as the Facebook app launches, what will they do? Will they allow tracking and vindicate Facebook’s position, or will they deny the request, challenging Facebook’s business model of tracking people all across the web?

Better yet, what decision will you make?

Google, the Insidious Steward of Memories
by Laura Treider | March 12, 2021

All those cookies are bad for you
If you are a regular reader of this blog, or if you take an interest in the use of data in our modern economy, you are aware that most companies you interact with digitally, online or via apps on your phone, track as much of your activity as possible. These companies frequently sell your data to brokers or, while not technically selling it, work with advertisers to monetize your data by tailoring the ads they serve you. Proponents of this scheme argue it’s good for consumers and everyone is happier seeing ads tailored to their interests. Privacy advocates are more skeptical and argue you should delete your data, both to avoid staying in a filter bubble and to prevent other uses of your data that might not be to your benefit. Choosing to err on the side of privacy, I decided to see what data Google has about me and how easy it is to delete it.

How to view and download your data
If you’re logged in to a Chrome browser, seeing what data Google has about you is fairly simple; the details of how to accomplish it are here. Google has a “Data & personalization” landing page that lets you view your tracked activity and control “personalization settings”, Google’s euphemism for how much it tracks your activity. From this landing page, I was able to click through to Google Takeout, their cheekily named platform for downloading all the information linked to your account. I chose to download everything possible. It took 10 hours before my download was ready, and I received an email with 23 links: 22 of them were 2GB zipped files, plus a 6GB .mbox mail file containing emails going back to 2006. (I’m not an email deleter.)

What data does Google have about me?
While I was going through the downloads, I became overwhelmed by the sheer number of folders relating to different products, 40 in all. After I unzipped the 22 files and put them together on my hard drive, I found that Google includes an archive_browser.html file that helps you navigate the file structure. Google isn’t just holding data about my web browsing activity and search history. I was surprised to learn that I had more than 25,000 photos uploaded to Google’s servers. These weren’t from my Android phone, either. They were from my camera. At some point in my data life, I must have chosen to upload 25,000 photos to the cloud, but I don’t remember having done that. There were also 48 videos: the entirety of the YouTube channel I had set up 15 years ago while living in England and sharing memories of my newborn son with family in the US.

Interestingly, my Ad Settings were not included in my Takeout; I had to navigate to those via the online “Data & personalization” hub. There I was able to see all the things Google thinks I’m interested in. Some of them made sense: “Dogs,” “TV Comedies,” “Computers and Electronics,” and “Cooking and Recipes.” Others were a little more perplexing: “Nagoya,” “Legal Education,” and “Parental Status: Not A Parent.” (Sorry, son, I must not Google how to parent you enough.) As an aside, a great icebreaker for virtual dates during this pandemic would be to share your ad personalization data with each other.

In addition to the files I had given Google and the ad settings Google was predicting for me, Google also has 96 MB of location data on me, 12 MB of search history spanning the last 18 months, and another 11 MB of browsing history spanning the past 5 months. Here’s where I got distracted: Google gives you your location data in the form of a JSON file. If you want to turn that JSON file into something viewable in map form and you’re programmatically inclined, I recommend this GitHub repo.
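If you’d rather poke at the raw file yourself, the JSON is easy to parse. Here is a minimal Python sketch, assuming the export contains a top-level “locations” array with latitudeE7/longitudeE7 integer fields; file and field names vary between Takeout versions, so check your own export first:

```python
import json

# Minimal sketch: extract (lat, lon) pairs from a Google Takeout location
# history export. Assumes a top-level "locations" array with latitudeE7 /
# longitudeE7 fields; older and newer exports name things differently.
with open("Records.json") as f:
    data = json.load(f)

points = []
for loc in data.get("locations", []):
    if "latitudeE7" in loc and "longitudeE7" in loc:
        # Coordinates are stored as integers scaled by 1e7.
        points.append((loc["latitudeE7"] / 1e7, loc["longitudeE7"] / 1e7))

print(f"{len(points)} location fixes found")

# Dump a lat,lon CSV that mapping tools (e.g., kepler.gl) can ingest directly.
with open("locations.csv", "w") as out:
    out.write("lat,lon\n")
    for lat, lon in points:
        out.write(f"{lat},{lon}\n")
```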

The Google location data rabbit hole
But Google’s online viewer for your location history is engrossing. You land on a map with a bunch of markers representing places you have visited and lived, with some fun badges to encourage you to explore.

I looked at my data from Boise, the previous city I lived in, and randomly clicked dots. Each location came with a flood of memories about when I had gone there and why.

I went back in time to when I knew I had gone on a vacation to Germany. Clicking through the days was like looking at a scrapbook Google assembled for me. One of the days is shown below. I had traveled from our hotel in Munich to the Neuschwanstein and Hohenschwangau castles. My travel path was there, complete with the stop I had made in Trauchgau to air up a leaky tire. Not only was my route there, but I could also dig through the photos I took on my mobile phone at each stop. What a gift!

The verdict:
According to a recent US Census survey, only 89% of households have a computer. That’s potentially more than 30 million people who may have smartphones but no computer. So not all users of Google products have the luxury of Takeout-ing their data from cloud storage to home storage. These people will be less likely to delete their data because there’s nowhere else for it to live. By positioning itself as a simple and generous cloud storage provider for the average citizen, Google has trained us to let it be our data caretaker.
Armed with the knowledge of the extent of the data Google has about me, I was ready to decide whether to wipe the slate clean and remove my digital traces from their servers. Reader, I was too weak. I couldn’t do it. Whoever designed the interface for viewing your Google data online knows exactly what strings to pull to make my nostalgia kick in and make me want to save this treasure trove of memories. Google doesn’t just know everything about my digital life. It knows where I go and when. I am trussed up like a ham and served to advertising partners, and I go to the feast willingly.

More reading:
https://www.fastcompany.com/90310803/here-are-the-data-brokers-quietly-buying-and-selling-your-personal-information
https://www.eff.org/deeplinks/2020/03/Google-says-it-doesnt-sell-your-data-heres-how-company-shares-monetizes-and
How to see your Google data: https://support.google.com/accounts/answer/162744?hl=en

Apple’s Privacy War: Is it Good or Bad? You decide.
by Darian Worley | March 5, 2021

In an ongoing battle over tracking and the tools used to identify consumers’ buying habits, Apple has decided to take a different approach and limit the ability of many companies to track your data without your permission. In this multi-billion-dollar-a-year industry, Apple has indicated that it will release what it calls App Tracking Transparency (ATT) across iOS, iPadOS, and tvOS. The feature is expected to launch in early spring 2021 to combat the digital surveillance economy.

How do advertisers track me anyway?

People go about their lives every day without realizing just how much data internet giants have collected. When iPhone users use an app to look at the weather, a Facebook post or another app on your iPhone, advertisers use an identifier called Identifier for Advertisers (IDFA) to track the user’s online behavior across multiple apps. This IDFA is a random device identifier assigned by Apple to a user’s device. It is used by companies to determine your location, what websites you visit, and other pertinent info without obtaining access to a user’s personal information. Companies use this information to sell marketing adds to individuals they are able to track, thus monetizing the data that they collected based on your own individual habits. Interestingly, Apple created the IDFA as a result of being sued for sharing user information without limitation via the UDID (Unique Device Identifier).
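As a toy illustration of why a shared identifier matters (a sketch of the concept, not any real ad network’s API; the app names and events are invented), consider how trivially activity from unrelated apps can be joined once every app reports the same IDFA:

```python
from collections import defaultdict

# Each app independently reports events tagged with the device's IDFA.
# Because the identifier is identical across apps, an ad network can join
# the separate streams into a single behavioral profile.
events = [
    {"idfa": "A1B2-C3D4", "app": "WeatherNow", "event": "viewed_forecast"},
    {"idfa": "A1B2-C3D4", "app": "ShopFast",   "event": "searched_umbrellas"},
    {"idfa": "A1B2-C3D4", "app": "SocialFeed", "event": "clicked_umbrella_ad"},
    {"idfa": "9F8E-7D6C", "app": "WeatherNow", "event": "viewed_forecast"},
]

profiles = defaultdict(list)
for e in events:
    profiles[e["idfa"]].append((e["app"], e["event"]))

for idfa, activity in profiles.items():
    print(idfa, "->", activity)

# Under ATT, an app that is denied tracking permission receives an all-zeros
# IDFA, which breaks this cross-app join for that user.
```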

Why do I need ATT if Apple already has App Privacy Labels in the App Store?
Currently, Apple has what it calls Privacy Nutrition Labels in the Apple App Store. These labels give iPhone users a snippet of what data apps collect and how they use it. However, the labels are currently based on self-reporting by app developers. There is no verification by Apple or any other source to determine whether an app is misrepresenting its use of your data. Users should use caution when reviewing these labels, as they may not be able to trust what the privacy label in the App Store says. Many apps in the store claim they are not sharing your data, but they could be.

Aren’t There Privacy Frameworks and Privacy Laws to Protect Me?
Many users are concerned about their privacy. Privacy frameworks and laws such as the Belmont principles, CalOPPA, CCPA, and the FTC Act were established to protect an individual’s rights. While these frameworks and laws cover many areas of privacy, two core tenets are consumer choice and greater transparency. Due to the explosion of big data and online apps, many app developers and internet companies have skirted these laws and frameworks. In a limited, unscientific study where users were specifically asked to read a privacy policy for a specific company, users indicated that the privacy policy was “too long” and said that they “assume that the privacy policy has good intentions.”

Potential Negative Implications of Apple’s ATT

In the tech giant war, Facebook took out a full-page ad indicating that Apple’s ATT would harm businesses. In the article “New Apple Privacy Features Will Be Hard on Small Businesses: Curtailing the collection of user data may mean big spending for small developers,” the author does not share any data on how small companies will be impacted, other than stating that businesses smaller than Facebook and Google have smaller budgets and need to gather information to target their users. Further research did not yield any additional insights into how small firms would be hurt. The bigger story here is that Facebook has taken a stand against this privacy policy because Facebook stands to lose millions from its billion-dollar digital advertising revenue stream.

In summary
We’ve been told that to get better services from internet companies, we need to give up more of our data. While this may be true, consumers should have the right to choose. One can’t be sure of Apple’s motives for limiting user tracking on the iPhone, but the policy is already yielding tangible results: LinkedIn and Google have indicated that they will stop collecting IDFA data on iOS. This seems to be a welcome corrective to the wild, wild west of collecting, using, and monetizing one’s data without permission, and Apple’s policy seems to strike the right balance by giving users the choice to determine how their data is used by individual apps. Ultimately, as an iPhone user, you get to decide.

Can a Privacy Badge drive consumer confidence on privacy policy?
by Dinesh Achuthan | March 5, 2021

As a user and consumer, I have always wondered what is in the privacy policy, or even the terms and conditions, that I blindly scroll through and accept. I talked with a few of my colleagues and friends, and I was not surprised to hear that they do the same. When the privacy policy or terms and conditions appear, most of us tend to scroll past and accept, knowing there is no option other than accepting if we want to use the application. In the same line of thought, the visual display of sites’ and apps’ security was enhanced a couple of decades ago: we started to see trusted badges and verified-by-third-party badges that provide a quick impression of an app’s or site’s security. A company called TRUSTe pioneered the idea of providing badges based on privacy policy two decades ago, but it has since been acquired, and the idea of the badge has shifted toward driving more e-commerce business rather than establishing the intended privacy-policy trust with end consumers.

My idea of a privacy badge originated from these security badges, payment partner badges, and other badges meant to instill confidence and trust in end consumers. Why not provide a privacy badge, or even a terms-and-conditions badge, either through a third-party service or via a self-assessment framework, for any site or mobile app? Would this in any shape or form help the end consumer? Could a company or industry use such a framework to assess itself and improve its privacy policy? As a user, would it give me some sense of security to see a badge instead of scrolling through pages and pages of privacy documentation? After thinking it through and talking with a few of my colleagues, I started to think about how to create this privacy self-assessment framework through a methodical thought process and establish a scoring template to self-determine a privacy badge for any privacy policy. If we had such a thing, what would it look like?

I would like to share my approach with a limited scope and validate whether it works before embarking on a larger scale. So I constrained myself to the US and left out the EU’s GDPR, Germany’s BDSG, and Asian privacy laws. First, I needed to design a privacy assessment framework. What should be in it?

1. I definitely want to capture what an end consumer sees as important for their privacy. How can I get this? I started looking at privacy-related lawsuits from the past decade.
2. I definitely want to capture how a company thinks about user privacy as aligned with its business model. I can get this for any company from its privacy policy.
3. Finally, I want something to map consumer thinking to corporate thinking via what is legally binding: the US privacy laws.

To stitch these three together, I decided to use the three leading academic privacy frameworks (Solove’s Taxonomy, Mulligan et al.’s Analytic, and Nissenbaum’s Contextual Integrity); below is the approach I used.

Assessment Framework Design and validation approach
1. Design privacy categories based on the 3 leading academic privacy frameworks (Privacy Assessment Framework)
2. List the US legal framework under consideration
3. Analyze the top 5-10 privacy lawsuits and map each to the privacy categories it fits.
4. Design a Qualtrics privacy lawsuit questionnaire to get user perspectives on the lawsuit categories
5. Design a Qualtrics privacy baseline questionnaire to get user perspectives on the top 5-10 good privacy policies and the bottom 5-10 bad privacy policies
6. Compute weights for each privacy category with inputs from the Qualtrics surveys. Establish a privacy-score-to-badge matrix.
7. Compute a privacy score with the assessment framework by evaluating 3-5 random privacy policies from the industry. The higher the score, the better the privacy policy and the higher the badge (a minimal scoring sketch appears below).
8. Validate whether the badge fits with leading privacy experts.

Below is a sample view of the privacy-score-to-badge mapping. There are further templates and charts which I omitted from this blog to keep it simple.

Assessment Score, Privacy Badge
0-25, Copper
26-40, Bronze
41-60, Silver
61-80, Gold
81-100, Platinum

Sample view of privacy assessment scoring template
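In lieu of the full template, here is a minimal Python sketch of the scoring mechanics: a weighted combination of per-category ratings mapped onto the badge table above. The category names, weights, and ratings are hypothetical placeholders; in the real framework they would come from the Qualtrics surveys.

```python
# Hypothetical categories and survey-derived weights (placeholders only);
# each weight reflects how much consumers said they care about the category.
WEIGHTS = {
    "collection_transparency": 0.30,
    "third_party_sharing":     0.25,
    "user_control":            0.25,
    "retention_and_security":  0.20,
}

# Score-to-badge thresholds from the mapping table above.
BADGES = [(81, "Platinum"), (61, "Gold"), (41, "Silver"), (26, "Bronze"), (0, "Copper")]

def privacy_score(ratings):
    """Combine 0-100 per-category ratings into a weighted overall score."""
    return sum(WEIGHTS[cat] * ratings[cat] for cat in WEIGHTS)

def badge(score):
    for threshold, name in BADGES:
        if score >= threshold:
            return name
    return "Copper"

# An assessor's hypothetical 0-100 ratings for one privacy policy.
ratings = {
    "collection_transparency": 70,
    "third_party_sharing":     45,
    "user_control":            60,
    "retention_and_security":  80,
}

score = privacy_score(ratings)
print(f"score={score:.0f}, badge={badge(score)}")  # score=63, badge=Gold
```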

Conclusion
I believe this framework can help both consumers and companies. Companies can use it to start self-evaluating their privacy policies and at least get a basic understanding of their score. As a consumer, I can get an approximate handle on a privacy policy based on its score or badge.

 

REFERENCES

https://www.varonis.com/blog/us-privacy-laws/
https://www.trustsignals.com/blog/77-trust-signals-to-increase-your-online-conversion-rate
https://www.trustsignals.com/blog/what-is-a-trust-badge

US Privacy Lawsuits:
● New York Attorney General Letitia James announced her office reached a settlement with Dunkin’ Donuts over the handling of its 2015 data breach affecting approximately 20,000 customers. The settlement includes $650,000 in penalties, along with new requirements for data security.
● U.S. District Judge Charles Kocoras in Chicago denied IBM’s motion to dismiss a case over Illinois’ Biometric Information Privacy Act (BIPA) violations regarding the use of facial images from Flickr, Reuters reports.
● Relatedly, MediaPost reports Amazon and Microsoft are seeking dismissal of Illinois BIPA cases of their own regarding their use of the same images held by IBM.
● Facebook reached a $650 million settlement over facial recognition technology used to tag photos, which stored biometric data (digital scans of users’ faces) without notice or consent, violating Illinois’ BIPA.
● The FTC and the New York Attorney General fined Google and YouTube $170 million for collecting the personal information of children (persistent identifiers), violating COPPA.
● https://github.com/FortAwesome/Font-Awesome/issues/13833 (badge image)
● (As claimed at https://www.trustsignals.com/blog/the-history-of-the-truste-seal-and-why-it-still-has-value: companies who display the TRUSTe Certified Privacy seal have demonstrated that their privacy policies and practices meet the TRUSTe Enterprise Privacy & Data Governance Practices Assessment Criteria. It’s fair to say that TRUSTe is no longer the preeminent trustmark to website visitors; many have never heard of the organization or its history, and many other entities and regulations have stepped forward in the privacy and security space.)

DNA Databases: The Line between Personal Privacy and Public Safety
by Brittney Van Hese | March 5, 2021

Recently, customers of popular ancestry companies such as GEDmatch learned that the DNA data they had submitted to learn about their families was secretly being searched by police to solve crimes. While law enforcement has touted the contribution to putting away some of the vilest criminals, like the Golden State Killer, as a win for society, the revelation that police were searching genealogy profiles without user knowledge has raised questions about the line between consumer privacy and public safety.


Image Source

Genetic genealogy uses the DNA associated with ancestral lineage to establish a family connection between a perpetrator’s sample and the uploader. Then, manually, an analyst builds out a family tree from that connection using public records such as birth certificates, death records, and marriage licenses. The family tree is then used to generate a focused suspect pool, at which point investigative police work takes over to build a case sufficient for arrest.


Image Source

For the Golden State Killer, police obtained the family tree data by acting as a normal user uploading DNA to find a relative, without identifying themselves as police. These approaches have shone a light on the previously unconsidered legal and ethical concerns of police access to consumer data for the public good. Until the Golden State Killer case, GEDmatch was not even aware police were using its services, users were unaware they were cooperating with police, and no regulation existed on the subject.

Now that the discussions have started, two sides of the argument have naturally emerged. Those in law enforcement who believe in the beneficence of their work see little harm in the practice: it is solving terrible crimes that would otherwise be left to turn cold. Additionally, legal non-profits like the DNA Doe Project access genealogy resources to identify John and Jane Doe victims, bringing closure to families.

On the other side of the argument, users voice concerns about consent, constitutionality, and police misconduct. First, users uploading their profiles were not volunteering to be included in a suspect database, and their consent was never given for their data to be searched. Additionally, these searches were conducted without warrants, in conflict with recent Supreme Court precedent on obtaining public database information. Lastly, there are members, like Michael Usry, who were targeted as suspects because their profiles were closely related to the culprit’s family tree, opening the door to police misconduct such as biased efforts to confirm the genealogy results.

In response to the debate, DNA and genealogy companies have altered their privacy policies to try to please both sides by creating an opt-in policy for users. By opting in, users agree to add their profiles to a database that is available to police. However, the glaring concern with this approach is that the opt-in does not actually impact the individual sharing the data; most people who share it do so knowing they have not committed any crimes. The problem is that a person choosing to exercise their personal freedom to opt in and share their data with police is doing so on behalf of distant relatives who may have committed these crimes. This presents not only a moral dilemma in implicating others’ privacy but also ethical pressure around public safety, making it a particularly difficult situation.

Luckily, there is a path forward through legislation. First and foremost, the process still relies on the proper due process of the criminal justice system: a judge must grant a warrant to conduct searches on the databases that have users’ consent. Warrants can be requested for this purpose only if the case is a violent personal crime, such as homicide or rape, and has exhausted all other investigative resources. Most importantly, the scope of genealogy data is still limited by current technology to pointing investigators in a general direction, from which investigators must still rely on evidence-based crime solving to make an arrest. For now, federal regulation of genealogy data usage in crime fighting strikes a sufficient balance between privacy and policing, but this legislation will need to be closely monitored as genealogy technology advances.

References

Akpan, Nsikan. “Genetic Genealogy Can Help Solve Cold Cases. It Can Also Accuse the Wrong Person.” PBS, Public Broadcasting Service, 7 Nov. 2019, www.pbs.org/newshour/science/genetic-genealogy-can-help-solve-cold-cases-it-can-also-accuse-the-wrong-person.

“DNA Databases Are Boon to Police But Menace to Privacy, Critics Say.” DNA Databases Are Boon to Police But Menace to Privacy Critics Say | The Pew Charitable Trusts, www.pewtrusts.org/en/research-and-analysis/blogs/stateline/2020/02/20/dna-databases-are-boon-to-police-but-menace-to-privacy-critics-say.

“DNA Doe Project.” DNA Doe Project Cases, 5 Mar. 2021, dnadoeproject.org/.

Payne, Kate. “Genealogy Websites Help To Solve Crimes, Raise Questions About Ethics.” NPR, NPR, 6 Mar. 2020, www.npr.org/2020/03/06/812789112/genealogy-websites-help-to-solve-crimes-raise-questions-about-ethics.

Schuppe, Jon. “Police Were Cracking Cold Cases with a DNA Website. Then the Fine Print Changed.” NBCNews.com, NBCUniversal News Group, 29 Oct. 2019, www.nbcnews.com/news/us-news/police-were-cracking-cold-cases-dna-website-then-fine-print-n1070901.

Zhang, Sarah. “The Messy Consequences of the Golden State Killer Case.” The Atlantic, Atlantic Media Company, 2 Oct. 2019, www.theatlantic.com/science/archive/2019/10/genetic-genealogy-dna-database-criminal-investigations/599005/.

When is faking it alright?
by Randy Moran | March 5, 2021


Photo by Markus Winkler on Unsplash

The AI news has been littered with deepfake articles over the last couple of years. Some are about using the technology for fun (CNet), some use it to demonstrate technical capability, like the recent Tom Cruise fakes (Piper), and some use it maliciously to harm, sway opinions, or rally opposition (“Malicious use of deepfakes is a threat to democracy everywhere”). All of this points to the fact that AI is just technology, a tool to be used for either good or bad purposes.

The recent announcement of WE-FORGE (Dartmouth College) takes faking in a whole different direction. WE-FORGE can generate fake but realistic documents, not for fun and not quite for malicious reasons, but to obfuscate actual content for the purposes of counterespionage. This AI approach can be used to hide corporate or national security documents within the noise of numerous fake ones. As the announcement points out, this noise tactic was used successfully in WWII to thwart efforts to discover upcoming military maneuvers in Sicily.

The announcement leads to thinking about applying the same idea to obfuscating (hiding) individuals’ activity from tracking. As we have reviewed in the W231 coursework, individuals have little to no control over their data in today’s web, social, and application landscape. We have seen that privacy policies, for the most part, serve and cover the firm more than the individual. True protective legislation is years away, will still be guided by procedure over individual rights, and is likely to be fought hard by highly profitable tech firms. The procedures for controlling your own information are laborious and don’t provide all the controls one would want. The only choices are to live with it so you can use the service, or to stop using the service altogether, limiting one’s ability to connect and participate in the good aspects of the technology.

Helen Nissenbaum, whose privacy framework we saw in week 5 in “A Contextual Approach to Privacy Online” (Nissenbaum 32-48), co-authored with Finn Brunton the book “Obfuscation: A User’s Guide for Privacy and Protest” (Brunton and Nissenbaum). In that book, they define obfuscation as “the deliberate addition of ambiguous, confusing, or misleading information to interfere with surveillance and data collection.” They outline numerous variations for obfuscating your identity: chaffing, location spoofing, disinformation, and so on. As they state in chapter three, “privacy is a multi-faceted concept, and a wide range of structures, mechanisms, rules, and practices are available to produce it and defend it.” There are legal mechanisms, technical solutions, and application options. To these, the chapter argues, an individual may need to add obfuscation: producing noise that looks like normal activity so as to hide the actual activity. It provides an individual way to camouflage activity when other protections fail. It is similar to the WE-FORGE process above, but for individuals.

To enable that strategy, Nissenbaum partnered with Daniel Howe and Vincent Toubiana to develop a tool called “TrackMeNot” (Howe et al.) that puts these ideas into practice. It provides a browser plugin for Google Chrome and Mozilla Firefox that obfuscates your activity by generating random search queries through several search services (Google, Bing, Yahoo, etc.) to hide an individual’s actual search history. One could spend the time to do this manually, but the systematic approach is much more efficient.
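To make the idea concrete, here is a small Python sketch of a TrackMeNot-style noise generator. It is an illustration of the obfuscation strategy, not the extension’s actual code, and the seed terms and endpoints are arbitrary choices:

```python
import random
import time

import requests

# Illustrative decoy-query generator: issue plausible-looking searches at
# human-like random intervals so that real queries hide in the noise.
SEED_TERMS = ["weather radar", "pasta recipes", "used bikes", "python tutorial",
              "flight deals", "garden pests", "local news", "running shoes"]
ENGINES = ["https://www.bing.com/search", "https://duckduckgo.com/"]

def send_decoy_query():
    query = random.choice(SEED_TERMS)
    engine = random.choice(ENGINES)
    resp = requests.get(engine, params={"q": query},
                        headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
    print(f"sent decoy {query!r} to {engine} (HTTP {resp.status_code})")

if __name__ == "__main__":
    for _ in range(5):
        send_decoy_query()
        time.sleep(random.uniform(10, 60))  # irregular, human-like pacing
```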

While legislation may come in time, and individuals may eventually gain control of their data, they can hide their activity now. This is not necessarily going to be sought out by everyone; it will likely only be used by those aware of the lengths organizations and companies have gone to in order to identify and categorize users. As the authors put it, “it’s a small revolution” for those interested in mitigating and defeating surveillance. To the common individual, it’s an effort they don’t care to spend. To the few aware individuals, it’s one small step towards regaining control of one’s own privacy. The browser plugins provide obfuscation for at least this one specific aspect of user activity.

Still, I can see additional applications being developed in the future, in social networking apps and other service areas, using AI to generate exactly the kind of noise that application AI mechanisms are trying to capture and identify. Just as hackers use AI to infiltrate networks (F5), AI is now being used by software (IBM) to identify and counter those attacks. Most folks know AI/ML is being used to catalog and categorize individuals and their activity; the next obvious step is to use the technology to thwart that activity for those who are concerned. To some, including the companies capturing the data, it may seem wrong to pollute the data. Still, it is justified and warranted to those individuals who care about their privacy, since laws have not caught up to stop the proverbial data peeping-toms. In the latter case, the companies just have to look at more information, which is no different from what WE-FORGE is trying to accomplish with its counter-espionage tactics.

 

Bibliography

  • Brunton, Finn, and Helen Nissenbaum. Obfuscation: A User’s Guide for Privacy and Protest. The MIT Press, 2015.
  • CNet. “26 Deepfakes that will freak you out.” CNet Pictures, 15 Jan 2020, https://www.cnet.com/pictures/26-deepfakes-that-will-freak-you-out/.
  • Dartmouth College. “Cybersecurity researchers build a better ‘canary trap.’” EurekAlert, American Association for the Advancement of Science, 1 Mar 2021, https://www.eurekalert.org/pub_releases/2021-03/dc-crb022621.php.
  • F5. “AI-powered Cyber Attacks.” 2020, https://www.f5.com/labs/articles/cisotociso/ai-powered-cyber-attacks.
  • Howe, Daniel, et al. “TrackMeNot.” Trackmenot.io, 2016, http://trackmenot.io/.
  • IBM. “Artificial intelligence for a smarter kind of cybersecurity.” IBM Security, 2021, https://www.ibm.com/security/artificial-intelligence.
  • “Malicious use of deepfakes is a threat to democracy everywhere.” The Startup, 2019, https://medium.com/swlh/malicious-use-of-deepfakes-is-a-threat-to-democracy-everywhere-51a020bd81e.
  • Nissenbaum, Helen. “A Contextual Approach to Privacy Online.” Daedalus, Fall 2011, pp. 32-48, https://www.amacad.org/sites/default/files/daedalus/downloads/Fa2011_Protecting-the-Internet-as-Public-Commons.pdf.
  • Piper, Daniel. Creative Bloq, 1 Mar 2021, https://www.creativebloq.com/news/tom-cruise-deepfakes.

Concerns with Privacy in Virtual Reality
by Simran Bhatia | February 26, 2021

Jamie was only 13 when she started playing her first game in virtual reality (VR). She loved it because she could create her own avatar and exchange virtual fist bumps with people from across the world, all while just wearing a headset and some haptic clothes. However, Jamie didn’t know that her “just 20 minutes” of VR gaming were recording 2 million data points about her. She also didn’t know that the owners of the game she was playing were selling her data to health insurance companies. Years later, when Jamie applied for health insurance, she was turned down because her VR body-movement data classified her as having a high likelihood of chronic regional pain syndrome. While this is a made-up situation, it illustrates the power of data collected through VR. As of 2021, there are no regulations or standards for data collected through VR, which is scary because the VR market is expected to hit $108 billion this year.

With VR expanding into a range of different fields, from healthcare to entertainment, there is high concern about the privacy of the data being collected. More applications for VR mean a more diverse portfolio and a greater volume of data collected on each user, currently with no regulation.

What is VR?

Virtual reality (VR) is a technology that creates simulations of environments and enables users to interact with those environments through different devices. In the past, industries such as aerospace and defense used VR for training and flight simulation, but more recently it has become a popular gaming tool, especially in the post-pandemic world. It has been pitched as the next great communications platform and user interface.

What does privacy in AR/VR mean?

As the saying goes, with great power comes great responsibility: the fascinating technology of VR brings an unprecedented ability to track body motions and consequently collect data on its users. Research on the identifiability of users in VR has shown that, with specific tasks, a system was able to identify 95% of users correctly when trained on less than 5 minutes of tracking data per person. Other research shows that combining data on eye gaze, hand position, height, head direction, and other biological and behavioral characteristics identifies users with 8 to 12 times better accuracy than chance.

With each individual’s unique patterns of movement, anonymizing VR tracking data is nearly impossible, at least so far, because no person has the same hand movement as another. Like an IP address, zip code, or voice print, VR tracking data should be considered “personally identifiable information” because it can be used to trace an individual’s identity. This type of data is similar to data in health and medical research, such as DNA sequences, which, even when stripped of names and other identifying information, can be traced back to individuals through simple compilation with other public data sources. The reason for concern is that, unlike medical data, VR tracking data is currently unregulated in how it is collected, used, and shared, as it is not monitored by any external entity.
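A toy experiment makes the identifiability point concrete. The sketch below (synthetic data only, not the cited studies’ pipelines) gives each simulated user a characteristic motion signature, say mean head height, sway amplitude, or hand-travel speed, and shows that an off-the-shelf classifier re-identifies users from it far above chance:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, sessions_per_user, n_features = 20, 50, 6

# Each user gets a stable "motion signature"; each session adds small noise,
# mimicking natural variation between play sessions.
signatures = rng.normal(0.0, 1.0, size=(n_users, n_features))
X = (np.repeat(signatures, sessions_per_user, axis=0)
     + rng.normal(0.0, 0.3, size=(n_users * sessions_per_user, n_features)))
y = np.repeat(np.arange(n_users), sessions_per_user)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(f"re-identification accuracy: {clf.score(X_te, y_te):.0%} "
      f"(chance is {1 / n_users:.0%})")
```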

With Oculus currently dominating the hardware space in the VR industry, another area of concern is Oculus’ announcement that it will require a Facebook account for all users. This means users are forced to accept Facebook’s Community Standards, can no longer remain completely anonymous on their device, and that Facebook will own all their VR tracking data along with their social media data from Facebook, Instagram, and WhatsApp. This gives Facebook, as a company, a monopoly over most parts of a user’s data.


Source

Another privacy threat is posed by the setup of VR devices, with densely packed cameras, microphones, and sensors that collect data about a user’s environment. That environment, whether a home, office, or community space, gets exposed as well.

What can be done in the future?

Privacy in VR will depend on concrete action now, not just by one person or organization, but as a community-driven effort. VR enthusiasts and technology reviewers need to prioritize privacy-conscious practice and encourage the community to push for regulation of VR tracking data. VR developers need to take steps to make their work transparent, yet secure. Most importantly, industry leaders need to introduce principles for monitoring and creating transparency in each part of the VR data process: collection, aggregation, processing, analysis, and storage. As an industry practice, only data necessary for the core functionality of the VR device or its software should be collected; moreover, each data point collected should be purposeful, and companies should be transparent about the sensitive uses of the data they collect.

The next step’s responsibility lies on the shoulders of VR users. Users need to be more aware of what they are consenting to when they sign up for VR games or other applications. Novice users, like Jamie, need to read the terms and conditions for each part of the VR process and raise their voices to the industry if they are not comfortable with the data collected. Users need to become aware of their rights over VR tracking data now, or it might be too late.

References

Bailenson, J. (2018). Protecting Nonverbal Data Tracked in Virtual Reality. JAMA Pediatrics, 172(10), 905. https://doi.org/10.1001/jamapediatrics.2018.1909

Erlich, Y., Shor, T., Pe’er, I., & Carmi, S. (2018). Identity inference of genomic data using long-range familial searches. Science, 362(6415), 690–694. https://doi.org/10.1126/science.aau4832

Oculus. (2020, August 27). Facebook Horizon Invite-Only Beta Is Ready For Virtual Explorers | Oculus. Oculus Blog. https://www.oculus.com/blog/facebook-horizon-invite-only-beta-is-ready-for-virtual-explorers/

Pfeuffer, K., Geiger, M. J., Prange, S., Mecke, L., Buschek, D., & Alt, F. (2019). Behavioural Biometrics in VR. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–12. https://doi.org/10.1145/3290605.3300340

The Yale Tribune. (2019, April 12). VR/AR Privacy Concerns Emerging with the Field’s Development. https://campuspress.yale.edu/tribune/vrar-privacy-concerns-emerging-with-the-fields-development/

Miller, M.R., Herrera, F., Jun, H. et al. Personal identifiability of user tracking data during observation of 360-degree VR video. Sci Rep 10, 17404 (2020). https://doi.org/10.1038/s41598-020-74486-y

Diez, M. (2021, January 29). Virtual Reality Will Be A Part Of The Post-Pandemic Built World. Forbes. https://www.forbes.com/sites/forbesrealestatecouncil/2021/02/01/virtual-reality-will-be-a-part-of-the-post-pandemic-built-world/?sh=a08553348ded

Should TikTok be Banned?
by Mikayla Pugel | February 26, 2021

In the last couple of years, there have been many concerns about large technology companies’ data collection and processing; however, the issues were always kept in-house. With the creation and rise of TikTok, the issue has been taken to another level, since the data collected is leaving America. Throughout this article, I will discuss concerns about TikTok and reasons why some people want it banned, as well as walk through some reasons why the concerns may be misplaced and why the ban has not happened yet.

First, many American entities have already banned TikTok from their workers’ devices. Some of these groups include the Democratic National Committee, the Republican National Committee, the Coast Guard, the Marine Corps, and the TSA (Meisenzahl). Leaders from these groups are worried about the app gaining sensitive information from the devices it is downloaded to. These worries are not without warrant, as Apple’s iOS 14 caught TikTok secretly accessing users’ clipboards (Doffman, July 09). Other tech companies were caught doing the same thing, but TikTok was the only second-time offender. The concern around TikTok getting sensitive information is not just that it is a vast tech company, but mainly that it is a Chinese-based company, raising concerns about where the data may end up.

TikTok collects an abundance of data from all over the world, and many foreign leaders are concerned that the data may fall into the hands of Communist China. The company has repeatedly claimed it would never give user data to its government; however, under the Chinese National Intelligence Law of 2017, “any Chinese company can be drafted into espionage, and a company could be forced to hand over the data” (Ghaffary). These concerns seem validated, and the government of India has already banned the Chinese company’s app (Meisenzahl). However, the ban increased conflict between the two countries, and there would be similar fallout if Americans were to take similar steps.

The US and China already have their issues, and there are concerns that if the US were to ban TikTok, the countries’ relationship would continue to decline. There is the fear of retaliation from China, as well as of other countries imposing similar bans on all large tech companies, most of which are American (Ghaffary). The Chinese government already bans major US tech companies and has worked to create copies of companies like Google, Facebook, and Uber. Americans are concerned that if countries become paranoid about other companies owning their data, the American economy will be hit hard.

There are many data collection and storage concerns, as there are with most technology companies; however, TikTok has been the leader in one main issue, and that is the collection and storage of data from children. The US has many laws on what data can be collected from children of a certain age, and since much of TikTok’s user base is children, the company has been at the center of a lot of controversies. TikTok recently agreed to pay $5.7 million in a settlement with the Federal Trade Commission over allegations of illegally collecting personal data from children (Doffman, August 11). The FTC has also accused it of exposing the locations of young children and of not complying when instructed to delete certain information collected from minors (Doffman, August 11).

Beyond the many concerns about data collection and processing by foreign companies, the largest concern may be the fear of censorship and manipulation of public opinion within the app (Matsakis). As we have seen with the power Facebook holds over public opinion, TikTok could someday hold that much power, and it would be in the hands of the Chinese government. Many leaders are concerned about this power; however, banning TikTok would not necessarily free the country from concerns about social media manipulation.

In conclusion, there are valid reasons to be concerned about TikTok, but there are also a vast number of reasons not to ban it. Many of the concerns brought up could be applied to most American technology companies, and because of this I do not believe the US government will ever do anything to remove TikTok’s place in America. Our government should instead take a step further and look at policies that apply to all data collected by any company, or at how to decrease internet manipulation through education of our citizens, as it seems hypocritical to bash TikTok when we have Facebook to claim as ours.

References:
Doffman, Z. (2020, August 11). TikTok users-here’s why you should be worried. Retrieved February 22, 2021, from https://www.forbes.com/sites/zakdoffman/2020/08/11/tiktok-apple-iphone-google-android-data-security-update-warning-investigation-trump-ban/?sh=3b04029f3436
Doffman, Z. (2020, July 09). Yes, TikTok has a Serious China PROBLEM-HERE’S why you should be concerned. Retrieved February 22, 2021, from https://www.forbes.com/sites/zakdoffman/2020/07/09/tiktok-serious-china-problem-ban-security-warning/?sh=2445db3e1f22
Ghaffary, S. (2020, August 11). Do you really need to worry about your security ON Tiktok? Here’s what we know. Retrieved February 22, 2021, from https://www.vox.com/recode/2020/8/11/21363092/why-is-tiktok-national-security-threat-wechat-trump-ban
Matsakis, L. (n.d.). Does TikTok really pose a risk to US national security? Retrieved February 22, 2021, from https://www.wired.com/story/tiktok-ban-us-national-security-risk/
Meisenzahl, M. (2020, July 13). Trump is considering banning Chinese social media app TikTok. See the full list of countries, companies, and organizations that have already banned it. Retrieved February 22, 2021, from https://www.businessinsider.com/tiktok-banned-by-countries-organizations-companies-list-2020-7

What’s your data worth?…
by Anonymous | February 26, 2021

…asks Alexander McCaig, CEO of Tartle, at the end of an introductory video on the company’s website. According to McCaig, commercial enterprises around the globe make billions of dollars every year by selling their customers’ data (McCaig, 2020). Revenues generated by sales to third parties likely pale in comparison to the enterprise value created through the primary use of data to generate customer insights that can increase revenues and lower costs. Despite providing consent, some customers may not be fully aware of how and what type of data about them is being collected, or whether it is being sold.

Data privacy laws passed in recent years (e.g., GDPR, CCPA) have provided consumers with better information and greater control over their data. The laws have forced private enterprises and public institutions to offer greater transparency into their data collection, processing, usage, and selling practices. Regulators hope these new laws will increase the general population’s awareness of how individuals’ data is being used. Furthermore, to the extent that the policies are effective, customers are likely to attribute greater, but still unknown, value to their own data.

Tartle, along with a handful of other private companies, believes that data is a precious asset whose value can be determined in the open market. Tartle’s success in helping individuals monetize their data ‘asset’ through a secure and far-reaching marketplace, connecting eager buyers and motivated sellers at scale, may give society a big hand in leveling the data privacy playing field.

Ignorance is Bliss, Seduction is Powerful

In an earlier blog post, Robert Hosbach discusses the “privacy paradox,” a phrase used to describe the significant discrepancy between stated concerns about privacy and actions taken to protect it (R. Hosbach, 2021). The lack of action is attributable to a number of factors, with individual ignorance being a meaningful contributor. According to one paper, up to 73% of American adults believe that the mere presence of a privacy policy implies that their data will not be misused (J. Turow et al., 2018). What further exaggerates this complacency are deliberate efforts by commercial enterprises to lead consumers into a sense of resignation through four tactics of seduction: placation, diversion, misnaming, and using jargon (N. A. Draper et al., 2019). Consumers need more help, and society needs to do more.

Evolving Policy Landscape, the “Carrot” or the “Stick”

“The new privacy law is a big win for data privacy,” says Joseph Turow, a privacy scholar and professor of communication at the Annenberg School for Communication at the University of Pennsylvania (Knowledge@Wharton, 2019). While 2020 was viewed as a big year for privacy professionals, 2021 may be even bigger. In addition to California passing “CCPA 2.0” late last year, a large number of other states have proposed new legislation. Moreover, with a new administration taking office in January, some privacy advocates hope that 2021 will be the year in which the U.S. passes GDPR-like federal privacy legislation (Husch Blackwell LLP, 2021). Stricter privacy laws may serve as an effective “stick,” but where is the “carrot”?

“Change Brings Opportunity”

This famous quote from Nido Qubein is used frequently by business leaders facing uncertainty. While evolving regulatory frameworks are likely to disrupt businesses for the benefit of consumers, they are unlikely to slow the exponential growth of data. One McKinsey & Co. study points to 300% growth in IoT, to 43 billion data-producing devices by 2023, and a 7-fold increase in the number of digital interactions by 2025 (McKinsey & Co, 2019). Evolving privacy laws and greater customer awareness, combined with our ever-increasing reliance on data, have given birth to companies like Tartle. While motivated by financial gain, these companies are also purpose-driven, with the potential to reduce income inequality across the globe and put a monetary value on an individual’s data privacy. So, ask yourself: what is your data worth to you, and would you be willing to sell it?

Sources

McCaig, Alexander. Tartle.co (2020, January 8). https://www.youtube.com/watch?v=rslKr3W-Ex8&feature=youtu.be

Maintaining Privacy in Smart Home. Hosbach, Robert (2021, February 19). Retrieved from https://blogs.ischool.berkeley.edu/w231/blog/

Turow, Joseph & Hennessy, Michael & Draper, Nora. (2018). Persistent Misperceptions: Americans’ Misplaced Confidence in Privacy Policies, 2003–2015. Journal of Broadcasting & Electronic Media. 62. 461-478. 10.1080/08838151.2018.1451867.

Draper NA, Turow J. The corporate cultivation of digital resignation. New Media & Society. 2019;21(8):1824-1839. doi:10.1177/1461444819833331

Your Data Is Shared and Sold…What’s Being Done About It?. Knowledge@Wharton (2019, October 28). Retrieved from https://knowledge.wharton.upenn.edu/article/data-shared-sold-whats-done/

The Year To Come In U.S. Privacy & Cybersecurity Law, Husch Blackwell LLP (2021, January 28). Retrieved from https://www.jdsupra.com/legalnews/the-year-to-come-in-u-s-privacy-9238400/

Growing opportunities in the Internet of Things, McKinsey & Co (2019, July 29). Retrieved from https://www.mckinsey.com/industries/private-equity-and-principal-investors/our-insights/growing-opportunities-in-the-internet-of-things?cid=eml-web