Privacy Implications for the First Wave of Ed Tech in a COVID-19 World
By Daniel Lee | March 30, 2020

Educational institutions have been among the first to implement sweeping operational changes in response to the realities of COVID-19. As of this writing, over 100 colleges and universities have shifted their courses from physical to online classrooms to mitigate the spread and dangers of the coronavirus, and a growing number of primary and secondary (K-12) school districts are rapidly adopting distance learning models as well. Instead of calling on raised hands, passing out homework sheets, or filling lecture halls, instructors are now fielding homework questions via Twitter, disseminating study guides via YouTube, and leveraging online collaboration platforms such as Zoom or Google Hangouts to host synchronous lectures. This has spawned a mad rush to understand how to live and learn in this new world: instructors scrambling to digitize curricula and content, CIO offices frantically approving new learning technologies and tools, and students and teachers alike adjusting to a wholly new learning environment.

While these changes have been largely disruptive operationally, they are markedly less disruptive in the transformative sense compared with how we might traditionally evaluate education technology initiatives. Instead, “first wave” education technology responses to COVID-19, such as virtualizing the delivery of learning content and resources, largely focus on substituting virtual tools or processes for physical ones, with little to no functional change from the status quo. Dr. Ruben Puentedura’s SAMR model provides a helpful framework for thinking about the varying degrees of technology integration in teaching. Using the SAMR model, we can broadly categorize most first wave initiatives as “substitution,” given that their primary objective is to let students participate in class without being physically co-located with teachers, not to redefine education practices or raise the standard for learning efficacy overall. If many first wave initiatives are indeed largely focused on substituting digital tools for existing classroom functions, this provides a sharply focused point of comparison for analyzing the legal, ethical, and privacy implications of these first wave education technologies against their analog counterparts.

Frameworks like Nissenbaum’s contextual integrity are particularly helpful here for comparing the information flows and inherent privacy implications of physical tools with those of their virtual counterparts, especially in simple scenarios, such as taking attendance, participating in a lecture, or submitting an assignment, where the only difference between physical and virtual is the tooling itself. Traditional classroom information flows between the data subject (the student) and the recipient (the instructor) are generally subject to the transmission principle that the information is secured and employed only within the boundaries and needs of the classroom or school itself. When virtual tools are employed for these activities, however, third-party entities emerge as additional recipients of this information, compromising the contextual integrity of physical classroom activities. We reach similar conclusions when considering Solove’s taxonomy: data subjects remain mostly consistent across both the physical and virtual toolsets, but the presence of third-party entities in virtual alternatives extends who participates in the collection, holding, processing, and dissemination of personal student and instructor information in these scenarios.
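
To make that comparison concrete, a flow can be written down using Nissenbaum’s five parameters: sender, subject, recipient, information attribute, and transmission principle. Below is a minimal Python sketch of the assignment-submission example; the labels are illustrative and not drawn from any specific tool.

    from dataclasses import dataclass

    @dataclass
    class InformationFlow:
        # Nissenbaum's five parameters of an information flow
        sender: str
        subject: str
        recipient: str
        attribute: str
        transmission_principle: str

    # Physical classroom: handing an assignment to the instructor
    physical = InformationFlow(
        sender="student", subject="student", recipient="instructor",
        attribute="assignment responses",
        transmission_principle="used only for grading within the school",
    )

    # Virtual equivalent: the same flow routed through a hosting platform
    virtual = InformationFlow(
        sender="student", subject="student",
        recipient="instructor + platform provider",  # new third-party recipient
        attribute="assignment responses + metadata (timestamps, device, IP)",
        transmission_principle="also governed by the provider's own policies",
    )

    # Contextual integrity asks: which parameters changed when the tool went digital?
    for field in ("recipient", "attribute", "transmission_principle"):
        print(f"{field}: {getattr(physical, field)} -> {getattr(virtual, field)}")

The comparison makes the deviation visible: the sender and subject are unchanged, while the recipient set and transmission principle are not.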

The reason for these deviations, of course, is that many first wave tools are oriented toward making analog information digital and easily accessible by many, and the digitization of these educational tools and processes, by definition, requires hosting and distributing classroom content through third parties. As a result, when classroom information is shared broadly through online platforms like Twitter, YouTube, or Zoom, it is exposed to a much broader ecosystem of service providers who traditionally are not in play at all, or who participate to varying but lesser degrees, in physical classroom environments. The virtualization of analog information also makes it possible to generate additional information about data subjects based on their attendance and participation in specific classroom events and their social relationships to other participants, so we must also consider the various ways this data can be fused with or exploited to generate insights about data subjects that were previously inaccessible.

As a result, we must carefully consider how the original transmission principles and contexts are either preserved or jeopardized by the involvement of third-party entities, and identify new processes and methods for protecting personal information in virtual classroom settings. While first wave initiatives have made it possible to continue learning activities virtually in the face of COVID-19, they add complexity to the classroom privacy landscape and expose new opportunities for inappropriate disclosure or misuse of personal information. We must dutifully consider the ethical, legal, and privacy implications of these first wave technologies, and the accountability for safeguarding personal data, especially as many third-party technologies abide by their own set of governing policies on how that information is used, disseminated, and secured.

References:
1: https://www.npr.org/2020/03/13/814974088/the-coronavirus-outbreak-and-the-challenges-of-online-only-classes

2: http://www.hippasus.com/rrpweblog/archives/2014/06/29/LearningTechnologySAMRModel.pdf


Mobile Apps Know Too Much
By Annabelle Lee | March 6, 2020

These days, we are not surprised that the technologies we use every day collect our personal data to some extent. The data collected could be what we click on a webpage or the keywords we search for. We understand that tech companies collect data to improve their technology and for advertising purposes. But do we know what exactly they are collecting? Did we ever give them consent at all?

Some apps, including those from Expedia, Hotels.com, and Air Canada, have been reported to work with a customer experience analytics firm called Glassbox to collect users’ every tap and keyboard entry by recording the screen. Glassbox’s recording technology allows these companies to analyze the data by replaying the screenshots and recordings. However, sensitive data like passport numbers, banking information, and passwords can be exposed in the screenshots as well. In one incident, Glassbox failed to encrypt the sensitive data, exposing 20,000 files to anyone with access to the database.
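
That incident illustrates the step a careful session-replay pipeline would need: masking sensitive fields on the device before anything is recorded or uploaded. Here is a minimal Python sketch of such a redaction step; the field names are hypothetical, and this is not how Glassbox actually works, just the general idea.

    # Fields that should never leave the device unmasked (an illustrative list)
    SENSITIVE_KEYS = {"password", "passport_number", "card_number", "cvv"}

    def redact_event(event: dict) -> dict:
        """Mask sensitive values in a captured analytics event before upload."""
        clean = {}
        for key, value in event.items():
            if key.lower() in SENSITIVE_KEYS:
                clean[key] = "*" * len(str(value))  # preserve length, hide content
            else:
                clean[key] = value
        return clean

    captured = {"screen": "checkout", "card_number": "4111111111111111", "tap": "pay"}
    print(redact_event(captured))
    # {'screen': 'checkout', 'card_number': '****************', 'tap': 'pay'}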

One would assume this would be communicated to users before they download and start using these apps. However, the Terms of Service for Expedia, Hotels.com, and Air Canada do not mention the screen recording anywhere. It seems they are purposely not being transparent and honest about the data collection process; from the documents alone, users would have no idea what the app is doing in the background. Fortunately, Apple found out about this practice and sent a notice to the companies conducting screen recording for analytics through iPhone apps, telling them in an email to remove the screen recording code immediately. Otherwise, their apps would be taken down from the App Store.

I find it rather ironic that these companies only started to take action not because of our government’s laws or policies but because of a private company like Apple. Why is Apple the one guarding our data privacy and not our government? The issue seems to be that technology is moving too fast while the lawmaking process moves too slowly. The law doesn’t yet know how to regulate companies’ data collection practices, and there is no penalty for companies that are not honest or transparent about data collection in their Terms of Service or other documents.

We enjoy the benefits and convenience of cutting-edge technologies every single day. However, our laws lag behind when it comes to protecting users’ privacy and forcing companies to be transparent and honest about their work. In comparison, the EU is doing a much better job of regulating companies and protecting citizens’ rights. Hopefully in the near future, we can find a balance among technology, privacy, and security.

Works Cited

Many popular iPhone apps secretly record your screen without asking

Apple tells app developers to disclose or remove screen recording code


What’s Changing after YouTube’s $170 Million Child Privacy Settlement
By Haihui Cao | March 6, 2020

YouTube, owned by Google, was fined $170 million in September 2019 for violating the Children’s Online Privacy Protection Act (COPPA). The $170 million fine is the largest COPPA penalty to date, according to the Federal Trade Commission (FTC). COPPA is a law that requires websites and online services to provide notice and obtain parental consent before collecting information from kids under 13. YouTube was accused of violating COPPA by gathering children’s data and using it to target kids with advertisements, all without parents’ consent.

Although YouTube’s terms of service exclude children under 13, it claimed to be a general-audience site while marketing itself to advertisers as a top platform for young children. In communications with toy companies such as Mattel, maker of Barbie and Monster High toys, and Hasbro, maker of My Little Pony and Play-Doh, YouTube and Google claimed that “YouTube was unanimously voted as the favorite website for kids 2-12” and that “93% of tweens visit YouTube to watch videos.” Yet when it came to complying with COPPA, YouTube told some advertising firms that it did not have to comply with the children’s privacy law because YouTube did not have viewers under 13. In practice, YouTube does not require users to register to view videos, and most videos are not age-restricted: anyone can view them, and millions of children under 13, like my kids, do watch them. YouTube served targeted advertisements on children’s channels even though it knew those channels were directed to and watched by children.

So, what’s next and how is YouTube changing? Google and YouTube must implement a number of changes to comply with COPPA. Starting in January 2020, YouTube rolled out a system that asks video creators and channel owners to label and categorize their YouTube content as “directed to children,” which removes personalized ads and comments from that content. Creators also have to categorize each of their previously uploaded videos, and even entire channels, as needed. According to YouTube, it is using artificial-intelligence algorithms to check the content labels, and it may override some settings “in cases of error or abuse.”

The big problem is that nobody is sure what “child-directed content” means exactly. Sometimes it is very difficult to tell the difference between what is child-directed content and what is not. The FTC provides only general rules of thumb about whether content is “directed to children” and thus subject to COPPA. Popular YouTube videos watched by both children and adults, such as gaming, toy reviews, and funny family videos, may fall into gray areas, and creators will be held liable if the FTC finds COPPA violations. Google specifically told the FTC that “the current ambiguity of the COPPA Rule makes it difficult for companies to feel confident that they have implemented COPPA correctly.” YouTube could only advise creators to consult a lawyer to help them work through COPPA’s impact on their own channels. The new COPPA-compliance rules have a significant impact on some creators, who will lose a big part of their ad income under the new rules.

As a parent of kids who watch videos on YouTube, I’m glad to see the FTC’s decision on YouTube, which indicates that the FTC is taking the protection of children’s data seriously. I hope the FTC and regulators work out clearer guidelines on “child-directed” content, and that YouTube imposes stricter auditing of “child-directed” video content using its advanced technology in the near future.

YouTube is also promoting YouTube Kids, a separate app for kids’ videos launched in 2015. It filters out the grown-up stuff, funnels the kid stuff into the app, and removes many of the features available on the main site. As a parent who is concerned about kids’ privacy and protection, yet never used to read privacy policies, I would recommend that parents read the privacy policies carefully from now on and explore YouTube Kids themselves. You can set up parental controls and features such as a timer that lets you set a limit on your kids’ time in the app. Is YouTube Kids safe or right for kids, or not? There is no one-size-fits-all answer. Every family is different, and I think parents need to work with their kids to figure out the best answer for their own family.

References:
https://www.ftc.gov/news-events/press-releases/2019/09/google-youtube-will-pay-record-170-million-alleged-violations
https://www.ftc.gov/news-events/blogs/business-blog/2019/11/youtube-channel-owners-your-content-directed-children
https://support.google.com/youtube/answer/9383587?hl=en

Amazon Go: A New Era in Data Collection
By Joshua Smith | March 6, 2020

If you haven’t heard of Amazon Go yet, it’s Amazon’s newest concept store. The stores are not especially common, with only twenty-six locations in the United States, and the selection is the same as your typical small convenience store. What makes them unique is their “just walk out” technology. Essentially, you walk in, grab what you want, and walk out without any human interaction, lines, or checkout, behavior that would be brazen theft in any other store.



Amazon accomplishes this customer experience with an impressive array of technology, including computer vision, sensor fusion, and deep learning, according to the app and a patent. This necessarily means the store is equipped with ubiquitous cameras, sensing devices, and passive scanners, which Amazon has seemingly taken great care to make low-profile. Further, the technologies, and by proxy the store, are heavily data-driven and rely on that information to function. Make no mistake: we’re talking about lots and lots of data.
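
Amazon has not published its implementation, but the patent language suggests the basic shape of the problem: fuse a vision-based estimate of who is near a shelf with a weight-sensor event from that shelf. Here is a deliberately simplified Python sketch of that association step; every name and number is made up, and real systems fuse far more, and far noisier, signals.

    # Toy sketch: attribute a shelf weight change to the nearest tracked shopper.

    trackers = {  # shopper id -> (x, y) floor position estimated by cameras
        "shopper_17": (2.0, 5.1),
        "shopper_42": (8.3, 1.2),
    }

    def on_shelf_event(shelf_pos, item, delta_grams):
        # Pick whoever the vision system places closest to the shelf
        nearest = min(
            trackers,
            key=lambda s: (trackers[s][0] - shelf_pos[0]) ** 2
                        + (trackers[s][1] - shelf_pos[1]) ** 2,
        )
        action = "took" if delta_grams < 0 else "returned"
        print(f"{nearest} {action} {item}")  # this event updates the virtual cart

    on_shelf_event(shelf_pos=(2.4, 5.0), item="toothpaste", delta_grams=-120)
    # shopper_17 took toothpaste

Notice what the store must maintain for this to work at all: a continuous, per-person position track for everyone inside. The data collection is not incidental to the product; it is the product.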

It’s safe to assume that everything you do will be recorded, analyzed, and likely stored in some form by the cameras, sensors, and algorithms that process the data. In fairness, Amazon would be crazy not to do this from a business perspective: learning from data is the central method for improving the product, and this is a technology-dependent store after all. Furthermore, Amazon has developed something many other retailers will be interested in for cost savings, theft mitigation, and optimization. All of these secondary effects will be driven by that data.

So, what does this mean in practical terms? It likely means that when you walk into an Amazon Go store you are walking into a bit of a laboratory. Amazon will have the potential to answer, with high specificity, how people shop, interact with products, and move through stores. They will also have the ability to analyze how people interact with each other and their broader environment. This data generates a growing network of questions. Are there machine-detectable physical behaviors that will tell us whether someone is about to buy toothpaste? That someone is about to start a conversation with a stranger? That someone is attracted to another person? If they are, do they both have the same dating app so they can be connected? Biomarkers and personal traits could be stored and used to track and evaluate individuals. Is this person losing or gaining weight? Does their gait indicate they are injured or have some other medical issue? What brand is that worn-out sweater, so we can send a link to purchase a new one?

This is speculative, of course. There isn’t an outward indication that Amazon is tackling these types of problems, but they could try. For now, the microcosm of a convenience store might be too small to answer some of these questions, but we would likely be surprised by what could be discovered. As this technology becomes more common and works its way into other areas of our lives, we can almost guarantee that some company will try to answer them. With data sharing, the ability would be augmented considerably.

Ubiquitous sensor and vision data processed in this way isn’t something we’ve confronted at scale yet. Phones don’t really provide a useful analogy, since they’re largely confined to a pocket or bag and typically require an interaction. More analogous technology from law enforcement and intelligence agencies has made headlines with facial recognition and Gorgon Stare, but the context isn’t the same. This isn’t surveillance; it’s something being worked into the fabric of daily life, which is an important distinction. This type of data collection represents a shift to the highly personal from a passive-collection perspective: your data will be collected while you do nothing other than occupy a public space. It’s not clear what to do about this. It naturally entangles categories of data with elevated protections, privacy rights, and questions about the proper application of technology. What is clear is that this change requires careful consideration.

Uber can take you somewhere, could it also take advantage of your personal information?
By Yuze Chen | March 7, 2020

Nowadays, people use technology to help them get around the city. Ride-sharing platforms such as Uber and Lyft are used by hundreds of thousands of people around the world to get from point A to point B. These platforms offer riders lower-than-taxi fares, remove the hassle of booking with a taxi company in advance, and give drivers an opportunity to earn extra income using their own vehicles. But alongside the convenience of getting around the city at your fingertips, ride-sharing companies are also known for their sketchy privacy policies.



What Data Does Uber Collect

According to Uber’s Privacy Notice, Uber collects three main categories of data: data provided by the user, such as username, email, and address; data created when using Uber’s services, such as the user’s location, app usage, and device data; and data from other sources, such as Uber partners who provide data to Uber. In short, personally identifiable information (PII) is collected and stored by Uber. Beyond that, Uber also collects and stores information that can reconstruct an individual’s daily life, such as location history. Given this wealth of PII collected and stored by Uber, users should be concerned about what Uber can do with the data.


Solove’s Taxonomy Analysis

Solove’s taxonomy provides a framework for analyzing the potential harms to the data subject, here the Uber user. When a user uses the Uber service, their personal data is transmitted to the data holder, Uber. It then goes through the Information Processing stage of Solove’s taxonomy, including aggregation, identification, and so on. After processing, there is an Information Dissemination stage, where Uber can act on and share the user’s data.

Information Collection
Let’s look at the first step in Solove’s taxonomy: Information Collection. As discussed above, Uber collects data in three ways. However, there is a strange clause in Section III. A. 2 (Location data) of Uber’s privacy notice: “precise or approximate location data from a user’s mobile device if enabled by the user… when the Uber app is running in the foreground (app open and on-screen) or background (app open but not on-screen) of their mobile device”. This means Uber is able to collect the user’s current location in the background even when the user isn’t actively using the app. The clause gives Uber an opportunity to keep monitoring a user’s location after a single use, creating potential harm because a user’s location may be continuously collected without their meaningful consent.
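
To see why always-on background collection matters, consider how little processing it takes to turn raw location pings into a picture of someone’s routine. Below is a toy Python sketch with synthetic data; the inference logic is illustrative only and does not reflect anything Uber is known to do.

    from collections import Counter

    # Synthetic background pings: (hour of day, rounded latitude, rounded longitude)
    pings = [
        (8, 37.77, -122.42), (12, 37.79, -122.40), (15, 37.79, -122.40),
        (19, 37.77, -122.42), (22, 37.77, -122.42), (23, 37.77, -122.42),
    ]

    daytime = Counter((lat, lon) for h, lat, lon in pings if 9 <= h <= 17)
    overnight = Counter((lat, lon) for h, lat, lon in pings if h >= 21 or h <= 6)

    print("likely workplace:", daytime.most_common(1)[0][0])  # (37.79, -122.4)
    print("likely home:", overnight.most_common(1)[0][0])     # (37.77, -122.42)

A handful of lines of aggregation is enough to guess where someone lives and works, which is exactly the kind of downstream harm the taxonomy’s later stages describe.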

Information Processing
In the Information Processing step of Solove’s taxonomy, Uber has a couple of sketchy terms in its privacy policy. In Section III. A. 2 (Transaction Information), Uber states that if a user refers a new user with a promo code, both users’ information will be associated in Uber’s database. This practice could harm both users’ privacy and lead to unpredictable consequences. For example, many users simply post their promotion code to a public blog or discussion board. Because the referrer cannot know who will use the code, associating two strangers creates potential privacy and legal harm: if the referee were under investigation for a crime committed while using Uber, the referrer could be drawn into the legal matter as well, even though the two have never met.

Information Dissemination
According to Section III. D of the Privacy Notice, Uber may share user data with third parties such as its data analytics providers and its insurance and financing partners. The notice does not specify who those partners are or what they are able to do with the data. There are obvious potential harms to the user at this step: a user may not want their data disclosed to any third party beyond Uber, which raises further privacy concerns.

Conclusion
Based on our analysis of Uber’s privacy policy, we found plenty of surprising terms that could jeopardize users’ privacy. The data collection, data processing, and data dissemination stages all raise issues of user consent and privacy protection, and some could even pull users into legal trouble. The wording of the Privacy Notice makes it easy for Uber to take advantage of user data legally, such as by sharing it with partners. There is a lot Uber could do to better address these privacy issues in its Privacy Notice.

References
https://www.uber.com/global/en/privacy/notice/
https://wiki.openrightsgroup.org/wiki/A_Taxonomy_of_Privacy

Assessing Clearview AI through a Privacy Lens
By Jonathan Hilton | February 28, 2020

The media has been abuzz lately about the end of privacy, driven extensively by the capabilities of Clearview AI. The New York Times drew attention to the startup in an article called ‘The Secretive Company That Might End Privacy as We Know It’, which details the secretive nature of a company with expansive facial recognition capabilities. Privacy is a complicated subject that requires extensive context to understand what is and is not acceptable in a society. In fact, privacy has no universal meaning within society or within the academic community; what is a privacy violation in one context may not be in another. Since this ambiguity exists, we can deconstruct the context of vast facial recognition capabilities with Solove’s taxonomy to better understand the privacy concerns with Clearview AI.

Prior to examining Clearview AI’s privacy concerns with Solove’s taxonomy, we need to address what Clearview AI is doing with its facial recognition capability. Clearview AI’s CEO, Hoan Ton-That (see figure 1), started the company back in 2017. The basic premise of the company is that its software can take a photo and use a database comprising billions of photos to identify the person in the photo. How did they come up with this database without the consent of each person whose likeness is in it? Clearview AI scraped pictures from Facebook, YouTube, Venmo, and many other sites to compile its massive database. The company claims the information is ‘public’ and thus does not violate any privacy laws. So who exactly uses this software? Clearview maintains that it works with law enforcement agencies and that its product is not meant for public consumption.


Clearview CEO Hoan Ton-That

More information about Clearview AI can also be found at their website: https://clearview.ai/
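
Clearview has not disclosed how its system is built, but the premise described above, matching one probe photo against billions of scraped ones, is at its core a nearest-neighbor search over face embeddings. Here is a minimal Python sketch of that idea, with made-up vectors standing in for a real embedding model; nothing here reflects Clearview’s actual code.

    import numpy as np

    # Pretend every scraped photo has been run through a face-embedding model,
    # yielding one vector per profile (these vectors are fabricated).
    database = {
        "profile_A": np.array([0.11, 0.80, 0.35]),
        "profile_B": np.array([0.90, 0.10, 0.25]),
    }

    def identify(probe, threshold=0.9):
        """Return the profile whose embedding is most similar to the probe photo."""
        best, best_sim = None, -1.0
        for name, emb in database.items():
            sim = float(probe @ emb / (np.linalg.norm(probe) * np.linalg.norm(emb)))
            if sim > best_sim:
                best, best_sim = name, sim
        return best if best_sim >= threshold else None

    print(identify(np.array([0.10, 0.82, 0.33])))  # -> profile_A

The simplicity is the point: once the embeddings exist, a single photo becomes a key into everything else ever scraped about a person.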

Since the Times published the article bringing Clearview AI’s practices to light, there has been intense public backlash over privacy concerns. Solove’s taxonomy should help us decompose exactly what those concerns are.

Solove’s Taxonomy was published by Daniel J. Solove in January 2006. The basic components of the Taxonomy can be found in figure 2 below.


Solove’s Taxonomy

In the case of Clearview AI, the entire public is the data subject, or at least everyone who has posted videos or pictures or appeared in videos or pictures posted by others. Even a person who never posted to social media or a video site may be in the database simply because someone else posted their picture or caught them in the background of a photo. An obvious case of harm, according to Solove’s taxonomy, is surveillance: locations, personal associations, activities, and preferences could be derived by running a person’s photo through the system and viewing all of the results. While Clearview AI claims on its website that its product is meant to search, not surveil, it is very difficult to see how the data could not be used for surveillance. The distinction between search and surveillance is a hard one to draw, since searches can quickly produce results that amount to surveillance.

In terms of information processing, the entire purpose of the software is to aggregate data to achieve personal identification. With this type of information, great concerns arise about the use of this capability and its data. While Clearview maintains that its software serves the needs of law enforcement, the question of secondary use brings great concern. How long until law-enforcement-adjacent organizations, such as bounty hunters, private security, or private investigators, gain access to the software? What other government organizations will gain the same capability for purposes such as taxes, welfare disbursement, airport surveillance, or government employment checks? The road to secondary government uses of data is well paved by historical precedent, such as the use of genealogical DNA for criminal investigations or Social Security numbers becoming de facto government ID numbers. Furthermore, what happens when Clearview AI is purchased and used by governments to repress their people? Authoritarian regimes could use the software to fully track their citizens and turn it against anyone advocating human rights or democracy.


Pictures can be used for much more than law enforcement

Information dissemination also poses the risk that Clearview AI’s results could be distributed to others who would use them for nefarious purposes. For instance, if a bad actor in a government agency shares a search of a friend or co-worker, what harm could that cause the person whose images were searched? And what happens when the software is hacked or reverse engineered so that the general public can run the same searches, or searches are sold on the dark web? It is not a far cry to imagine a person being blackmailed over search results. We have already seen hackers encrypt hard drives to hold people ransom for money or explicit images, so this would be another extension of the same practice.

Finally, invasion is a real threat to all data subjects. A search of a person’s image that reveals not just who they are but what they are doing, when they are doing it, and with whom could be used in many ways to affect that person’s day-to-day decisions and activities. For instance, if this information can be sold to retailers or marketers, a person may find a targeted advertisement placed at just the right place and time to influence their decision. Furthermore, the aforementioned bad-actor scenario could lead to stalking or invasion of a personal residence if significant details about a person are known. A few iterations from now, this technology could find its way onto an AR/VR IoT device that identifies people, and perhaps some level of their personal information, as they are seen in public. A person could simply wear the device and see who each person is that passes them in a mall or on a street.

In conclusion, Solove’s taxonomy does help deconstruct the privacy concerns associated with Clearview AI’s facial recognition capability. These serious concerns are unlikely to go away anytime soon, but unfortunately, once this type of technology is developed, it usually does not go away either. Society will have to grapple more and more with the growing power and capability of facial recognition.

References:

https://www.popularmechanics.com/technology/security/a30613488/clearview-ai-app/
https://en.wikipedia.org/wiki/Clearview_AI

Clearview AI CEO Defends Facial Recognition Software
Solove, Daniel J., ‘A Taxonomy of Privacy’, University of Pennsylvania Law Review, Jan. 2006.

Is Google’s Acquisition of Fitbit Healthy For Users?
By Eli Hanna | February 28, 2020

Fitbit Overview

Fitbit is a company that offers wearable fitness-tracking devices and online services to help users track their activity and fitness goals. The devices track the number of steps taken, heart rate, sleep patterns, and other fitness-related information. This data is displayed on a dashboard in the Fitbit mobile and web applications, enabling users to view their activity and track their progress toward goals. With over 28 million active users and 100 million devices sold, Fitbit has accumulated a wealth of fitness data on its users.

Given this wealth of user information, Fitbit considers consumer trust critical to its business and promises never to sell personal user information. In fact, its privacy policy states that personal information will not be shared with third parties, with three exceptions:

  • When a user agrees or directs Fitbit to share information. This includes participating in a public challenge or linking your account to an external social media account.
  • For external processing. Fitbit transfers information to external service providers who process the information according to the Fitbit privacy policy.
  • For legal reasons to prevent harm. Where provided by law, Fitbit will provide information to government officials. In such a case, it is Fitbit’s policy to notify users unless they are prohibited from doing so or if there is an exigent circumstance involving a danger of death or serious injury.

Acquisition By Google

On November 1, 2019, Fitbit announced that it had agreed to be acquired by Google. From Fitbit’s perspective, the acquisition means more resources to invest in wearable technology and the ability to make its products and services available to a larger population. Google also benefits from the acquisition by purchasing the rights to a popular brand of wearable technology, an area where it has struggled to compete with Apple.

Privacy Concerns
Despite the business opportunities this acquisition offers, Fitbit users have voiced concerns that the deal may put their privacy at risk. In contrast to Fitbit’s promise to protect users’ personal information, Google has built its business by collecting user data through its services and selling that information to advertisers. This leaves some customers wondering if their sleeping or exercise habits will soon be sold to the highest bidder.

For their part, Fitbit and Google have promised that personal information collected from Fitbit users will never be sold and that “Fitbit health and wellness data will not be used for Google ads.” However, regulators remain unconvinced. In February 2020, the European Data Protection Board (EDPB) issued a statement: “There are concerns that the possible further combination and accumulation of sensitive personal data regarding people in Europe by a major tech company could entail a high level of risk to the fundamental rights to privacy and to the protection of personal data.” The EDPB also directed Google and Fitbit to conduct a “full assessment of the data protection requirements and privacy implications of the merger”.

Where It Stands

While the proposed acquisition of Fitbit by Google may provide the resources needed to develop better fitness trackers and services for users, both companies have a long way to go to convince consumers and regulators that personal information will be handled responsibly. To address these concerns, both companies should update their privacy policies to inform users how their data will be used by Google. In addition, they should take proactive steps to identify how aggregated or de-identified information may be combined with other information collected by Google and outline safeguards to ensure user privacy remains intact.
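
That last point deserves emphasis, because “de-identified” offers thinner comfort than it sounds: records stripped of names can often be re-linked to people through quasi-identifiers shared with another dataset. Here is a toy Python linkage sketch with entirely fabricated rows; the fields and names are illustrative, not real Fitbit or Google data.

    # "De-identified" fitness records: no names, but quasi-identifiers remain
    fitness = [
        {"zip": "94110", "birth_year": 1985, "sex": "F", "resting_hr": 52},
        {"zip": "98052", "birth_year": 1972, "sex": "M", "resting_hr": 71},
    ]

    # A second dataset that carries identities and the same quasi-identifiers
    accounts = [
        {"name": "alice@example.com", "zip": "94110", "birth_year": 1985, "sex": "F"},
        {"name": "bob@example.com", "zip": "98052", "birth_year": 1972, "sex": "M"},
    ]

    # Joining on the quasi-identifiers re-attaches a name to the "anonymous" data
    for rec in fitness:
        for acct in accounts:
            if all(rec[k] == acct[k] for k in ("zip", "birth_year", "sex")):
                print(acct["name"], "-> resting heart rate", rec["resting_hr"])

This is why regulators focus on what other datasets the acquirer already holds, not just on whether names are removed from the acquired one.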

A Privacy Policy To Which Nobody Agreed
By Andrew Morris | February 24, 2020

On Monday, February 10th, Attorney General Barr announced an indictment of four members of China’s military for hacking into Equifax.

Equifax operates in a unique space: like Facebook, it has troves of data about a significant number of people, specific data about financial balances, transactions, payment histories, and creditworthiness. The data may not be as socially personal as Facebook’s, but it is every bit as sensitive, if not more so. Unlike Facebook, however, nobody agreed to house their data there.

Equifax doesn’t have a privacy policy as much as a marketing page about privacy[1] and a “privacy statement”[2].

In this document, Equifax takes care to ensure compliance with laws and best practices around data management and correction, right up until the point where sensitive data is involved.

It is worth noting that the California Consumer Privacy Act (CCPA) permits California residents to manage and delete their data, and a dedicated page[3] details those rights. However, when I attempted to actually exercise these rights (2/21/2020 at 7:39pm PST), the dedicated site was unresponsive to requests.

Given the scope of recent breaches (147 million US residents affected), one might expect regulators and government agencies to address consumer rights in the United States. The FTC recently made a statement about the Equifax data breach and accompanied it with some additional information.[4] On that page, there is a telling question and answer from the FTC:

Q: I don’t want Equifax to have my data. What can I do?

A: “Equifax is one of three national credit bureaus. These companies collect information about your credit history, such as how many credit cards you have, how much money you owe, and how you pay your bills. Each company creates a credit report about you, and then sells this report to businesses who are deciding whether to give you credit. You cannot opt out of this data collection. However, you can review your credit report for free and freeze your credit.”

In other words, the financial credit system is so essential to commercial operations, the FTC has decided that this data collection is effectively mandatory for most of America.

This organizational system, where private organizations are responsible for infrastructure and data management for the financial system, is not unique to the United States. Some examples highlight the key differences:

  • Germany relies on a company called Schufa Holding AG. [5] However, it provides customers the rights to erase, rectify, and restrict processing of personal data under the GDPR. [6]
  • Austria relies on a company called Kreditschutzverband von 1870 (KSV1870 for short; literally, Credit Protection Association of 1870), which operates a blacklist-style credit list. This type of system is not ideal for granting opt-out rights, and yet it does allow the Austrian Data Protection Authority to intervene. [7]
  • The UK uses a variety of companies. One of them is TransUnion, which maintains a dedicated page on the right to delete data [8]; exercising it requires some discussion and acknowledgment of the potential consequences, but there is a process to address it.

These exceptions seem to be limited to Europe: anywhere the General Data Protection Regulation (GDPR [9]) applies, data subjects have rights. To summarize the legislation, these rights include simple terms and conditions explaining consent, timely notification of data breaches, the right to access your data, the right to be forgotten, data portability, and privacy by design. The GDPR also requires appropriate technical and organizational measures to ensure a level of security commensurate with the risk.

In other words, many of the protections built into the GDPR would address both the rights of data subjects and potentially help some of the operational elements that permitted the Equifax data breach. Consumers and data subjects in the United States would benefit from either an expansion of the CCPA or GDPR to cover all residents.

Can You Trust Facebook with Your Love Life?
By Ann Yoo Abbott | February 21, 2020

If you have ever heard about the Facebook data privacy scandal or its emotion experiment, you probably don’t trust Facebook with your personal data anymore. Facebook let companies such as Spotify and Netflix read users’ private messages, and it was sued for letting the political consulting firm Cambridge Analytica access data from some 87 million users in 2018. It doesn’t end there: for one week in 2012, Facebook altered the algorithms it uses to determine which status updates appeared in the News Feed of 689,003 randomly selected users (about 1 of every 2,500 Facebook users), and the results of this study were published in the Proceedings of the National Academy of Sciences (PNAS).

Recently, Facebook launched its new dating service in the US. The company has been advertising that privacy is important when it comes to dating, so it consulted with experts in privacy and consumer protection and embedded privacy protections into the core of Facebook Dating. Facebook says it is committed to protecting people’s privacy within Facebook Dating so that it can create a place where people feel comfortable looking for a date and starting meaningful relationships. Let’s see if this is really the case.

Facebook Dating Privacy & Data Use Policy

Facebook Dating’s privacy policy is an extension of the main Facebook Data Policy, and it includes a warning that Facebook users may learn you’re using Facebook Dating via mutual friends.

Here are some of the highlights from Facebook’s Data Policy as far as what information is collected when you create a profile and use Facebook Dating:

Content: Facebook collects the “content, communications and other information you provide” while using it, which includes what you’re saying to your Facebook Dating matches. “Our systems automatically process content and communications you and others provide to analyze context and what’s in them.”

Connections: Facebook Dating will also analyze what Facebook groups you join, who you match and interact with, and how – and analyze how those people interact with you. “…such as when others share or comment on a photo of you, send a message to you, or upload, sync, or import your contact information.”

Your Phone: Facebook Dating collects a lot of information from your phone, including the OS, hardware & software versions, battery level, signal strength, location, and nearby Wi-Fi access points, app and file names and types, what plugins you’re using and data from the cookies stored on your device. “…to better personalize the content (including ads) or features you see when you use our Products on another device…”

Your Location: To get suggested matches on Facebook Dating, you need to allow Facebook to access your location. Facebook collects and analyzes all sorts of things about where you take your phone, including where you live and where you go frequently. Even Bluetooth signals and information about nearby Wi-Fi access points, beacons, and cell towers are part of the information they collect. Facebook also analyzes what location info you choose to share with Facebook friends, and what they share with you.

This list does not include everything that Facebook takes from us. Even beyond all this information, they have more listed in their Data Policy, and you’ll want to read it for everything Facebook discloses about what it collects and how it’s used. Do you think Facebook collecting so much of our information is justified? Don’t you think it’s excessive?

Why the Biggest Technology Companies are Getting into Finance
By Shubham Gupta | February 16, 2020

With the rise in popularity of “fintech” companies such as PayPal, Robinhood, Square, Plaid, Affirm, and Stripe, the general public is becoming more and more used to conducting monetary transactions through their phones rather than with their wallets. And of course, the biggest tech companies are trying to claim a piece of the pie as well: Apple and Amazon both offer a credit card, Google is planning to offer checking accounts to users of Google Pay, and Facebook is going as far as creating a new currency called Libra.

Of course, by expanding into these new financial products, big tech companies are growing their revenue streams and aiming to immerse users ever deeper in their ecosystems. However, one of the biggest yet least visible advantages of offering these products is that they open the door for big tech companies to collect financial data. For example, processing a payment through a service such as Apple Pay or Google Pay allows the company to record what the consumer bought, when they bought it, where they bought it, and how much they spent. This information could be used to better understand consumer spending behavior and offer more targeted advertising to consumers.

Another danger of these services is security. Tech companies having access to your sensitive financial information could lead to that information being exposed in the event of a data breach or hack; in the past, both Google and Facebook have suffered numerous breaches that exposed millions of consumers’ social media and email accounts. Additionally, because big tech companies are not held to the same regulations as banks and other financial institutions, they have more latitude in how they can operate with your finances than banks do. These worries are reflected in recent surveys asking consumers whether they trust big tech corporations with their finances: trust in tech companies handling finances hovers around 50 to 60%, and Facebook, probably due to its history of privacy violations, ranks lowest with around a 35% approval rating.

That’s not to say that big tech’s contributions in the finance space are without merit. Apple’s latest credit card, along with being a sleek piece of titanium, offers users helpful charts and visualizations to keep track of their spending and eliminates hidden fees. Facebook’s Libra, despite having many partners back out of the deal, is envisioned to use blockchain to process payments, making them cheaper, faster, and more secure. Amazon also offers solid cash back on purchases made on Amazon.com, at Whole Foods, and at various restaurants, gas stations, and drug stores.

With such a wide variety of products and services already on offer, finance is only the latest frontier for the tech giants. With the surge in popularity of these products, especially among the younger generation, more and more companies are taking notice and trying to innovate in this space as well. However, as companies continue to innovate new financial products and solutions, government regulation is quick to follow.

References:

https://www.ngpf.org/blog/question-of-the-day/qod-which-tech-company-would-you-trust-the-most-to-handle-your-finances-amazon-google-or-apple/

https://9to5mac.com/2019/08/02/apple-card-details/