Privacy Concerns for Smart Speakers
By Anonymous | June 18, 2021

It is estimated that by the end of 2019, 60 million Americans owned at least one smart speaker at home. In the roughly seven years since Amazon's Alexa arrived in 2014, people have grown more reliant on smart speakers to help with mundane tasks such as answering questions, making calls, or scheduling appointments without ever opening a phone. Over the same network, these smart devices can also connect to the core systems of your home, such as the lighting, the thermostat, or even the locks. Although these devices bring many benefits to everyday life, one can't help but question some downsides, especially when it comes to privacy.

For those who own a smart speaker such as a Google Home or an Alexa-enabled Echo, how many times have you noticed the system responding to your conversation without actually being called on? If your next question is whether smart devices are always listening, I am sorry to inform you: yes, they are always listening.

Google Home

Even though Google Home is always listening, it is not always recording your conversations. Most of the time it is in standby mode, waiting for you to activate it by saying "Hey Google" or "Okay Google." However, you may notice that Google Home sometimes activates without you saying either phrase. This happens because parts of an ordinary conversation can sound similar enough to the wake words to trigger the device to start recording.

In a study by researchers at Northwestern University and Imperial College London, the Google Home Mini exhibited an average of 0.95 accidental activations per hour while the TV show The West Wing was playing, triggered by words that sounded like a potential wake word. This frequency is troubling from a privacy standpoint, especially in the form of information collection and surveillance. Users aren't consenting to being watched, listened to, or recorded while having conversations in their own homes, and the possibility can leave them feeling inhibited or creeped out.
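To make that metric concrete, here is a minimal sketch of how an activation rate like the study's could be computed from a log of unprompted wake events. The event times and session length below are invented for illustration, not the researchers' data.

```python
# Hypothetical wake-event log from one evening of TV playback; the timestamps
# and session length are invented for illustration, not taken from the study.
wake_events = ["19:04:12", "19:51:40", "21:10:03"]  # times the speaker lit up unprompted
monitoring_hours = 3.0                              # how long the TV audio was playing

rate = len(wake_events) / monitoring_hours
print(f"{rate:.2f} accidental activations per hour")  # 1.00 here, vs. 0.95 in the study
```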

Alexa

Alexa, on the other hand, has had its fair share of privacy invasions. In 2018, an Amazon customer in Germany was mistakenly sent about 1,700 audio files from someone else's Echo, providing enough information to name and locate the unfortunate user and his girlfriend. Amazon attributed the incident to human error without offering any further explanation. It has also been revealed that the top five smart home device companies have used human contractors to analyze a small percentage of voice-assistant recordings. Although the recordings are anonymized, they often contain enough information to identify the user, especially when they touch on medical conditions or other private conversations.

How to secure the privacy of your smart speaker

Based on some tips from Spy-Fy, here are some steps you can take to secure your Google Home device. Other devices should have a similar process.

● Check to see what the device has just recorded by visiting the Assistant Activity page.

● If you ever accidentally activate Google Home, just say "Hey Google, that wasn't for you" and the assistant will delete what was recorded.

● You can set up automatic data deletion on your account or tell your assistant “Hey Google, delete what I said this week”.

● Turn off the device if you are ever having a private conversation with someone. This will ensure that the data is not being recorded without your permission.

Another tip is to limit what the smart speaker is connected to in case of a data breach. The best option is to keep smart devices separate from devices holding sensitive information by putting them on their own Wi-Fi network.
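One way to sanity-check that separation is to see which devices actually answer on each network. The sketch below uses the third-party scapy library (my choice for illustration; any ARP-scanning tool works) to list devices responding on a given subnet. The subnet addresses are hypothetical; run it on the IoT network and on your main network, and the smart speaker should only appear on the former.

```python
# Requires the third-party "scapy" package and root privileges to send raw packets.
from scapy.all import ARP, Ether, srp

def arp_scan(subnet: str):
    """Return (ip, mac) pairs for devices that answer an ARP probe on the subnet."""
    answered, _ = srp(
        Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet),  # broadcast "who-has" requests
        timeout=2,
        verbose=False,
    )
    return [(reply.psrc, reply.hwsrc) for _, reply in answered]

if __name__ == "__main__":
    # Hypothetical subnets: a separate IoT/guest network and the main network.
    for label, subnet in [("IoT network", "192.168.20.0/24"), ("main network", "192.168.1.0/24")]:
        print(label, arp_scan(subnet))
```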

References:

1. https://routenote.com/blog/smart-speakers-are-the-new-big-tech-battle-and-big-privacy-debate/

2. https://www.securityinfowatch.com/residential-technologies/smart-home/article/21213914/the-benefits-and-security-concerns-of-smart-speakers-in-2021

3. https://spy-fy.com/google-home-and-your-privacy/

4. https://moniotrlab.ccis.neu.edu/smart-speakers-study-pets20/

5. https://www.theguardian.com/technology/2019/oct/09/alexa-are-you-invading-my-privacy-the-dark-side-of-our-voice-assistants

Life360
By Anonymous | June 18, 2021

Life360 is a location-sharing app that launched in 2008 and has since accumulated 27 million users. The company claims its mission is to "help families better coordinate and stay protected" by allowing open communication and complete transparency for frictionless coordination of hectic schedules. However, the actual effect this app (and other tracking apps) has on families, especially parent-child relationships, goes far beyond coordination.

Life360 UI

Safety
One of the app's main selling points is its safety features. Life360 operates on a freemium basis: it is free to download and use, and users can pay extra to gain access to additional features. Non-paying users get live location updates, driving speed, cell-phone battery levels, and saved places. Paying users additionally get roadside assistance, driving alerts, crash detection, automatic SOS, and access to 30-day travel histories. These features appeal especially to parents who want to make sure their kids are staying safe and out of trouble. As one can imagine, however, they can also end up being overbearing. A parent may call out their child for speeding after looking at the maximum speed recorded during a drive, when the child was only speeding for a few seconds to overtake a slower car. Although the app provides features meant to increase safety, their excessiveness may actually produce a false sense of security as children look for ways around being surveilled. Kids may simply leave their phones behind when going out and end up in an emergency without a way to contact their parents. Parents end up outsourcing the responsibility of keeping their children safe without actually investing the time and energy needed to build a healthy dialogue. Alternatively, there have also been cases where kids secretly download the app onto their parents' phones to be notified when their parents are coming home.
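To see why a single "maximum speed" number can mislead, here is a small sketch using invented one-second speed samples from a hypothetical drive. The headline statistic a parent sees hides how brief the episode actually was.

```python
# Hypothetical 1-second speed samples (mph) from a single drive; values are invented.
speeds = [28, 30, 31, 33, 45, 47, 44, 32, 30, 29, 28, 27]
limit = 35

max_speed = max(speeds)
seconds_over = sum(1 for s in speeds if s > limit)
share_over = seconds_over / len(speeds)

print(f"max speed: {max_speed} mph")               # what the app's summary highlights
print(f"time over limit: {seconds_over}s "
      f"({share_over:.0%} of the drive)")          # the context the summary omits
```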

Life360 Payment Plans

Invasion of Privacy
Children do need regular parental supervision, but there is a fine line between supervision and surveillance. Adults today are more active in managing their kids' lives than ever before, and despite our legal system's strong deference to parental freedoms and parental rights, using Life360 to monitor kids this way may well be an invasion of privacy. In an ordinary setting, individuals can choose whether or not to use the app or turn on their location. In the parent-child context, however, children are often dependent on their parents and must do as they are asked. Realistically, as long as kids are living at home, there is no real privacy. Even as college students, as long as they are financially dependent on their parents, they don't have full freedom to choose.

Impact on Parent-Child relationships
Installing an app like Life360 on a child's phone may undermine their trust and their ability to practice independence. The convenience and peace of mind parents gain from being able to just check the app whenever they want comes at the cost of communication with their child and the focus needed to build a real relationship. Children no longer get to experience the freedom their own parents had of just being "back by dark" and are instead pushed to follow schedules that keep their parents in the loop at all times. This kind of surveillance adds unnecessary stress: even if they aren't doing anything harmful, kids feel pressured to notify their parents about anything unexpected that comes up, such as stopping for ice cream or dropping something off at a friend's house. The app's presence leads kids to feel like they're constantly being watched, even if their parents aren't always monitoring. Even from the parents' perspective, there are some things they would rather not know. For example, if the app reports that their child was speeding, it becomes difficult to ignore that piece of information. The use of tracking apps may also signal a lack of faith in children, which can be deeply disheartening and discouraging. It can make children less likely to confide in their parents when problems arise outside the app's scope of detection.

Is There a Solution?
Life360 is a prime example of how well-intended tools can be misused or have unintended consequences. The availability of such a product has the power to shape parent behavior: parents who may never have thought such a product was necessary now feel they should use one simply because the option exists. They are likely to jump in with the idea that more safety measures are always better, without fully understanding the possible repercussions of using the app. Additionally, the presence of so many features puts pressure on parents to use and check all of them. A "crash detection" feature can cause parents to stress out and become more anxious than they otherwise would. The app can change people's behavior in ways that likely were never intended, adding stress to both parents' and children's lives. It can work well for adults who can make their own decisions about whether or when to use it: they can stay safer walking home at night and easily share their location if lost or stranded. But in the parent-child relationship, the dynamics make the app's use and consequences complicated. This raises the question of what responsibilities the creators of these apps have. Or does it fall entirely to the user to make sure the app is used responsibly?

https://www.wired.com/story/life360-location-tracking-families/

https://www.life360.com/privacy_policy/

https://www.theintell.com/news/20190809/tracking-apps-spark-debate-over-protection-and-privacy/1

Why Tracking Apps Are More Harmful Than Helpful (OPINION)

https://www.forbes.com/sites/thomasbrewster/2020/02/12/life360-comes-at-you-fast–cops-use-family-surveillance-app-to-trace-arson-suspect/?sh=5518dbd5380a

A New Generation of Phone Hackers: The Police
By Anonymous | June 18, 2021

Hackers. I challenge you to picture the "prototypical hacker." What comes to mind? Is it a recluse in a noticeably worn hoodie, sitting alone in the dark, hunched over a desktop?

That description may have been accurate at one time, but constant changes in technology have brought an evolution in who counts as a "hacker." One group in particular is becoming increasingly associated with the title and emerging into the spotlight: law enforcement.

A cartoon of a police officer chasing a figure pieced together from popular iPhone apps. Boris Séméniako

A report by Upturn found that more than 2,000 agencies across all 50 U.S. states have purchased tools to get into locked, encrypted phones and extract their data. Upturn's researchers estimate that U.S. authorities have searched more than 100,000 phones over the past five years. The Department of Justice argues that encrypted digital communications hinder investigations and that, for protections to exist, there must be a "back door" for law enforcement. Google and Apple have not complied with these requests, but agencies have found the tools they need to hack into suspects' phones. The use of these tools is justified by the need to investigate serious offenses such as homicide, child exploitation, and sexual violence.

In July 2020, police in Europe made hundreds of arrests after hacking into an encrypted phone network called EncroChat. EncroChat provided specially altered phones (no camera, microphone, or GPS) with the ability to immediately erase compromising messages. Authorities in Europe hacked into these devices to detect criminal activity. The New York Times reports that police in the Netherlands were able to seize 22,000 pounds of cocaine, 154 pounds of heroin, and 3,300 pounds of crystal methamphetamine as a result of the intercepted messages and conversations.

However, these tools are also being used for offenses that have little or no relationship to a mobile device. Many logged offenses in the United States are not digital in nature, such as public intoxication, marijuana possession, and graffiti. It is difficult to see why hacking into a mobile device, an extremely invasive investigative technique, would be necessary for these kinds of alleged offenses. The findings of the Upturn report suggest that many police departments can tap into highly personal and private data with little oversight or transparency: only half of the 110 large U.S. law enforcement agencies surveyed have policies on handling data extracted from smartphones, and merely 9 of those policies contain substantive restrictions.

A worker checking the innards of an iPhone at an electronics repair store in New York City last month. Eduardo Munoz/Reuters

An important question remains: what happens to the extracted data after it is used in a forensic report? Few policies clearly define how long extracted data may be retained, and the lack of clarity and regulation surrounding this "digital evidence" limits the protection available to most Americans. Data extracted from the cloud raises further challenges. Because law enforcement has tools for siphoning and collecting data from cloud-based accounts, it can view a continuously updating stream of information. Some suggest this continuous flow of data should be treated as a wiretap and require a wiretap order. However, Upturn's researchers could not find a single local agency policy that provides guidance or control over data extracted from the cloud.

Undoubtedly, the ability to hack into phones has given police the leads needed to make many arrests. However, the lack of regulation and general oversight of these practices also threatens the safety of American citizens. Public institutions have often been thought to lag behind in adopting the latest technologies, and some argue that if criminals are using digital tools to commit offenses, then law enforcement should be one step ahead with those same technologies. This raises the question: is it fair or just for law enforcement to have the ability to hack into citizens' phones?

References:
Benner, K., Markoff, J., & Perlroth, N. (2016, March). Apple's New Challenge: Learning How the U.S. Cracked Its iPhone. Retrieved from New York Times: https://www.nytimes.com/2016/03/30/technology/apples-new-challenge-learning-how-the-us-cracked-its-iphone.html

Koepke, L., Weil, E., Urmila, J., Dada, T., & Yu, H. (2020, October). Mass Extraction: The Widespread Power of U.S. Law Enforcement to Search Mobile Phones. Retrieved from Upturn: https://www.upturn.org/reports/2020/mass-extraction/

Nicas, J. (2020, October). The Police Can Probably Break Into Your Phone. Retrieved from New York Times.

Nossiter, A. (2020, July). When Police Are Hackers: Hundreds Charged as Encrypted Network Is Broken. Retrieved from New York Times.

Tinder Announces Potential Background Check Feature: What Could Possibly Go Wrong?
By Anonymous | June 18, 2021

In March 2021, Tinder, along with other Match Group services, publicly announced that it would allow users to run immediate background checks on potential dating partners. The dating service plans to partner with Garbo, a female-founded non-profit that runs checks using just a name and phone number and aims to prevent gender-based violence in the midst of an unfair justice system. According to its website, Garbo's mission is to provide "more accountability, transparency, and access to information to proactively prevent crimes. We provide access to the records and reports that matter so individuals can make more informed decisions about their safety." The platform provides these records at a substantially lower cost than for-profit background-check corporations.

Though well-intentioned in seeking to ensure the safety and well-being of users, the partnership raises questions about user protections and the implications of digital punishment. For one, putting public records at users' fingertips might cause concern, especially for people with inaccurate records attached to their names; according to a Slate article on the nature of online criminal records, getting such records removed is a taxing process, and that virtual footprint can tarnish an individual's name for life. Public record data is generally error-prone, and there would need to be accountability and transparency about how often the data is updated and how representative it is of the general population, since the underlying data collection is likely to disproportionately affect marginalized communities. Additionally, offenders could use aliases or misspellings to slip past a background check.

Garbo's technology is still relatively new, and little information is available about how it collects criminal cases or maintains quality control when accessing a criminal database. Can a full name and phone number alone support an effective identity-matching process? What precautions are taken to ensure accurate information? How high would error rates be? If there are false positives and misidentifications, will Garbo hold itself accountable? Moreover, the non-profit's current privacy policy does not explicitly disclose what data will be provided to a user who requests access to criminal records.
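Garbo has not published how its matching works, but a generic illustration shows why name-based record linkage is error-prone. The toy sketch below (my own example, not Garbo's method) scores a queried name against invented "public records" using simple string similarity; several distinct people clear a plausible threshold, which is exactly the false-positive risk raised above.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1]; real record linkage is more sophisticated."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Invented records and query; no real database or person is represented here.
records = ["Jon Smith", "John Smyth", "John A. Smith", "Joan Smith"]
query = "John Smith"

for name in records:
    score = similarity(query, name)
    verdict = "possible match" if score >= 0.85 else "no match"
    print(f"{name:>15}: {score:.2f} -> {verdict}")
```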

Garbo's management has yet to confirm which types of crimes will carry the most weight in the dating sphere. So far, Garbo has decided not to include drug possession, primarily in recognition of the racial inequities surrounding such charges.

In addition, despite claims that Garbo provides low-cost checks, the collaboration with Tinder, a for-profit entity, could drive costs up, so that promise of accessibility may remain a distant one. This initiative is a step in the right direction, but until the public gets more information about how Garbo will maintain accountability and be transparent about the origins of its data, we can only hope for a safe, regulated, and fair experience for users on these apps.

Technology's Hidden Biases
By Shalini Kunapuli | June 18, 2021

As the daughter of science fiction film enthusiasts, I grew up watching futuristic movies including 2001: A Space Odyssey, Back to the Future, Her, Ex Machina, Minority Report, and more recently Black Mirror. It was always fascinating to imagine what technology could look like in the future; it seemed like magic that would work at my every command. I wondered when society would reach that point, where technology would aid us and exist alongside us. Over the past few years, however, I've realized that we are already living in that futuristic world. While we may not be commuting to work in flying cars just yet, technology is deeply embedded in every aspect of our lives, and there are subtle evils lurking in the fantasies of our technological world.

Coded Bias exists
We are constantly surrounded by technology, ranging from home assistants to surveillance technologies to health trackers. What many people do not realize, however, is that many of these systems meant to serve us all are actually quite biased. Take the story of Joy Buolamwini from the documentary Coded Bias. Buolamwini, a PhD researcher at the MIT Media Lab, noticed a major flaw in facial recognition software: as a Black woman, she was not recognized by the software, yet once she put on a white mask, it detected her. In general, "highly melanated" women have the lowest recognition accuracy in these systems. As part of her research, Buolamwini discovered that the datasets the facial recognition software was trained on consisted mostly of white males. The people building the models belong to a particular demographic, and as a result they compile datasets that look primarily like them. In ways like this, bias is coded into the algorithms and models we use.
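The practical lesson from Buolamwini's audit is to report accuracy per demographic group rather than a single overall number. Below is a minimal sketch of that kind of disaggregated evaluation; the results list is invented for illustration, not her data.

```python
from collections import defaultdict

# Invented evaluation results: (subgroup, was the face recognized correctly?)
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

# Overall accuracy hides the gap; per-group accuracy exposes it.
print(f"overall: {sum(correct.values()) / len(results):.0%}")
for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%}")
```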

Ex Machi-NO, there are more examples
The implications reach far beyond the research walls at MIT. In London, police intimidated and searched a 14-year-old Black boy after using surveillance technology, only to realize later that the software had misidentified him. In the US, an algorithm meant to guide decision-making in healthcare was built to predict which patients would need additional medical help so they could receive more tailored care. Even though the algorithm excluded race as a factor, it still ended up prioritizing White patients over Black patients, even though the Black patients in the data were actually in greater need.

Other minority groups are also negatively impacted by different technologies. Most notably, women tend to get the short end of the stick, with their accomplishments and experiences continually erased by human history and gender politics. A book I read recently, "Invisible Women" by Caroline Criado Perez, details several examples of gender data bias, some of which are so integrated into our daily lives that we rarely think about them.

For example, women are 47% more likely to be seriously injured in a car crash. Why? Women are on average shorter than men, so they tend to pull their seats farther forward to reach the pedals. However, this is not the "standard" seating position. Even the airbags are positioned for the size of an average male body. Crash test dummies are usually modeled on male bodies as well, leaving women at higher risk in a crash because designs haven't been tested on female-sized bodies.

Additionally, women are more likely to have a heart attack misdiagnosed because they don't present the "typical" symptoms. Female pianists are more likely to suffer hand injuries because piano keys are sized around the male handspan, and the average female handspan is smaller. The average smartphone is 5.5 inches long, which is uncomfortably large for many women's hands. Google Home is 70% more likely to recognize male speech because it is trained on a male-dominated corpus of voice recordings.

The list goes on and on. All of these examples center on the "standard" being male: the "true norm" or "baseline" condition is built around male experiences. Furthermore, the lack of gender diversity in the tech field means the teams developing most of these technologies and algorithms are primarily male as well. This itself leads to gender data bias across systems, because the teams building technologies implicitly focus on their own needs first, without considering the needs of groups they know little about.

Wherever there is data, there is bound to be an element of bias, because data reflects society, and society is itself inherently biased. The consequences of using biased data compound existing problems: unconscious bias in algorithms and models further widens the gap between different groups, creating more inequality.

Back to the Past: Rethinking Models
Data doesn't just describe the world these days; it is often used to shape it. More power and reliance are being placed on data and technology. The first step is recognizing that these systems are flawed. It is easy to rely on a model, especially when everything is a black box and you can get a quick, supposedly straightforward result out of it. We need to take a step back and ask ourselves whether we trust that result completely. Ultimately, models are not making decisions that are ethical; they are only making decisions that are mathematical. This means it is up to us as data scientists, as engineers, as humans, to recognize that these models are biased because of the data we provide to them. As a result of her research, Buolamwini founded the Algorithmic Justice League to push for laws that protect people's rights. We can take a page out of her book and start actively thinking about how the models we build and the code we write affect society. We need to advocate for more inclusive practices across the board, whether in schools, workplaces, hiring, or government. It is up to us to come up with solutions that protect the rights of groups that may not be able to protect themselves, and to be voices of reason in a world where people rely more and more on technology every day. Instead of dreaming about futuristic technology through movies, let us work together and build systems now for a more inclusive future; after all, we are living in our own version of a science fiction movie.

References
* https://www.wbur.org/artery/2020/11/18/documentary-coded-bias-review
* https://www.nytimes.com/2020/11/11/movies/coded-bias-review.html
* https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/
* https://www.mirror.co.uk/news/uk-news/everyday-gender-bias-makes-women-14195924
* https://www.netflix.com/title/81328723
* https://www.amazon.com/Invisible-Women-Data-World-Designed/dp/1419729071

The Ethicality of Forensic Databases
By Anonymous | June 18, 2021

Forensic data has been used in landmark cases to identify the killers and criminals behind many unsolved crimes. One such case was the 1999 murder of a 16-year-old girl in Kollum, a small village in the Netherlands. Iraqi and Afghan residents who were seeking asylum were widely blamed for the murder, which heightened racial and ethnic tensions in the area. Years later, forensic evidence from a semen sample determined that the murderer was of European descent. This was a turning point in the case and defused the racial tensions of the time. Cases like these, from wrongfully accused individuals being exonerated to serial killers being identified years later, have cast a positive light on forensic databases. The particular database that aided in the Kollum case was the Y-chromosome Haplotype Reference Database (YHRD). Today, the YHRD is predominantly used to solve sex crimes and paternity cases.
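For context, a Y-chromosome haplotype is essentially a list of repeat counts at named markers (loci) on the Y chromosome, and a database query amounts to comparing those counts. The sketch below is a toy illustration of that idea; the marker names are real Y-STR loci, but the repeat counts and the "database" are invented and do not reflect the YHRD's actual interface.

```python
# A Y-STR haplotype: repeat counts at named markers on the Y chromosome.
# Marker names are real loci; all counts and profiles below are invented.
crime_scene = {"DYS19": 14, "DYS390": 24, "DYS391": 11, "DYS393": 13}

database = {
    "profile_001": {"DYS19": 14, "DYS390": 24, "DYS391": 11, "DYS393": 13},
    "profile_002": {"DYS19": 15, "DYS390": 23, "DYS391": 10, "DYS393": 13},
}

matches = [
    profile_id for profile_id, profile in database.items()
    if all(profile.get(marker) == count for marker, count in crime_scene.items())
]
# Note: because the Y chromosome passes largely unchanged from father to son,
# a match implicates an entire paternal line, not just one individual.
print(matches)  # ['profile_001']
```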

The YHRD is a research database used in both academic and criminal laboratories. However, the ethics of the data in the YHRD have come under scrutiny. Over a thousand of the male profiles in the YHRD appear to have been collected without consent. Some of these profiles come from members of the Uyghur population, a predominantly Muslim minority in China. This is especially concerning given increasing reports that the Uyghur population is being persecuted, with allegations of potential ethnic cleansing by the Chinese government. DNA forcibly collected by the Chinese government could be used in a detrimental manner against the Uyghurs.

The ethics of forensic databases are not well regulated and need to be discussed more openly. Forensic data such as DNA and fingerprints is sensitive: it can implicate not only an individual but their family and descendants as well. The YHRD opens up a discussion about other databases that do not properly regulate consent and the dissemination of data. DNA collected by police forces is usually highly secured and used only during the investigation; once a case is closed, the data is typically erased. As large databases like the YHRD grow, new rules and regulations must be put in place to ensure both privacy and ethical use. Forensic data can be very beneficial in solving crimes, providing evidence, and even reconnecting loved ones. However, it can also be misused by governments and can implicate people and their relatives. For these reasons, we should take a deeper look at large-scale forensic databases and their ethics.

References
https://www.nature.com/articles/d41586-021-01584-w
https://yhrd.org/

Digital Contact-Tracing: Privacy vs. Security
By Anonymous | May 28, 2021

Since the outbreak of COVID-19 in early 2020, dozens of countries around the world have deployed contact-tracing apps in an attempt to identify people exposed to COVID-19 and stop onward transmission. In the United States, Google and Apple forged an unlikely partnership to develop an exposure notification system for both Android and iOS devices. While some countries like China have adopted a data-first approach, collecting large amounts of citizens' data at the cost of significant privacy intrusion, others such as the United States have taken a privacy-first approach, which protects citizens' data but severely limits what health officials and researchers can access. On top of this, a lack of trust in technology companies has undermined the efficacy of digital contact-tracing efforts in the United States.

A Wide Spectrum of Digital Contact-Tracing Methods

There are various forms of digital contact tracing, with different levels of privacy. For example, the Chinese government tracks its citizens' movements and locations through mandatory, color-coded QR codes assigned according to whether they have COVID-19 symptoms, based on either self-reporting or contact tracing. A green QR code allows free movement, as long as the holder scans their smartphone app before accessing public spaces such as public transportation, retail and shopping centers, restaurants, and places of employment.

Other, less privacy-intrusive methods do not involve monitoring user location and movement. In the United States, Apple and Google launched a Bluetooth-based tracing platform that lets users opt in to share data via Bluetooth Low Energy (BLE) transmissions and approved apps from health organizations. In this approach, users' smartphones exchange and record random Bluetooth keys broadcast as beacons when the users are near one another. An infected user may voluntarily enter a positive diagnosis into the app, which then uses the list of Bluetooth keys associated with the infected user to identify and notify others whose smartphones had been in close contact. Unlike GPS, BLE cannot track people's physical location or their movements. Furthermore, because the app broadcasts an anonymous key that cycles roughly every 15 minutes, the identity of the phone's user is never explicitly revealed. Even if a person shares that they've been infected, the app will only share the keys from the specific period in which they were contagious.
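A heavily simplified sketch of that decentralized design is shown below. The real Google/Apple protocol uses HKDF and AES for key derivation; the HMAC construction and the interval numbers here are stand-ins chosen to show the structure, not the actual specification.

```python
import hashlib
import hmac
import os

def rolling_id(daily_key: bytes, interval: int) -> bytes:
    """Derive the anonymous identifier broadcast during one ~15-minute interval."""
    return hmac.new(daily_key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Each phone keeps its daily key secret and broadcasts only the rolling IDs.
alice_key = os.urandom(16)

# Bob's phone records whatever IDs it hears nearby (here, two of Alice's intervals).
bob_heard = {rolling_id(alice_key, i) for i in (40, 41)}

# If Alice tests positive, she uploads her daily key(s). Bob's phone downloads them,
# re-derives that day's rolling IDs locally, and checks for overlap with what it heard.
derived_from_upload = {rolling_id(alice_key, i) for i in range(96)}  # 96 intervals per day
print("exposure detected:", bool(bob_heard & derived_from_upload))   # True
```

The key design point is that matching happens on Bob's phone: the server only ever sees uploaded daily keys, never who was near whom.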

Privacy Implications

First and foremost, a centralized data collection approach means that all Bluetooth, geolocation, and diagnosis information is compiled in a central system, usually run by public health authorities that may also share the data with third-party systems. With Google and Apple's system, there is no centrally accessible master list of phones that have matched, contagious or otherwise, since the central servers only maintain the database of shared keys rather than the interactions between those keys. Still, while Bluetooth-based apps collect only a random identifier from each user, it may be possible for a government agency or tech company to link metadata associated with that identifier, such as the smartphone's IP address, to the user's identity and location.
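As a toy illustration of that metadata risk (all values below are invented): even when the uploaded diagnosis keys are random, the server that receives them still sees ordinary connection metadata, and an IP address can often be resolved to a subscriber.

```python
# Invented upload log and subscriber mapping, purely to illustrate metadata linkage.
upload_log = [
    {"diagnosis_key": "a3f1c9...", "source_ip": "203.0.113.7", "time": "2020-06-01T14:02Z"},
]
ip_to_subscriber = {"203.0.113.7": "ISP account #1234 (hypothetical user)"}

for entry in upload_log:
    owner = ip_to_subscriber.get(entry["source_ip"], "unknown")
    # The "anonymous" key is now tied to an identifiable account via routine metadata.
    print(f"key {entry['diagnosis_key']} uploaded from {entry['source_ip']} -> {owner}")
```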

Zero-Sum Game?

While digital contact-tracing apps have had mixed success worldwide, low participation rates and privacy concerns have plagued such efforts in the United States. A central question is whether people should sacrifice privacy in exchange for security during crises such as the COVID-19 pandemic. In the United States, the answer was an overwhelming no. A key reason is that people don't trust tech companies or the government to collect, use, and store their personal data, especially their health and location information. Although user privacy and security were central to Apple and Google's design, Americans were not convinced. A survey conducted by The Washington Post and the University of Maryland in April 2020 found that 50% of smartphone users wouldn't use a contact-tracing app even if it promised anonymous tracking and reporting. Specifically, 56% said they did not trust big tech companies to keep the data anonymous, while 43% wouldn't trust public health agencies and universities. By June 2020, mistrust had grown: a new survey showed that 71% of respondents wouldn't use contact-tracing apps, with privacy cited as the leading reason. Contrary to the privacy paradox argument, Americans refused to use these apps in large part because of privacy concerns.

So what can we take away to prepare for the next crisis or emergency? First and foremost, robust data protections are needed to maintain consumer trust and confidence in the marketplace. Clear standards and laws should enable the responsible use of data rather than simply handcuff big tech and government agencies. Additionally, state entities, lawmakers, and ordinary Americans routinely face confusion navigating the complex and sometimes inconsistent privacy landscape. In aggregate, my conclusion is that the United States needs a set of baseline federal privacy laws that are enforceable and protect our personal information in good times and in times of crisis.

References