China’s Scary But Robust Surveillance System
By Anonymous | June 18, 2021

Introducing the Problem

In 2014, the Chinese government introduced a plan that would allow it to keep track of its citizens and score their behavior. The government envisioned a world in which its people are constantly monitored: where they shop, how they pay their bills, and even the type of content they watch. In many ways, it resembles the data collection that major US companies like Google and Facebook already do, but on steroids, and at least the Chinese government tells you that your every move is being watched. On top of that, you are judged and given a score based on your interactions and lifestyle. A high "citizen score" grants people rewards such as faster internet service, while posting content on social media that contradicts the Chinese government can lower it. Private companies in China work constantly with the government to gather data from social media and other behavior on the internet.

A key potential issue is that the government will be technically capable of factoring the behavior of a citizen's friends and family into his or her score. For example, a friend's anti-government political post could lower your own score, as sketched below. This type of scoring mechanic therefore has implications for relationships among an individual's friends and family. The Chinese government is taking this project seriously, and freedoms that one may take for granted in the US may be in jeopardy based on a person's score, such as obtaining a visa to travel abroad or even the right to travel by train or plane within the country. People understand the risks and dangers this poses; as one internet privacy expert says, "What China is doing here is selectively breeding its population to select against the trait of critical, independent thinking." However, because lack of trust is a serious problem in China, many Chinese actually welcome this potential system. Relating this back to the US, I wonder whether this type of system could ever exist in our country and, if so, what it would look like. Is it ethical for private companies to assist in massive surveillance and turn over their data to the government? Chinese companies are now required to assist in government spying while U.S. companies are not, but what happens when Amazon or Facebook are in the positions that Alibaba and Tencent are in now?
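
To make the network-scoring concern concrete, here is a minimal, purely hypothetical sketch in Python. The actual mechanics of China's scoring system are not public, so the baseline score, the penalty weight, and every name in the code are assumptions made only for illustration.

```python
# Hypothetical illustration only: the real scoring rules are not public.
# Shows how a citizen's score could be dragged down by friends' behavior.

def adjusted_score(base_scores, friendships, penalty_weight=0.2):
    """Lower each person's score by a fraction of their friends' penalties.

    base_scores: dict mapping person -> score before network effects
    friendships: dict mapping person -> list of friends
    penalty_weight: assumed fraction of a friend's deficit that spills over
    """
    baseline = 1000  # assumed "neutral" score
    adjusted = {}
    for person, score in base_scores.items():
        spillover = sum(
            max(0, baseline - base_scores.get(friend, baseline))
            for friend in friendships.get(person, [])
        )
        adjusted[person] = score - penalty_weight * spillover
    return adjusted

scores = {"you": 1000, "friend": 850}           # friend penalized for a post
graph = {"you": ["friend"], "friend": ["you"]}
print(adjusted_score(scores, graph))             # your own score drops too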

A key benefit for China of having so many cameras and so much surveillance throughout its major cities is that it helps identify criminals and keep track of crime. For example, in Chongqing, which has more surveillance cameras for its population than any other city in the world, the surveillance system scans the facial features of people on the streets from frames of video footage in real time. The scans are then compared against data that already exists in a police database, such as photos of criminals, and if a match scores high enough, typically 60% or higher, police officers are notified. One could argue that the massive surveillance system is beneficial for society, but if law officials are not transparent and do not enforce good practices, then there is an issue.
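
In pseudocode terms, the matching step is a similarity comparison against a watchlist with a notification threshold. Below is a minimal sketch of that idea: the 60% figure comes from the reporting above, while the cosine-similarity matcher, the embeddings, and every name in the code are simplifying assumptions rather than details of the actual system.

```python
# Simplified sketch of threshold-based face matching against a watchlist.
# A real system would use a learned face-embedding model; here a plain
# feature vector and cosine similarity stand in for the matcher.
import numpy as np

MATCH_THRESHOLD = 0.60  # threshold reported for notifying officers

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_frame(face_embedding, watchlist):
    """Return watchlist entries whose similarity exceeds the threshold."""
    alerts = []
    for person_id, known_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, known_embedding)
        if score >= MATCH_THRESHOLD:
            alerts.append((person_id, round(score, 2)))
    return alerts  # in a deployed system, a hit would notify officers

# Toy usage with made-up vectors
watchlist = {"suspect_042": np.array([0.9, 0.1, 0.3])}
frame_face = np.array([0.88, 0.15, 0.28])
print(check_frame(frame_face, watchlist))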


Got Venmo? Protect Your Privacy
By Anonymous | June 18, 2021

Phones With Venmo

Last month, BuzzFeed News discovered President Joe Biden's Venmo account and his public friends list. President Joe Biden's and First Lady Jill Biden's Venmo accounts were removed the day the news broke. The incident prompted Venmo to implement a new feature that allows users to hide their friends list. However, a user's friends list is public by default, so others will be able to see it unless the user manually chooses to hide it. The discovery of President Biden's account and Venmo's new feature have renewed concerns about Venmo's privacy. Here are answers to some commonly asked questions about Venmo and its privacy policy.

What Data Does Venmo Collect?

Currently, according to its privacy policy, Venmo collects a host of personal data, including your name, address, email, telephone number, information about the device you use to access Venmo, financial information (your bank account information), your SSN (or other government-issued identification number), geolocation information (your location), and social media information if you decide to connect your Venmo account with social media such as Twitter, Foursquare, or Facebook.

When you register for a personal Venmo account, you must verify your phone number, your email, and your bank account.

Why Should You Care About Venmo’s Privacy?

A lot of Venmo users view Venmo as a fun social media platform where they can share their transactions with accompanying notes and descriptions. They figure they’re not doing anything wrong so why should they care if their transactions are public? They don’t have anything to hide. It is not just about hiding bad information, although this may be some users’ goal, but also protecting good information from others. What do I mean by this?

According to Venmo's privacy policy, "public information may also be seen, accessed, reshared or downloaded through Venmo's APIs or third-party services that integrate with our products," meaning that all of your public transactions and associated comments are available to the public. Even non-Venmo users can discover your data by accessing the API.

In 2018, Mozilla Fellow Hang Do Thi Duc released "Public By Default," an analysis of all 207,984,218 public Venmo transactions from 2017. Through these transactions, she was able to uncover drug dealers, breakups, and the routine life of a married couple. She could tell where the married couple shopped, what days they usually went to the grocery store, which gas stations they used, and which restaurants they frequented. She also identified a drug dealer and where he lived based on his public transaction comments and the fact that his Facebook account was linked to his Venmo. Thus, Venmo transactions can act as a map of your daily activities. It can be quite easy to learn about an individual through both their transactions and friends list.
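
A rough sense of how such an analysis works: once public transactions are collected (the 2017 dataset was reportedly retrieved through Venmo's public API), inferring someone's routine is little more than grouping transaction notes by user and counting. The snippet below is an illustrative sketch over an invented local dataset, not Venmo's actual API or the real data.

```python
# Illustrative sketch: inferring routines from public transaction notes.
# The records below are invented; a real analysis would load the public
# transactions exposed through Venmo's API.
from collections import Counter

transactions = [
    {"user": "alice", "note": "groceries", "weekday": "Sunday"},
    {"user": "alice", "note": "groceries", "weekday": "Sunday"},
    {"user": "alice", "note": "gas", "weekday": "Friday"},
    {"user": "alice", "note": "sushi dinner", "weekday": "Friday"},
]

def routine_for(user, records):
    """Count (note, weekday) pairs to surface a user's recurring habits."""
    habits = Counter(
        (r["note"], r["weekday"]) for r in records if r["user"] == user
    )
    return habits.most_common()

print(routine_for("alice", transactions))
# [(('groceries', 'Sunday'), 2), (('gas', 'Friday'), 1), ...]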

Image of Venmo API

Your personal data may become more publicly available if you connect your account to third parties such as social media platforms. According to Venmo's privacy policy, data shared with a "third-party based on an account connection will be used and disclosed in accordance with the third-party's privacy practices" and "may in turn be shared with certain other parties, including the general public, depending on the account's or platform's privacy practices." This means that if you connect your account with a third party, Venmo and the third party will exchange personally identifiable information about you. The information Venmo shares about you is then subject to the third party's privacy policy, meaning that data is no longer protected by Venmo's privacy policy. If the third party's privacy policy allows personal information to be shared publicly, private information you shared with Venmo can then become public.

How Can You Protect Your Privacy?

You can protect your data by making both your transactions and friends list private; both are public by default. You can also make your past transactions private and prevent Venmo from collecting some of your location data by turning off location services for Venmo on your mobile device (Venmo's payment activity and privacy help page, listed in the references, explains how to do both). This should prevent anyone from publicly accessing your Venmo transactions or friends list and stop some geolocation tracking, although Venmo may still be able to infer your location.

Venmo Privacy Settings

Also be sure to read a company's privacy policy before you decide to connect your account with it in any way. Before connecting with any social media app, read the platform's privacy policy to see whether its privacy practices match what you would feel comfortable sharing, and do the same for any other third party that asks to connect with your account in the future.

You should also be cautious about your Venmo profile picture. You may figure if you regret a past Venmo profile picture, you can just delete this photo and post a new one. However, this is not the case. It is still possible to recover a user’s old Venmo profile picture after they have replaced it with a new one simply by changing the photo’s URL. Try to post photos that you do not mind being public for the foreseeable future.

In summary, privacy matters especially when it concerns financial data that reveals patterns about your lifestyle. Set your transactions and friends list to private, turn off location services, be wary of connecting your account to third parties, and post profile pictures that you do not mind being public.

References:

Duc Do Thi, Hang. (2018). Public By Default [Project]. https://publicbydefault.fyi/

How to Sign Up for a personal Venmo account. Venmo. (2021). https://help.venmo.com/hc/en-us/articles/209690068-How-to-Sign-Up-for-a-personal-Venmo-account.

Mac, R., McDonald, L., Notopoulos, K., & Brooks, R. (2021, May 15). We Found Joe Biden On Venmo. Here’s Why That’s A Privacy Nightmare For Everyone. BuzzFeed News. https://www.buzzfeednews.com/article/ryanmac/we-found-joe-bidens-secret-venmo.

Mozilla Foundation. (2019, August 28). Venmo, Are You Listening? Make User Privacy the Default. Mozilla . https://foundation.mozilla.org/en/blog/venmo-are-you-listening-make-user-privacy-default/

Notopoulos, K. (2021, May 19). Venmo Exposes All The Old Profile Photos You Thought Were Gone. BuzzFeed News. https://www.buzzfeednews.com/article/katienotopoulos/paypals-venmo-exposes-old-photos?ref=bfnsplash.

Payment Activity & Privacy. Venmo. (2021). https://help.venmo.com/hc/en-us/articles/210413717.

Perelli, A. (2021, May 30). Venmo added new privacy options after President Joe Biden’s account was discovered. Business Insider. https://www.businessinsider.in/tech/news/venmo-added-new-privacy-options-after-president-joe-bidens-account-was-discovered/articleshow/83074180.cms.

Photo Credits:

https://time.com/nextadvisor/credit-cards/venmo-guide/

https://publicbydefault.fyi/

https://mashable.com/article/venmo-cash-app-paypal-data-privacy/

 


Neurotechnologies, Privacy, and the need for a revised Belmont Report
By Filippos Papapolyzos | June 18, 2021

Ever since Elon Musk launched his futuristic venture Neuralink in 2016, the popularity of the field of neurotechnology, which sits at the intersection of neuroscience and engineering, has skyrocketed. Neuralink's goal has been to develop skull-implantable chips that interface one's brain with one's devices, a technology the founder said, in 2017, was 8 to 10 years away. This technology would have tremendous potential for individuals with disabilities and brain-related disorders, such as allowing speech-impaired individuals to regain their voice or paraplegics to control prosthetic limbs. The company's founder, however, has largely focused on advertising a more commercial version of the technology that would let everyday users control their smartphones or even communicate with each other using just their thoughts. To this day, the project is still very far from reality: MIT Technology Review has dubbed it neuroscience theater aimed at stirring excitement and attracting engineers, while other experts have called it bad science fiction. Regardless of the eventual success of brain-interfacing technologies, the truth remains that we are still very underprepared from a legal and ethical standpoint, given their immense privacy implications as well as the philosophical questions they pose.

The Link

Implications for Privacy

Given that neurotechnologies would have direct, real-time access to one's most internal thoughts, many consider them the last frontier of privacy. All our thoughts, including our deepest secrets and perhaps even ideas we are not consciously aware of, would be digitized and transmitted to our smart devices or the cloud for processing. Not unlike other data we casually share today, this data could be passed on to third parties such as advertisers and law enforcement. Processing and storing this information in the cloud would expose it to all sorts of cybersecurity risks that would put individuals' most personal information, and even their dignity, at risk. If data breaches today expose proxies of our thoughts, i.e. the data we produce, breaches of neural data would expose our innermost selves. Law enforcement could surveil and arrest individuals simply for thinking of committing a crime, and malicious hackers could make us think and do things against our will or extort money for our thoughts.

A slippery slope argument that tends to be made with regard to such data sharing is that, in some ways, it already happens. Smartphones already function as extensions of our cognition, and through them we disclose all sorts of information via social media posts or apps that monitor our fitness, for instance. A key difference between neurotechnologies and smartphones, however, is the voluntariness of our data sharing today. A social media post, for instance, constitutes an action, i.e. a thought that has been manifested, and is both consensual and voluntary in the clear majority of cases. Instagram may process our every like or engagement, but we still maintain the option of not performing that action. Neuralink would be tapping into thoughts that have not yet been manifested into action, because we have not yet applied our decision-making skills to judge the appropriateness of performing that action. Another key difference is the precision of these technologies: neurotechnologies would not be humanity's first attempt at influencing one's course of action, but they would certainly be the most refined. Lastly, the mechanics of how the human brain functions remain vastly unexplored, and what we call consciousness may be only the tip of the iceberg. If Neuralink were to expose what lies underneath, we would likely not be pleasantly surprised.

Brain Privacy

Challenges to The Belmont Report

Since its publication in 1979, The Belmont Report has been a milestone for research ethics and is still widely consulted by ethics committees, such as Institutional Review Boards in academic research, having established the principles of Respect for Persons, Beneficence, and Justice. With the rise of neurotechnologies, it is evident that these principles are no longer sufficient. The challenges brain-interfacing poses have led many experts to call for a globally coordinated effort to draft a new set of rules governing neurotechnologies.

The principle of Respect for Persons is rooted in the idea that individuals act as autonomous agents, which is an essential requirement for informed consent. Individuals with diminished autonomy, such as children or prisoners, are entitled to special protections to ensure they are not taken advantage of. Neurotechnologies could potentially influence one's thoughts and actions, thereby undermining one's agency and, as a result, one's autonomy. The authenticity of any form of consent provided after an individual has interfaced with neurotechnology would be subject to doubt; we would not be able to judge the extent to which third parties might have participated in the person's decision making.

From this point onwards, the Beneficence principle would also not be guaranteed. Human decision-making is not always rational and may work to one's detriment, but when a decision is voluntary it is seen as part of one's self. When harm is not voluntary, it is inflicted, and when consent is contested, autonomy is put into question. This means that any harm suffered by a person using brain-interfacing technologies could be seen as a product of those technologies and therefore as a form of inflicted harm. Since Beneficence is founded on the "do no harm" maxim, neurotechnologies pose a serious challenge to this principle.

Lastly, given that a private company would realistically sell the technology at a high price point, it would be accessible only to those who can afford it. If the device were to augment users' mental or physical capacities, this would severely magnify pre-existing inequalities based on income as well as willingness to use the technology. This poses a challenge to the Justice principle, as non-users could bear an asymmetric burden as a result of the benefits received by users.

In the frequently quoted masterpiece 1984, George Orwell speaks of a dystopian future where “nothing was your own except the few cubic centimeters inside your skull”. Will this future soon be past?

References

Can privacy coexist with technology that reads and changes brain activity?

https://news.columbia.edu/content/experts-call-ethics-rules-protect-privacy-free-will-brain-implants-and-ai-merge

https://www.cell.com/cell/fulltext/S0092-8674(16)31449-0

https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/index.html

https://www.csoonline.com/article/3429361/what-are-the-security-implications-of-elon-musks-neuralink.html

https://www.georgetowntech.org/publications-blog/2017/5/24/elon-musks-neuralink-privacy-protections-for-merged-biological-machine-intelligence-sa3lw

https://www.technologyreview.com/2020/08/30/1007786/elon-musks-neuralink-demo-update-neuroscience-theater/

https://www.sciencetimes.com/articles/31428/20210528/neuralink-brain-chip-will-end-language-five-10-years-elon.htm

https://www.theverge.com/2017/4/21/15370376/elon-musk-neuralink-brain-computer-ai-implant-neuroscience

https://www.inverse.com/science/neuralink-bad-sci-fi


Privacy Concerns for Smart Speakers
By Anonymous | June 18, 2021

It is estimated that by the end of 2019, 60 million Americans owned at least one smart speaker at home. Over the past seven years, ever since the rise of Amazon's Alexa in 2014, people have become more reliant on smart speakers to help with mundane tasks such as answering questions, making calls, or scheduling appointments without opening their phones. On the same network, these smart devices can also connect to the core systems of your home, such as lighting, temperature, or even locks. Although these devices bring many benefits to everyday life, one can't help but question some downsides, especially when it comes to privacy.

For those who own a smart speaker such as a Google Home or an Alexa device: how many times have you noticed the system respond to your conversation without your actually calling on it? If your next question is whether smart devices are always listening, I am sorry to inform you that yes, they are always listening.

Google Home

Even though Google Home is always listening, it is not always recording your every conversation. Most of the time, it is on standby, waiting for you to activate it by saying "Hey Google" or "Okay Google." However, you may notice that Google Home can accidentally be activated without your saying the activation phrases. This is because words in your conversation that sound similar to the wake phrase can trigger the device to start recording.

A study by researchers at Northeastern University and Imperial College London found that the Google Home Mini exhibited an average of 0.95 activations per hour, triggered by potential wake words, while playing episodes of The West Wing. This high rate is problematic from a privacy standpoint, especially in the form of information collection and surveillance. Users aren't consenting to being watched, listened to, or recorded while having conversations in their own homes, and it can often make them feel inhibited or creeped out.
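
The study's measurement essentially comes down to counting how often the assistant's wake-word detector crosses its trigger threshold during ordinary speech. Here is a minimal sketch of that counting; the confidence scores and the 0.5 threshold are invented for illustration, since real assistants use on-device keyword-spotting models whose internals are not public.

```python
# Minimal sketch of counting accidental wake-word activations.
def count_false_activations(scores, threshold=0.5):
    """scores: per-utterance wake-word confidence during normal conversation."""
    return sum(1 for s in scores if s >= threshold)

# e.g. phrases like "OK, cool" scoring close to "Okay Google"
hourly_scores = [0.12, 0.55, 0.08, 0.31, 0.62, 0.04]
print(count_false_activations(hourly_scores))  # 2 accidental recordings this hour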

Alexa

Alexa, on the other hand, has had its fair share of privacy invasions. In 2018, an Amazon customer in Germany was mistakenly sent about 1,700 audio files from someone else's Echo, providing enough information to name and locate the unfortunate user and his girlfriend. Amazon attributed this to human error without offering any further explanation. It has also been revealed that the top five smart home device companies have been using human contractors to analyze a small percentage of voice-assistant recordings. Although the recordings are anonymized, they often contain enough information to identify the user, especially when they concern medical conditions or other private conversations.

How to secure the privacy of your smart speaker

Based on some tips from Spy-Fy, here are some steps you can take to secure your Google Home device. Other devices should have a similar process.

● Check to see what the device has just recorded by visiting the Assistant Activity page.

● If you ever accidentally activate the Google Home, just say "Hey Google, that wasn't for you" and the assistant will delete what was recorded.

● You can set up automatic data deletion on your account or tell your assistant “Hey Google, delete what I said this week”.

● Turn off the device if you are ever having a private conversation with someone. This will ensure that the data is not being recorded without your permission.

Another tip is to limit what the smart speaker is connected to in case of a data breach. The best option is to keep smart devices separate from devices holding other sensitive information by using another Wi-Fi network.

References:

1. https://routenote.com/blog/smart-speakers-are-the-new-big-tech-battle-and-big-privacy-debate/

2. https://www.securityinfowatch.com/residential-technologies/smart-home/article/21213914/the-benefits-and-security-concerns-of-smart-speakers-in-2021

3. https://spy-fy.com/google-home-and-your-privacy/

4. https://moniotrlab.ccis.neu.edu/smart-speakers-study-pets20/

5. https://www.theguardian.com/technology/2019/oct/09/alexa-are-you-invading-my-privacy-the-dark-side-of-our-voice-assistants


Life360
By Anonymous | June 18, 2021

Life360 is a location-sharing app that launched in 2008 and has since accumulated 27 million users. The company claims its mission is to "help families better coordinate and stay protected" by allowing open communication and complete transparency, enabling frictionless coordination when managing hectic schedules. However, the actual effect this app (and other tracking apps) has on families, especially on parent-child relationships, goes far beyond coordination.

Life360 UI

Safety
One of the main selling points of the app is its safety features. Life360 operates on a freemium basis: it is free to download and use, and users can pay extra to gain access to additional features. Non-paying users have access to live location updates, driving speed, cell-phone battery levels, and saved places. Paying users additionally get roadside assistance, driving alerts, crash detection, automatic SOS, and access to 30-day travel histories. These features appeal especially to parents who want to make sure their kids are staying safe and out of trouble. However, as one can imagine, they can also end up being overbearing. A parent may call out their child for speeding based on the maximum speed recorded during a drive, when the child was only speeding for a few seconds to overtake a slower car. Although the app provides features meant to increase safety, their excessiveness may actually result in a false sense of security as children try to find ways around being surveilled. Kids may choose to just leave their phones behind when going out and end up in an emergency without a way to contact their parents. Parents end up outsourcing the responsibility of keeping their children safe without actually investing the time and energy to create a healthy dialogue. Alternatively, there have also been cases where kids secretly download the app onto their parents' phones to be notified when their parents are coming home.

Life360 Payment Plans

Invasion of Privacy
Children do need regular parental supervision, but there is a fine line between parental supervision and parental surveillance. Adults today are more active in managing their kids’ lives than ever before and despite the strong deference to parental freedoms and parental rights by our legal system, using Life360 to monitor kids this way may well be an invasion of privacy. In a regular setting, individuals are able to make choices about whether or not they want to use the app or turn on their location. However, in the parent-child context, children are often dependent on their parents and must do as asked. Realistically, as long as kids are living at home, there isn’t real privacy. Even when they’re college students, as long as they’re financially dependent on their parents, they don’t have the full freedom to choose.

Impact on Parent-Child relationships
Installing an app like Life360 on a child's phone may impact their trust or ability to practice independence. The convenience and peace of mind parents gain from being able to just check the app whenever they want comes at the cost of communication with their child and the focus needed to build a real relationship. Children no longer get to experience the same freedoms their own parents had of just being "back by dark" and are instead pushed to follow schedules that always keep their parents in the loop. This kind of surveillance adds unnecessary stress: even if they aren't doing anything harmful, kids feel pressured to notify their parents about anything unexpected that comes up, like stopping for ice cream or dropping things off at a friend's house. The app's presence leads kids to feel like they're constantly being watched, even if their parents aren't always monitoring. Even from the parents' perspective, there are some things they would rather not know. For example, if the app reports that their child is speeding, it becomes difficult to ignore that piece of information. The use of tracking apps may also signal a lack of faith in children, which can be disheartening and discouraging, and it can make children less likely to confide in their parents when problems arise outside the app's scope of detection.

Is There a Solution?
Life360 is a prime example of how well-intended tools can be misused or have unintended consequences. The availability of such a product has the power to shape parent behavior: parents who may not have previously thought such a product was necessary now feel they should use it simply because it is an option. They are likely to jump in with the idea that having safety measures is always better, without fully understanding the possible repercussions of using the app. Additionally, the presence of so many features pressures parents to utilize and check all of them. A "crash detection" feature can cause parents to stress out and become more anxious than normal. The app can change people's behavior in ways that likely were never intended, adding stress to both parents' and children's lives. It can work well for adults who can make their own decisions about whether or when to use the app; they can ensure safety when walking home at night and easily share their location if lost or stranded. But when it comes to the parent-child relationship, its dynamics make the use and consequences of the app complicated. This raises the question of what kind of responsibility the creators of these apps have. Or does it fall entirely to the user to make sure the app is used responsibly?

https://www.wired.com/story/life360-location-tracking-families/

https://www.life360.com/privacy_policy/

https://www.theintell.com/news/20190809/tracking-apps-spark-debate-over-protection-and-privacy/1

Why Tracking Apps Are More Harmful Than Helpful (OPINION)

https://www.forbes.com/sites/thomasbrewster/2020/02/12/life360-comes-at-you-fast–cops-use-family-surveillance-app-to-trace-arson-suspect/?sh=5518dbd5380a


A New Generation of Phone Hackers: The Police
By Anonymous | June 18, 2021

Hackers. I challenge you to create an image of the “prototypical hacker.” What comes to mind? Is it a recluse with a noticeably worn hoodie, sitting alone in the dark hovering over a desktop?

That description may have been quite popular at one time, but constant changes in technology come with an evolution in who constitutes a "hacker." One group in particular is becoming increasingly associated with this title and emerging into the spotlight: law enforcement.

A cartoon of a police officer chasing the image of someone created off of popular iPhone apps. Boris Séméniako

A report by Upturn found that more than 2,000 agencies across all 50 U.S. states have purchased tools to get into locked, encrypted phones and extract their data. Upturn's researchers suggest U.S. authorities have searched more than 100,000 phones over the past five years. The Department of Justice argues that encrypted digital communications hinder investigations and that, for protections to exist, there must be a "back door" for law enforcement. Google and Apple have not complied with these requests, but agencies have found the tools needed to hack into suspects' phones. The use of these tools is justified by the need to investigate serious offenses such as homicide, child exploitation, and sexual violence.

In July 2020, police in Europe made hundreds of arrests as a result of hacking into an encrypted app called EncroChat. EncroChat is a phone network that provides specially altered phones (no camera, microphone, or GPS) with the ability to immediately erase compromising messages. Authorities in Europe hacked into these devices to detect criminal activity. The New York Times reports that police in the Netherlands were able to seize 22,000 pounds of cocaine, 154 pounds of heroin, and 3,300 pounds of crystal methamphetamine as a result of the intercepted messages and conversations.

However, these tools are also being used for offenses that have little to no relationship to a mobile device. Many logged offenses in the United States are not digital in nature, such as public intoxication, marijuana possession, and graffiti. It is difficult to understand why hacking into a mobile device, an extremely invasive investigative technique, would be necessary for these types of alleged offenses. The findings from the Upturn report suggest many police departments can tap into highly personal and private data with little oversight or transparency. Only half of 110 surveyed large U.S. law enforcement agencies have policies on handling data extracted from smartphones, and merely 9 of those policies contain substantive restrictions.

A worker checking the innards of an iPhone at an electronics repair store in New York City last month. Eduardo Munoz/Reuters

An important question remains: what happens to the extracted data after its use in a forensic report? Few policies clearly define limits on how long extracted data may be retained, and the lack of clarity and regulation surrounding this "digital evidence" limits the protection of most Americans. The situation is even murkier when data is extracted from the cloud: since law enforcement has access to tools for siphoning and collecting data from cloud-based accounts, there is a continuous stream of data they are able to view. Some suggest this continuous flow of data should be treated as a wiretap and require a wiretap order. However, Upturn's researchers have not been able to find a local agency policy that provides guidance or control over data extracted from the cloud.

Undoubtedly, the ability to hack into phones has given police the leads necessary to make many arrests. However, the lack of regulation and general oversight of these practices arguably endangers the privacy and safety of American citizens. Public institutions have often been thought to lag behind in adopting the latest technologies, and some argue that if criminals are utilizing digital tools to commit offenses, then law enforcement should be one step ahead with these technologies. This begs the question: is it fair or just for law enforcement to have the ability to hack into citizens' phones?

References:
Benner, K., Markoff, J., & Perlroth, N. (2016, March). Apple's New Challenge: Learning How the U.S. Cracked Its iPhone. The New York Times. https://www.nytimes.com/2016/03/30/technology/apples-new-challenge-learning-how-the-us-cracked-its-iphone.html

Koepke, L., Weil, E., Urmila, J., Dada, T., & Yu, H. (2020, October). Mass Extraction: The Widespread Power of U.S. Law Enforcement to Search Mobile Phones. Upturn. https://www.upturn.org/reports/2020/mass-extraction/

Nicas, J. (2020, October). The Police Can Probably Break Into Your Phone. The New York Times.

Nossiter, A. (2020, July). When Police Are Hackers: Hundreds Charged as Encrypted Network Is Broken. The New York Times.


Tinder Announces Potential Background Check Feature: What Could Possibly Go Wrong?
By Anonymous | June 18, 2021

In March 2021, Tinder, along with other Match Group entities, publicly announced a decision to allow its users to run immediate background checks on potential dating partners. The dating service plans to partner with Garbo, a non-profit, female-founded organization that specializes in background checks using just a person's name and phone number and aims to prevent gender-based violence in the midst of an unfair justice system. According to its website, Garbo's mission is to provide "more accountability, transparency, and access to information to proactively prevent crimes. We provide access to the records and reports that matter so individuals can make more informed decisions about their safety." The platform provides these records at a substantially lower cost than those offered by for-profit corporations.

Though well-intentioned in ensuring the safety and well-being of users, this partnership raises questions about user protection measures and the implications of digital punishment. For one, putting public records at a user's disposal might cause concern, especially for those who have inaccurate records attached to their name; according to a Slate article on the nature of online criminal records, getting such records removed is a taxing process, and that virtual footprint can tarnish an individual's name for life. Public record data is generally error prone, and there would need to be accountability and transparency around how often the data is updated and how representative it is of the general population, since the data collection could disproportionately affect marginalized communities. Additionally, offenders could use aliases or misspellings to bypass the consequences of being identified in a background check.

Garbo's technology is still relatively new, and little information is available about how it collects criminal cases or maintains quality control when accessing a criminal database. Could a full name and phone number alone support an effective identity-matching process? What precautions are taken to ensure accurate information? How high would error rates be? If there are false positives and misidentifications, will Garbo hold itself accountable? Moreover, the non-profit's current privacy policy does not explicitly disclose what data will be provided to a user who requests access to criminal records.
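
To see why name-and-phone matching invites misidentification, consider a toy matcher: with only two noisy fields, common names and near-misspellings collide easily. The matching rule, weights, and records below are purely hypothetical; nothing is publicly known about Garbo's actual algorithm.

```python
# Hypothetical illustration of why name + phone matching is error-prone.
# Garbo's real matching logic is not public; this is not it.
import difflib

def match_score(query_name, query_phone, record):
    name_sim = difflib.SequenceMatcher(
        None, query_name.lower(), record["name"].lower()
    ).ratio()
    phone_match = 1.0 if query_phone == record["phone"] else 0.0
    return 0.7 * name_sim + 0.3 * phone_match  # assumed weights

records = [
    {"name": "Jon Smith",  "phone": "555-0100", "offense": "assault"},
    {"name": "John Smyth", "phone": "555-0199", "offense": None},
]

# A date named "John Smith" with no record at all still scores similarly
# against both entries, illustrating the false-positive risk.
for r in records:
    print(r["name"], round(match_score("John Smith", "555-0123", r), 2))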

The types of crimes that would carry the most weight in the dating sphere have yet to be confirmed by Garbo's management. So far, Garbo has decided not to include drug possession, primarily to raise awareness of the racial inequities surrounding such charges.

In addition, despite claims that Garbo provides checks at low cost, the collaboration with Tinder, a for-profit entity, would likely mean higher costs, so that promise of accessibility may be a distant one. This initiative is a step in the right direction, but until the public gets more information on how Garbo will maintain accountability and be transparent about the origins of its data, we can only hope for a safe, regulated, and fair experience for users on these apps.


Technology’s Hidden Biases
By Shalini Kunapuli | June 18, 2021

As the daughter of science fiction film enthusiasts, I grew up watching many futuristic movies, including 2001: A Space Odyssey, Back to the Future, Her, Ex Machina, Minority Report, and more recently Black Mirror. It was always fascinating to imagine what technology could look like in the future; it seemed like magic that would work at my every command. I wondered when society would reach that point, where technology would aid us and exist alongside us. Over the past few years, however, I've realized that we are already living in that futuristic world. While we may not exactly be commuting to work in flying cars just yet, technology is deeply embedded in every aspect of our lives, and there are subtle evils in the fantasies of our technological world.

Coded Bias exists
We are constantly surrounded by technology, ranging from home assistants to surveillance technologies to health trackers. What a lot of people do not realize, however, is that many of these systems that are meant to serve us all are actually quite biased. Take the story of Joy Buolamwini from the documentary Coded Bias. Buolamwini, a researcher at the MIT Media Lab, noticed a major flaw in facial recognition software: as a Black woman, she was not recognized by the software, yet once she put on a white mask, it detected her. In general, "highly melanated" women have the lowest accuracy rates in these systems. As part of her research, Buolamwini discovered that the data sets the facial recognition software was trained on consisted mostly of white males. The people building the models are a certain demographic, and as a result they compile datasets that look primarily like them. In ways like this, bias gets coded into the algorithms and models we use.
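
At its core, this kind of audit is a disaggregated accuracy check: compute error rates separately for each demographic group instead of reporting one overall number. Here is a simplified sketch of that idea; the example records are invented, not the benchmark data used in the actual research.

```python
# Simplified disaggregated-accuracy audit.
# A real audit uses a benchmark dataset labeled by gender and skin type.
from collections import defaultdict

def accuracy_by_group(records):
    """records: dicts with 'group', 'label', and 'prediction' keys."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["label"] == r["prediction"])
    return {g: correct[g] / total[g] for g in total}

results = [
    {"group": "lighter-skinned male",  "label": "male",   "prediction": "male"},
    {"group": "darker-skinned female", "label": "female", "prediction": "male"},
    {"group": "darker-skinned female", "label": "female", "prediction": "female"},
]
print(accuracy_by_group(results))
# An overall accuracy of 2/3 hides that one group is misclassified half the time.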

Ex Machi-NO, there are more examples
The implications go far beyond the research walls at MIT. In London, police intimidated and searched a 14-year-old Black boy after using surveillance technology, only to realize later that the software had misidentified him. In the US, an algorithm meant to guide decision making in the health sector was created to predict which patients would need additional medical help in order to provide more tailored care. Even though the algorithm excluded race as a factor, it still ended up prioritizing assistance to White patients over Black patients who were actually in greater need.

Other minority groups are also negatively impacted by different technologies. Most notably, women tend to get the short end of the stick and have their accomplishments and experiences continually erased due to human history and gender politics. A book I read recently called “Invisible Women” by Caroline Criado Perez details several examples of gender data bias, some of which are so integrated into our normal lives that we do not usually think about it.

For example, women are 47% more likely to be seriously injured in a car crash. Why? Women are on average shorter than men and thus tend to pull their seats further forward to reach the pedals, but this is not the "standard" seating position. Even airbag locations are designed around the size of an average male body. Crash tests usually use male-sized dummies as well, leading to higher risk for women in a crash because vehicles are rarely tested against female-sized bodies.

Additionally, women are more likely to be misdiagnosed when having a heart attack because they don't present the "typical" symptoms. Female pianists are more likely to suffer hand injuries because piano key sizes are based on the male handspan, and the female handspan is smaller on average. The average smartphone is 5.5 inches long and is uncomfortable for many women because of their smaller hands. Google Home is 70% more likely to recognize male speech because it is trained on a male-dominated corpus of voice recordings.

The list goes on and on. All of these examples are centered around the “standard” being males. The “true norm” or the “baseline” condition is centered around male experiences. Furthermore, there is a lack of gender diversity within the tech field so the teams developing a majority of these technologies and algorithms are primarily male as well. This itself leads to gender data bias in different systems, because the teams building technologies implicitly focus on their own needs first without considering the needs of groups they may not have knowledge about.

Wherever there is data, there is bound to be an element of bias in it, because data is based on society and society in and of itself is inherently biased. The consequences of using biased data can compound upon existing issues. This unconscious bias in algorithms and models further widens the gap between different groups, causing more inequalities.

Back to the Past: Rethinking Models
Data doesn't just describe the world nowadays; it is often used to shape it. More power and reliance are being placed on data and technology, and the first step is recognizing that these systems are flawed. It may be easy to rely on a model, especially when everything is a black box, because you can get a quick and supposedly straightforward result out of it. We need to take a step back, however, and take the time to ask ourselves whether we trust the result completely. Ultimately, the models are not making decisions that are ethical; they are only making decisions that are mathematical. This means it is up to us as data scientists, as engineers, as humans, to realize that these models are biased because of the data we provide to them. As a result of her research, Buolamwini started the Algorithmic Justice League to advocate for laws that protect people's rights. We can take a page out of her book and start actively thinking about how the models we build and the code we write affect society. We need to advocate for more inclusive practices across the board, whether in schools, workplaces, hiring practices, or government. It is up to us to come up with solutions so we can protect the rights of groups that may not be able to protect themselves, and to be voices of reason in a world where people rely more and more on technology every day. Instead of dreaming about futuristic technology through movies, let us work together and build systems now for a more inclusive future; after all, we are living in our own version of a science fiction movie.

References
* https://www.wbur.org/artery/2020/11/18/documentary-coded-bias-review
* https://www.nytimes.com/2020/11/11/movies/coded-bias-review.html
* https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/
* https://www.mirror.co.uk/news/uk-news/everyday-gender-bias-makes-women-14195924
* https://www.netflix.com/title/81328723
* https://www.amazon.com/Invisible-Women-Data-World-Designed/dp/1419729071


The Ethicality of Forensic Databases
By Anonymous | June 18, 2021

Forensic data has been used in monumental cases to identify the killers and criminals behind many unsolved crimes. One such case was the murder of a 16-year-old girl in Kollum, a small village in the Netherlands, back in 1999. Iraqi and Afghan residents who were asylum seekers were predominantly blamed for the murder, which increased racial and ethnic tensions in the area. Years later, forensic evidence from a semen sample determined that the murderer was of European descent. This was a landmark for the case and diminished the racial tensions at the time. Cases like these, from wrongfully accused individuals being cleared to serial killers being identified years later, have cast a positive light on forensic databases. The particular database that aided in the Kollum case was the Y-chromosome Haplotype Reference Database (YHRD). Today, the YHRD is predominantly used to solve sex crimes and paternity cases.

The YHRD is a research database used in both academic and criminal laboratories. However, the ethicality of the data present in the YHRD is a concern. More than a thousand of the male profiles in the YHRD appear to have been added without any consent from the individuals whose DNA they contain. Some of these profiles include members of the Uyghur population, a predominantly Muslim group in China. This is especially concerning as increasing reports indicate that the Uyghur population is being persecuted, with the Chinese government accused of potential ethnic cleansing. The DNA may have been forcefully collected by the Chinese government, and it could be used in a detrimental manner against the Uyghurs.

The ethicality of forensic databases is not well regulated and needs to be brought into the limelight. Forensic data, such as DNA and fingerprints, is sensitive data that can implicate not only an individual but their family and descendants as well. The YHRD opens up a discussion about other databases that do not properly manage consent and the dissemination of data. DNA data collected by police forces is usually highly secured and only used during a preliminary investigation; once a particular case is finished, the data is typically erased. With large databases like the YHRD growing in prominence, new rules and regulations must be put in place to ensure both privacy and ethicality. Forensic data can be very beneficial in solving crimes, providing evidence, and even connecting loved ones. However, it can also be misused by governments and can implicate people and their relatives. For these reasons, we should take a deeper look into large-scale forensic databases and their ethicality.

References
https://www.nature.com/articles/d41586-021-01584-w
https://yhrd.org/


Digital Contact-Tracing: Privacy vs. Security
By Anonymous | May 28, 2021

Since the outbreak of COVID-19 in early 2020, dozens of countries around the world have employed contact-tracing apps in an attempt to identify people exposed to COVID-19 and stop onward transmission. In the United States, Google and Apple forged an unlikely partnership to develop an exposure notification system for both Android and iOS devices. While some countries like China have adopted a data-first approach, in which large amounts of data about citizens are collected at the cost of significant privacy intrusion, others, such as the United States, have taken a privacy-first approach, which protects citizens' data but at the cost of extremely limited access for health officials and researchers. At the same time, a lack of trust in technology companies has undermined the efficacy of digital contact-tracing efforts in the United States.

A Wide Spectrum of Digital Contact-Tracing Methods

There are various forms of digital contact tracing with different levels of privacy. For example, the Chinese government surveils its citizens' movements and locations through mandatory, color-coded QR codes based on whether they have COVID-19 symptoms, determined either through self-reporting or contact tracing; a green QR code allows free movement as long as the holder scans their smartphone app before accessing public spaces such as public transportation, retail and shopping centers, restaurants, and places of employment.

Other, less privacy-intrusive methods do not involve monitoring user location and movement. Specifically, in the United States, Apple and Google launched a Bluetooth-based tracing platform that allows users to opt in to share their data via Bluetooth Low Energy (BLE) transmissions and approved apps from health organizations. In this approach, app users' smartphones exchange and record random Bluetooth keys transmitted by beacons when the users are near one another. An infected user may voluntarily input a positive diagnosis into the app, which will then use the list of Bluetooth keys associated with the infected user to identify and notify others whose smartphones had been in close contact. Unlike GPS, BLE cannot track people's physical location or their movement. Furthermore, because the app broadcasts an anonymous key that cycles every 15 minutes, the identity of the phone's user is never explicitly revealed. Even if a person shares that they've been infected, the app will only share the keys from the specific period in which they were contagious.
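
Conceptually, the protocol boils down to a few steps: each phone broadcasts random keys that rotate on a schedule, remembers the keys it hears nearby, and later checks that local list against keys published by users who report a positive test. The sketch below compresses that logic into plain Python; the real Apple/Google system derives its rolling identifiers cryptographically, so the random tokens and class names here are simplifications, not the actual API.

```python
# Bare-bones sketch of decentralized, Bluetooth-style exposure notification.
import secrets

class Phone:
    def __init__(self):
        self.my_keys = []        # keys this phone has broadcast
        self.heard_keys = set()  # keys observed from nearby phones

    def broadcast_key(self):
        key = secrets.token_hex(16)  # stands in for a rolling proximity ID
        self.my_keys.append(key)
        return key

    def observe(self, key):
        self.heard_keys.add(key)     # stored locally, never uploaded

    def check_exposure(self, published_positive_keys):
        # Matching happens on-device against the published list of keys.
        return bool(self.heard_keys & set(published_positive_keys))

alice, bob = Phone(), Phone()
bob.observe(alice.broadcast_key())       # the two phones were nearby
positive_keys = alice.my_keys            # Alice reports a positive test
print(bob.check_exposure(positive_keys)) # True: Bob is notified of exposure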

Privacy Implications

First and foremost, a centralized data collection approach means that all Bluetooth, geolocation, and diagnosis information is compiled in a central system, usually run by public health authorities that may also share the data with third-party systems. With Google and Apple's system, there is no centrally accessible master list of phones that have matched, contagious or otherwise, since the central servers only maintain the database of shared keys rather than the interactions between those keys. Still, while Bluetooth-based apps collect only a random identifier or key from users, it may be possible for a government agency or tech company to link metadata associated with the user's Bluetooth identifier, such as the smartphone's IP address, to the user's identity and location.

Zero-Sum Game?

While digital contact-tracing apps have had mixed success worldwide, low participation rates and privacy concerns have plagued such endeavors in the United States. A central question surrounding this topic is whether people should sacrifice their privacy in exchange for security during crises such as the COVID-19 pandemic. In the United States, the response was an overwhelming no. A key reason is that people don't trust tech companies or the government to collect, use, and store their personal data, especially their health and location information. Although user privacy and security were central to Apple and Google's design, Americans were not convinced. For example, a survey conducted by The Washington Post and the University of Maryland in April 2020 found that 50% of smartphone users wouldn't use a contact-tracing app even if it promised to rely on anonymous tracking and reporting. Specifically, 56% said they did not trust the big tech companies to keep the data anonymous, while 43% wouldn't trust public health agencies and universities. By June 2020, the level of mistrust had increased: a new survey showed that 71% of respondents wouldn't use contact-tracing apps, with privacy cited as the leading reason. Contrary to the privacy paradox argument, Americans refused to use these apps in large part because of privacy concerns.

So what takeaways can we learn to prepare for the next crisis or emergency? First and foremost, robust data protections are needed to maintain consumer trust and confidence in the marketplace. This means that clear standards and laws should enable the responsible use of data rather than act as handcuffs on big tech and government agencies. Additionally, state entities, lawmakers, and Americans routinely face confusion in navigating the complex and sometimes inconsistent privacy landscape. In aggregate, my conclusion is that the United States needs a set of baseline federal privacy laws that are enforceable and protect our personal information in good times and in times of crisis.
