Are we truly anonymous in public health databases?
By Anonymous | June 18, 2021

Privacy in healthcare data seems to be the baseline when people talk about personal data and privacy, even for those who hold a more relaxed attitude toward privacy issues. We may suspect that social media, tech, and financial companies sell user data for profit, but people tend to trust hospitals, healthcare institutions, and pharmaceutical companies to at least try to keep their users' data safe. Is that really true? How safe is our healthcare data? Can we really be anonymous in public databases?

As users and patients, we have to share a great deal of personal and sensitive information when we see a doctor or healthcare practitioner so that they can provide precise and useful care. Doctors might know our blood type, potential genetic risks or diseases in our family, pregnancy history, and more. Beyond that, the health institutions behind doctors also keep records of our insurance information, home address, zip code, and payment information. Healthcare institutions might hold more comprehensive sensitive and private information about you than any of the other organizations that try to retain as much information about you as possible.

What kind of information do healthcare providers collect and share with third parties? In fact, most healthcare providers must follow HIPAA's privacy guidance. For example, I noticed that Sutter Health states in its user agreement that it follows or refers to the HIPAA privacy rules.

Sutter's privacy policy describes how it uses your healthcare data. It states: "We can use your health information and share it with other professionals who are treating you. We may use your health information to provide you with medical care in our facilities or in your home. We may also share your health information with others who provide care to you such as hospitals, nursing homes, doctors, nurses or others involved in your care." Those uses seem reasonable to me.

In addition to the expected uses above, Sutter also mentions that it is allowed to share your information in "ways to contribute to public good, such as public health and research". Those ways include "preventing disease, helping with product recalls, reporting adverse reactions on medications, reporting suspected abuse, …". One of those uses, public health and research, raises a concern: can we really be anonymous in a public database?

In fact, the answer is no. Most healthcare records can be de-anonymized through information matching. "Only a very small amount of data is needed to uniquely identify an individual. Sixty-three percent of the population can be uniquely identified by the combination of their gender, date of birth, and ZIP code alone," according to a post in the Georgetown Law Technology Review published in April 2017. Thus, it is entirely possible for people with good intentions, such as research teams and data scientists working for the public good, or people with bad intentions, such as hackers, to legally or illegally obtain healthcare information from multiple sources and aggregate it together. In fact, they can de-anonymize the data, especially with the help of modern computing resources, algorithms, and machine learning.
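
To make the matching concrete, here is a minimal sketch of such a linkage attack in Python. The file names and columns are hypothetical; the point is only that a plain join on gender, date of birth, and ZIP code is all it takes.

    import pandas as pd

    # "De-identified" hospital data: no names, but quasi-identifiers remain.
    health = pd.read_csv("hospital_records.csv")  # gender, dob, zip, diagnosis
    # A public dataset that has names, e.g. a purchased voter roll.
    voters = pd.read_csv("voter_roll.csv")        # name, gender, dob, zip

    # Join the two datasets on the quasi-identifiers.
    linked = health.merge(voters, on=["gender", "dob", "zip"])

    # Any (gender, dob, zip) combination that matches exactly one pair
    # re-identifies a patient and reveals their diagnosis.
    sizes = linked.groupby(["gender", "dob", "zip"]).size()
    unique_keys = sizes[sizes == 1].index
    reidentified = linked.set_index(["gender", "dob", "zip"]).loc[unique_keys]
    print(reidentified[["name", "diagnosis"]])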

So do companies that hold your healthcare information have to follow some kind of privacy framework? Are there laws that regulate companies holding your sensitive healthcare information and protect vulnerable members of the public like you and me? One law that most healthcare providers must follow is the Health Insurance Portability and Accountability Act (HIPAA), enacted in 1996. The act states who has the right to access what kinds of health information, what information is protected, and how that information should be protected. It also specifies who must follow these laws: health plans, most healthcare providers, healthcare clearinghouses, and health insurance companies. Organizations that could have your health information but do not have to follow these laws include life insurers, most schools and school districts, state agencies like child protective services, law enforcement agencies, and municipal offices.

Ordinary people like you and me are vulnerable. We don't have the time, patience, or expertise to understand every term in long, jargon-filled user agreements and privacy policies. But what we can and should do is advocate for strong protection of our personal information, especially sensitive healthcare data. Governments and policymakers should also establish and enforce more comprehensive privacy policies that protect everyone and limit the scope of healthcare data sharing, thereby preventing de-anonymization from happening.

________________
Reference:

1. Stanford Medicine. Terms and Conditions of Use. Stanford Medicine. https://med.stanford.edu/covid19/covid-counter/terms-of-use.html.
2. Stanford Medicine. Datasets. Stanford Medicine. https://med.stanford.edu/sdsr/research.html.
3. Stanford Medicine. Medical Record. Stanford Medicine. https://stanfordhealthcare.org/for-patients-visitors/medical-records.html.
4. Sutter Health. Terms and Conditions. Sutter Health. https://mho.sutterhealth.org/myhealthonline/terms-and-conditions.html.
5. Sutter Health. HIPAA and Privacy Practices. Sutter Health. https://www.sutterhealth.org/privacy/hipaa-privacy.
6. Wikipedia (14 May 2021). Health Insurance Portability and Accountability Act. Wikipedia. https://en.wikipedia.org/wiki/Health_Insurance_Portability_and_Accountability_Act.
7. Your Rights Under HIPAA. HHS. https://www.hhs.gov/hipaa/for-individuals/guidance-materials-for-consumers/index.html.
8. Adam Tanner (1 February 2016). How Data Brokers Make Money Off Your Medical Records. Scientific American. https://www.scientificamerican.com/article/how-data-brokers-make-money-off-your-medical-records/.

What You Should Know Before Joining an Employee Wellness Program
By Ashley Moss | June 18, 2021

Weigh-in time at NBC’s The Office

In 2019, more than 80% of large employers offered a workplace wellness program, as healthcare trends in America turn toward disease prevention. Some wellness programs focus on behavioral changes like smoking cessation, weight loss, or stress management. Participants might complete a health survey or undergo biometric tests in a lab. Many employers offer big financial incentives for participating and/or reaching target biometric values. While on the surface this may seem like a win-win for employers and employees, this article takes a closer look at the potential downsides: privacy compromise, unfairness, and questionable effectiveness.

Laws and regulations that normally protect your health data may not apply to your workplace wellness program. The federal government's HIPAA rules cover doctors' offices and insurance companies that use healthcare data. These rules limit information sharing to protect your privacy and require security measures to keep electronic health information safe.

If a workplace wellness program is offered through an insurance plan, HIPAA applies. However, if it is offered directly through the employer, HIPAA does not apply. This means the program is not legally required to follow HIPAA’s anti-hacking standards. It also means employee health data can be sold or shared without legal repercussions. Experts warn that an employer with direct access to health data could use it to discriminate against, or even lay off, those with a high cost of care.

Although these programs claim to be voluntary, employers provide a financial incentive averaging hundreds of dollars. It’s unclear how much pressure this places on each employee, especially because the dollar penalty a person can afford really depends on their financial situation. There is some concern that employers have a hidden agenda: Wellness programs shift the burden of medical costs away from the employer and toward unhealthy or non-participating employees.

Wellness programs may be unfair to certain groups of people. Research shows that programs penalize lower wage workers more often, contributing to a “poor get poorer” cycle of poverty. Wellness programs may also overlook entire categories of people who have a good reason not to join, such as people with disabilities. In one case, a woman with a double mastectomy faced possible fines for missing mammograms until she provided multiple explanations to her employer’s wellness program.

Experts question the effectiveness of workplace wellness programs, since evidence shows little impact on key outcomes. Two randomized studies followed participants for 12+ months and found no improvement in medical spending or health outcomes. Wellness programs do not require physician oversight, so the interventions may not be supported by scientific evidence. For example, Body Mass Index (BMI) has fallen out of favor in the medical community but persists in wellness programs.

Wellness programs may focus on reducing sedentary time or tracking steps.

Before joining an employee wellness program, do your homework to understand more about the potential downsides. Remember that certain activities are safer than others: are you disclosing lab results or simply attending a lecture on healthy eating? If you are asked to share information, get answers first: What data will be collected? Which companies can see it? Can it be used to make employment decisions? Lastly, understand that these programs may not be effective and cannot replace the advice of a trained physician.

References

https://www.consumerreports.org/health-privacy/are-workplace-wellness-programs-a-privacy-problem/

https://www.researchgate.net/publication/323538785_Health_and_Big_Data_An_Ethical_Framework_for_Health_Information_Collection_by_Corporate_Wellness_Programs

https://www.kff.org/private-insurance/issue-brief/trends-in-workplace-wellness-programs-and-evolving-federal-standards/

https://hbr.org/2017/01/workplace-wellness-programs-could-be-putting-your-health-data-at-risk

https://www.hhs.gov/hipaa/for-professionals/privacy/workplace-wellness/index.html

https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html

https://www.hhs.gov/hipaa/for-professionals/security/index.html

https://www.hhs.gov/hipaa/for-professionals/breach-notification/breach-reporting/index.html

https://i.pinimg.com/originals/24/43/ba/2443ba5aaac0fface135795ca08d3c76.jpg

The Price of a Free Salad
By Anonymous | June 18, 2021

How do you celebrate your birthday? If you’re anything like me, you prioritize friends, family, and cake, but you let some of your favorite corporations in on the mix, too. Each year, I check my email on my birthday to find myself fêted by brands like American Eagle, Nintendo, and even KLM. Their emails come in like clockwork, bearing coupon codes and product samples for birthday gifts. 

Birthday email promotions

Most of these emails linger unopened in my promotions tab, but my best friend makes an annual odyssey of redeeming her offers. One year I spent the day with her as we did a birthday coupon crawl, stopping by Target for a shopping spree and a free Starbucks coffee, Sephora for some fun makeup samples, and ending with complimentary nachos at a trendy Mexican restaurant downtown.

I used to want to be savvy like her and make the most of these deals, but lately, I've been feeling better about missing out. The work of Dr. Latanya Sweeney, a pioneering researcher in the field of data privacy, has taught me what powerful information my birthday is.

In her paper “Simple Demographics Often Identify People Uniquely”, Sweeney summarizes experiments that showed that combinations of seemingly benign personal details, such as birthday, gender, and zip code, often provide enough information to identify a single person. She and her team even developed this tool to demonstrate this fact. 

Sample results from the uniqueness tool

I keep trying this website out with friends and family, and I have yet to find anyone who isn’t singled out by the combination of their birthday, gender, and zip code. 
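
If you have a dataset of people handy, a few lines of Python reproduce the spirit of her experiment. This is only a sketch, with hypothetical file and column names: count how many people share each (birthday, gender, zip) combination, and see what fraction stand alone.

    import pandas as pd

    # Hypothetical dataset with one row per person.
    people = pd.read_csv("people.csv")  # columns: birthdate, gender, zip

    # Size of each (birthdate, gender, zip) group; a group of size 1 means
    # that combination singles out exactly one person.
    sizes = people.groupby(["birthdate", "gender", "zip"]).size()
    share_unique = (sizes == 1).sum() / len(people)
    print(f"{share_unique:.0%} of people are unique on (birthdate, gender, zip)")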

But what does this mean for our beloved birthday deals? Let’s think a bit about how this would work if you were shopping at, say, Target. You don’t have to shop at Target very often to see promotions of its 5% birthday shopping deal. 

Target Circle birthday discount

Attentive readers may be wondering: “If Target knows who I am already, why do I care if they can identify me with my birthday?” This is a fair question. The truth is that, in our online, tech-enabled world, even brick and mortar shopping is no longer a simple cash-for-products exchange.

Target and other retailers are hungry to know more about you so that they can sell you more products. There are a lot of different ways that companies get information about you, but most fall into three main categories. 

1. They ask you questions that you choose to answer. 

2. They track your behavior online via your IP address, cookies, and other methods. 

3. They get data about you from other sources and match it up with the records that they have been keeping on you.

This last method makes your birthday tantalizing to retailers. Target already has access to data about you that is incomplete or anonymized, whether that’s from your online shopping habits or by tracking your credit card purchases in stores. Your birthday may just be the missing link that helps it get even closer to a full picture of who you are, which it will use to motivate you to spend more money.

Data exchange works both ways, so Target may decide to monetize that detailed profile that it has of you and your behavior with other companies. In fact, Target’s privacy policy says that it might do just that: “We may share non-identifiable or aggregate information with third parties for lawful purposes.” 

The more Target knows about you, the easier it would be for another company to identify you within a "non-identifiable" dataset. Even if companies like Target are diligent about removing birthdates from databases before sharing them, they are vulnerable to security breaches. Birthdays are often used by websites to verify identities. If your birthday is part of the information included in a security breach, that increases the odds that you will be targeted for identity theft and fraud.

After I started writing this post, I found an email from Sweetgreen that reminded me that somehow, despite years of using their app regularly, I still haven’t given up my birthday.

A Sweetgreen marketing email promising a discounted salad in exchange for my birthday

I’ve always loved a good deal, and I have a soft spot for $16 salads. I wonder, if I’m already being tracked, if my activity is being monitored, if my history is fragmented into databases and data leaks, why not get some joy out of it? Why not share free nachos with friends and pile a Sweetgreen salad with subsidized avocado and goat cheese? 

Ultimately, I still can't justify it. My privacy and security are not for sale. At the very least, they're worth much more than $10.

Ring: Security vs. Privacy
By Anonymous | June 18, 2021

A little over 50 years ago, Marie Van Brittan Brown invented the first home security system (timeline): a closed-circuit set of cameras and televisions with a panic button to contact the police. In the years since, as with most household technology, advances have culminated in sleek, "cost effective" modern smart cameras such as the Amazon-owned Ring products. Ring's mission, displayed on their website, is to "make neighborhoods safer", proposing that connected communities lead to safer neighborhoods. Ring has also come under some scrutiny for partnering with police forces (WaPo) and, like most 'smart' devices and mobile apps, collects quite a bit of information from its users. While the first modern security systems of the 1960s also relied on collaboration with the police, each individual household had its own closed-circuit system, with explicit choice over when, and how much, to share with law enforcement. When Ring sets out to make neighborhoods safer, for whom is it making them safe? What is Ring's idea of a safe neighborhood?

Purple Camera

Ring cameras have surged in popularity over the past few years, likely in some part due to the increase in home package deliveries bringing about an increase in package theft. With convenient alerts and audio/video accessible from a mobile device, the benefits of an affordable, accessible, and stylish security system loom large. 

Ring, as a company, collects each user's name, address, the geolocation of each device, and any audio/video content. Closed-circuit predecessors produced a data environment where each household had exclusive access to its own security footage. Thus, the sum of all surveillance was scattered among many users and the separate local law enforcement groups with whom users shared footage. Under Nissenbaum's Contextual Integrity framework, trespassers on private property expect to be surveilled, and the owners of the security systems have full agency over the transmission principles, or constraints on the flow of information. Owners can choose, at any time, to share any portion of their security data with the police.

Ring, by contrast, holds all of the audio and video content of its millions of users, making all of the data centralized and accessible. About 10% of police departments in the US have been granted access, by request, to users' security footage. Users often purchase Ring products expecting the service to check on packages and see who is at the door, as advertised. Along with this comes an agreement under which users no longer have the authority or autonomy to prevent their data from being shared with law enforcement.

Police City Eyeball

Under Ring's privacy policy, the company can also keep any deleted user data for any amount of time. As is the case with many data-focused companies, Ring also reserves the right to change its terms and conditions at any time without notice. One tenet of responsible privacy policy is to limit the secondary use of any data without express informed consent. Given that Ring has been aggressively partnering with police and providing LAPD officers with free equipment to market its products, it is not unreasonable to expect that all current and former audio/video footage and other data will be accessible to law enforcement without the need for a warrant.

References

https://www.theguardian.com/commentisfree/2021/may/18/amazon-ring-largest-civilian-surveillance-network-us

https://www.washingtonpost.com/technology/2019/08/28/doorbell-camera-firm-ring-has-partnered-with-police-forces-extending-surveillance-reach/

https://timeline.com/marie-van-brittan-brown-b63b72c415f0

https://ring.com/

Drones, Deliveries, and Data
—Estimated Arrival – Now. Are We Ready?—

By Anonymous | June 18, 2021

It's no secret that automated robotics has the ability to propel our nation into a future of prosperity. Agriculture, infrastructure, defense, medicine, transportation: the list goes on. The opportunities for Unmanned Aerial Vehicles, in particular, to transform our world are plentiful, and by the day we fly closer and closer to the sun. However, the inherent privacy and ethical concerns surrounding this technology need to be addressed before these vehicles take to the skies. Before we tackle this, let's set some context.

An Unmanned Aerial Vehicle (commonly known as a UAV or Drone) can come in many shapes and sizes, all without an onboard human pilot and with a variety of configurations tailored to its specific use. You may have seen a drone like the one below (left) taking photos at your local park or have heard of its larger cousin the Predator B (right) which is a part of the United States Air Force UAV fleet.

DJI and Predator Drone

The use of drone technology in United States foreign affairs is a heavily debated topic that won’t be discussed today. Instead, let’s focus on a UAV application closer to home: Drone delivery.

There are several companies that have ascended the ranks of the autonomous drone delivery race, namely Amazon, UPS, Zipline, and the SF-based startup Xwing, which hopes to deliver not just packages but even people to their desired destinations. Aerial delivery isn't constrained by land traffic and can therefore take the shortest route between two points. If implemented correctly, the resulting increase in transportation efficiency could be revolutionary. As recently as August 2020, legislation was passed allowing for special use of UAVs beyond line-of-sight flight, the previous FAA limit. This exposes the first issue that drone delivery brings. If not controlled by a human in a line-of-sight context, the drone must use GPS, visual, thermal, and ultrasonic sensors to navigate the airspace safely. According to Amazon Prime Air, their drone contains a multitude of sensors that allow "…stereo vision in parallel with sophisticated AI algorithms trained to detect people and animals from above." Although it's an impressive technological feat, any application where people are identified using camera vision needs to be handled with the utmost care. Consider a person who turns off location services on their device as a personal privacy choice. A delivery drone has the capability to identify that person without their knowledge, and combined with onboard GPS data, that person has been localized without the use of their personal device or their consent. This possible future could be waiting for us if we do not create strong legislation with clear language regarding the collection of data beyond what is necessary for a delivery.
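
To make the localization concern tangible, here is a back-of-the-envelope sketch of how a detection could be turned into coordinates using only the drone's own GPS fix, altitude, and camera orientation. All of the numbers are made up, and the flat-earth approximation is only reasonable over short distances.

    import math

    # Hypothetical flight state, taken from the drone's own sensors.
    drone_lat, drone_lon = 37.7749, -122.4194   # onboard GPS fix
    altitude_m = 60.0                           # meters above ground
    cam_pitch_deg = 30.0                        # camera tilt from straight down
    heading_deg = 90.0                          # drone facing due east

    # Horizontal distance from the drone to the spot the camera center sees.
    ground_dist = altitude_m * math.tan(math.radians(cam_pitch_deg))

    # Convert that offset to approximate lat/lon degrees (short distances only;
    # ~111,320 meters per degree of latitude).
    dlat = (ground_dist * math.cos(math.radians(heading_deg))) / 111_320
    dlon = (ground_dist * math.sin(math.radians(heading_deg))) / (
        111_320 * math.cos(math.radians(drone_lat)))
    print(f"detected person at roughly ({drone_lat + dlat:.6f}, {drone_lon + dlon:.6f})")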

There's another shortcoming of UAV delivery that doesn't have to do with privacy: our existing infrastructure. 5G cellular networks are growing in size and robustness around the nation, which is promising for the future of autonomous delivery, as more data can be transferred to and from the UAV. However, this reveals a potential for exclusion: the lack of 5G coverage may leave areas of this nation unreachable by drone, with the UAV flying blind or running out of power. According to Amazon, the current iteration of the Prime Air drone has a 15-mile range, which raises the question, "Is this technology really helping those who need it?"

Current 5G Coverage

It's not all bad, however: drone deliveries have the potential to create real, positive change in our world, especially in light of the ongoing COVID-19 pandemic. Privacy-forward drone tech would help reduce emissions, both by using electric motors and by allowing people to order items from the comfort of their own homes in a timely manner, negating a drive to the store. It will be exciting to see what the future holds for UAV technology, and we must stay vigilant to ensure our privacy rights aren't thrown to the wind.

References
AI, 5G, MEC and more: New technology is fueling the future of drone delivery. https://www.verizon.com/about/news/new-technology-fueling-drone-delivery

Drones Of All Shapes And Sizes Will Be Common In Our Sky By 2030, Here's Why, With Ben Marcus Of AirMap. https://www.forbes.com/sites/michaelgale/2021/06/16/drones-of-all-shapes-and-sizes-will-be-common-in-our-sky-by-2030-heres-why-with-ben-marcus-of-airmap/

A drone program taking flight. https://www.aboutamazon.com/news/transportation/a-drone-program-taking-flight

Federal Aviation Administration. https://www.faa.gov/

China’s Scary But Robust Surveillance System
By Anonymous | June 18, 2021

Introducing the Problem

In 2014, the Chinese government introduced an idea that would allow it to keep track of its citizens and score their behavior. The government wanted a world where its people were constantly monitored: where people shop, how they pay their bills, even the type of content they watch. In many ways, it is what major companies in the US like Google and Facebook are doing with data collection, but on steroids, and at least in this case people are told that their every move is being watched. On top of that, you are being judged and given a score based on your interactions and lifestyle. A high "citizen score" grants people rewards such as faster internet service. A person's score may be decreased for, say, posting content on social media that contradicts the Chinese government. Private companies in China are constantly working with the government to gather data through social media and other behaviors on the internet.

A key potential issue is that the government will be technically capable of considering the behavior of a Chinese citizen's friends and family in determining his or her score. For example, it is possible that your friend's anti-government political post could lower your own score. Therefore, this type of scoring mechanism can have implications for the relationships between an individual and their friends and family. The Chinese government is taking this project seriously, and rights that one may take for granted in the US may be in jeopardy based on a person's score. One example is obtaining a visa to travel abroad, or even the right to travel by train or plane within the country. People understand the risks and dangers this poses; as one internet privacy expert says, "What China is doing here is selectively breeding its population to select against the trait of critical, independent thinking." However, because lack of trust is a serious problem in China, many Chinese actually welcome this potential system. To relate this back to the US, I wonder whether this type of system could ever exist in our country and, if so, what it would look like. Is it ethical for private companies to assist in massive surveillance and turn over their data to the government? Chinese companies are now required to assist in government spying while U.S. companies are not, but what happens when Amazon or Facebook are in the positions that Alibaba and Tencent are in now?

A key benefit for China of having so many cameras and so much surveillance set up throughout its major cities is that it helps identify criminals and keep track of crime. For example, in Chongqing, which has more surveillance cameras than any other city in the world for its population, the surveillance system scans the facial features of people on the streets from frames of video footage in real time. The scans are compared against data that already exists in a police database, such as photos of criminals. If a match passes a confidence threshold, typically 60% or higher, police officers are notified. One could argue that the massive surveillance system is beneficial for society, but if law officials are not transparent and do not enforce good practices, then there is an issue.
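
To illustrate the kind of check being described, here is a hedged sketch of the matching step: compare a face embedding extracted from street footage against stored embeddings and alert on anything at or above a 60% similarity score. The embeddings, names, and the use of cosine similarity are illustrative assumptions, not details of the actual system.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Similarity between two embedding vectors, in [-1, 1].
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Placeholder police database: name -> stored face embedding.
    database = {"suspect_001": np.random.rand(128),
                "suspect_002": np.random.rand(128)}
    street_face = np.random.rand(128)   # embedding from a video frame

    THRESHOLD = 0.60                    # the "60% or higher" cutoff
    for name, stored in database.items():
        score = cosine_similarity(street_face, stored)
        if score >= THRESHOLD:
            print(f"alert: {name} matched with similarity {score:.2f}")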

Got Venmo? Protect Your Privacy
By Anonymous | June 18, 2021

Phones With Venmo

Last month, BuzzFeed News discovered President Joe Biden's Venmo account and his public friends list. President Joe Biden's and First Lady Jill Biden's Venmo accounts were removed the day the news broke. The news prompted Venmo to implement a new feature that allows users to hide their friends lists. However, the default setting for a user's friends list is public, so others will be able to see it unless the user manually chooses to hide it. The incident with President Joe Biden's account and Venmo's new feature have re-raised concerns about Venmo's privacy. Here are answers to some commonly asked questions about Venmo and its privacy policy.

What Data Does Venmo Collect?

Currently, according to its privacy policy, Venmo collects a host of personal data, including your name, address, email, telephone number, information about the device you use to access Venmo, financial information (your bank account information), SSN (or other government-issued verification numbers), geolocation information (your location), and social media information if you decide to connect your Venmo account with social media such as Twitter, Foursquare, and Facebook.

When you register for a personal Venmo account, you must verify your phone number, your email, and your bank account.

Why Should You Care About Venmo’s Privacy?

A lot of Venmo users view Venmo as a fun social media platform where they can share their transactions with accompanying notes and descriptions. They figure they’re not doing anything wrong so why should they care if their transactions are public? They don’t have anything to hide. It is not just about hiding bad information, although this may be some users’ goal, but also protecting good information from others. What do I mean by this?

According to Venmo's privacy policy, "public information may also be seen, accessed, reshared or downloaded through Venmo's APIs or third-party services that integrate with our products," meaning that all of your public transactions and associated comments are available to the public. Even non-Venmo users can discover your data by accessing the API.

In 2018, Mozilla Fellow Hang Do Thi Duc released "Public By Default", an analysis of all 207,984,218 public Venmo transactions from 2017. Through these transactions, she was able to discover drug dealers, breakups, and the routine life of a married couple. She learned where the married couple shopped, what days they usually went to the grocery store, which gas stations they used, and what restaurants they usually ate at. She identified a drug dealer and where he lived based on his public transaction comments and the fact that his Facebook account was linked to his Venmo. Thus, Venmo transactions can act as a map of your daily activities. It can be quite easy to learn about an individual through both their transactions and their friends list.
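
To see how little effort this kind of profiling takes, here is a sketch in the spirit of "Public By Default". It assumes a hypothetical local dump venmo_2017.json of public transactions with actor, note, and created_time fields; the real analysis pulled similar records from Venmo's then-public API.

    import json
    from collections import Counter

    # Hypothetical dump: [{"actor": ..., "note": ..., "created_time": ...}, ...]
    with open("venmo_2017.json") as f:
        transactions = json.load(f)

    # Keywords that hint at routine activities in transaction notes.
    KEYWORDS = ("grocery", "gas", "rent", "pizza", "pharmacy")
    habits: dict[str, Counter] = {}
    for t in transactions:
        note = t["note"].lower()
        for kw in KEYWORDS:
            if kw in note:
                habits.setdefault(t["actor"], Counter())[kw] += 1

    # Each user's counter sketches their routine: where they shop, eat, fuel up.
    for user, counter in list(habits.items())[:5]:
        print(user, counter.most_common(3))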

Image of the Venmo API

Your personal data may become more publicly available if you connect your account to third parties such as social media platforms. According to Venmo's privacy policy, data shared with a "third-party based on an account connection will be used and disclosed in accordance with the third-party's privacy practices" and "may in turn be shared with certain other parties, including the general public, depending on the account's or platform's privacy practices." This means that if you connect your account with a third party, Venmo and the third party will exchange personally identifiable information about you. The information Venmo shares about you is then subject to the third party's privacy policy, meaning that data is no longer protected by Venmo's privacy policy. If the third party's privacy policy states that personal information can be shared publicly, private information you have shared with Venmo can then become public.

How Can You Protect Your Privacy?

You can protect your data by making both your transactions and your friends list private; both are public by default. You can also make your past transactions private and prevent Venmo from collecting some of your location data by turning off location services for Venmo on your mobile device. An article explaining how to do both is here. This should prevent anyone from publicly accessing your Venmo transactions or friends list and prevent some geolocation tracking, although Venmo may still be able to view your location.

Venmo Privacy Settings

Also be sure to read a firm’s privacy policy before you decide to connect your account with them in any way. Before connecting with any social media apps, if you haven’t already, read the social media platform’s privacy policy to see if their privacy practices match with what you would feel comfortable sharing. If you’ve already connected with social media apps, be sure to read the privacy policies of other third parties that ask to connect with your account in the future.

You should also be cautious about your Venmo profile picture. You may figure if you regret a past Venmo profile picture, you can just delete this photo and post a new one. However, this is not the case. It is still possible to recover a user’s old Venmo profile picture after they have replaced it with a new one simply by changing the photo’s URL. Try to post photos that you do not mind being public for the foreseeable future.

In summary, privacy matters especially when it concerns financial data that reveals patterns about your lifestyle. Set your transactions and friends list to private, turn off location services, be wary of connecting your account to third parties, and post profile pictures that you do not mind being public.

References:

Do Thi Duc, Hang. (2018). Public By Default [Project]. https://publicbydefault.fyi/

How to Sign Up for a personal Venmo account. Venmo. (2021). https://help.venmo.com/hc/en-us/articles/209690068-How-to-Sign-Up-for-a-personal-Venmo-account.

Mac, R., McDonald, L., Notopoulos, K., & Brooks, R. (2021, May 15). We Found Joe Biden On Venmo. Here’s Why That’s A Privacy Nightmare For Everyone. BuzzFeed News. https://www.buzzfeednews.com/article/ryanmac/we-found-joe-bidens-secret-venmo.

Mozilla Foundation. (2019, August 28). Venmo, Are You Listening? Make User Privacy the Default. Mozilla . https://foundation.mozilla.org/en/blog/venmo-are-you-listening-make-user-privacy-default/

Notopoulos, K. (2021, May 19). Venmo Exposes All The Old Profile Photos You Thought Were Gone. BuzzFeed News. https://www.buzzfeednews.com/article/katienotopoulos/paypals-venmo-exposes-old-photos?ref=bfnsplash.

Payment Activity & Privacy. Venmo. (2021). https://help.venmo.com/hc/en-us/articles/210413717.

Perelli, A. (2021, May 30). Venmo added new privacy options after President Joe Biden’s account was discovered. Business Insider. https://www.businessinsider.in/tech/news/venmo-added-new-privacy-options-after-president-joe-bidens-account-was-discovered/articleshow/83074180.cms.

Photo Credits:

https://time.com/nextadvisor/credit-cards/venmo-guide/

https://publicbydefault.fyi/

https://mashable.com/article/venmo-cash-app-paypal-data-privacy/

 

Neurotechnologies, Privacy, and the need for a revised Belmont Report
By Filippos Papapolyzos | June 18, 2021

Ever since Elon Musk launched his futuristic venture Neuralink in 2016, the popularity of the field of neurotechnology, which sits at the intersection of neuroscience and engineering, has skyrocketed. Neuralink's goal has been to develop skull-implantable chips that interface one's brain with one's devices, a technology the founder said, in 2017, is 8 to 10 years away. This technology would have tremendous potential for individuals with disabilities and brain-related disorders, such as allowing speech-impaired individuals to regain their voice or paraplegics to control prosthetic limbs. The company's founder, however, has largely focused on advertising a more commercial version of the technology that would allow everyday users to control their smartphones or even communicate with each other using just their thoughts. To this day, the project is still very far from reality: MIT Technology Review has dubbed it neuroscience theatre aimed at stirring excitement and attracting engineers, while other experts have called it bad science fiction. Regardless of the eventual success of brain-interfacing technologies, the truth remains that we are still very underprepared from a legal and ethical standpoint, due to their immense privacy implications as well as the philosophical questions they pose.

The Link

Implications for Privacy

Given that neurotechnologies would have direct, real-time access to one's most internal thoughts, many consider them the last frontier of privacy. All our thoughts, including our deepest secrets and perhaps even ideas we are not consciously aware of, would be digitized and transmitted to our smart devices or the cloud for processing. Not unlike other data we casually share today, this data could be passed on to third parties such as advertisers and law enforcement. Processing and storing this information on the cloud would expose it to all sorts of cybersecurity risks, putting individuals' most personal information and even dignity at risk. If data breaches today expose proxies of our thoughts, i.e. the data we produce, data breaches of neural data would expose our innermost selves. Law enforcement could surveil and arrest individuals simply for thinking of committing a crime, and malicious hackers could make us think and do things against our will or extort money for our thoughts.

A slippery-slope argument that tends to be made in regard to such data sharing is that, in some ways, it already happens. Smartphones already function as extensions of our cognition and, through them, we disclose all sorts of information through social media posts or apps that monitor our fitness, for instance. A key difference between neurotechnologies and smartphones, however, is the voluntariness of our data sharing today. A social media post, for instance, constitutes an action, i.e. a thought that has been manifested, and is both consensual and voluntary in the clear majority of cases. Instagram may be processing our every like or engagement, but we still maintain the option of not performing that action. Neuralink would be tapping into thoughts, which have not yet been manifested into action because we have not yet applied our decision-making skills to judge the appropriateness of performing said action. Another key difference is the precision of these technologies. Neurotechnologies would not be humanity's first attempt at influencing one's course of action, but they would definitely be the most refined. Lastly, the mechanics of how the human brain functions remain vastly unexplored, and what we call consciousness may simply be the tip of the iceberg. If Neuralink were to expose what lies underneath, we would likely not be positively surprised.

Brain Privacy

Challenges to The Belmont Report

Since 1979, The Belmont Report has been a milestone for ethics in research and is still widely consulted by ethics committees, such as Institutional Review Boards in the context of academic research, having set out the principles of Respect for Persons, Beneficence, and Justice. With the rise of neurotechnologies, it is evident that these principles are not sufficient. The challenges brain-interfacing poses have led many experts to plead for a globally coordinated effort to draft a new set of rules guiding neurotechnologies.

The principle of Respect for Persons is rooted in the idea that individuals act as autonomous agents, which is an essential requirement for informed consent to take place. Individuals with diminished autonomy, such as children or prisoners, are entitled to special protections to ensure they are not taken advantage of. Neurotechnologies could, potentially, influence one's thoughts and actions, thereby undermining their agency and, as a result, also their autonomy. The authenticity of any form of consent provided after an individual has interfaced with neurotechnology would be subject to doubt; we would not be able to judge the extent to which third parties might have participated in one's decision making.

From this point onwards, the Beneficence principle would also not be guaranteed. Human decision-making is not essentially rational, which may often lead to one’s detriment, but when it is voluntary it is seen as part of one’s self. When harm is not voluntary, it is inflicted. When consent is contested, autonomy is put under question. This means that any potential harm suffered by a person using brain-interfacing technologies could be seen as a product of said technologies and therefore as a form of inflicted harm. Since Beneficence is founded on the “do not harm” maxim, neurotechnologies pose a serious challenge to this principle.

Lastly, given that the private company would, realistically, most likely sell the technology at a high price point, it would be accessible only to those who can afford it. If the device were to augment users’ mental or physical capacities, this would severely magnify pre-existing inequalities on the basis of income as well as willingness to use the technology. This poses a challenge to the Justice principle as non-users could receive an asymmetric burden as a result of the benefits received by users.

In the frequently quoted masterpiece 1984, George Orwell speaks of a dystopian future where “nothing was your own except the few cubic centimeters inside your skull”. Will this future soon be past?

References

Can privacy coexist with technology that reads and changes brain activity?

https://news.columbia.edu/content/experts-call-ethics-rules-protect-privacy-free-will-brain-implants-and-ai-merge

https://www.cell.com/cell/fulltext/S0092-8674(16)31449-0

https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/index.html

https://www.csoonline.com/article/3429361/what-are-the-security-implications-of-elon-musks-neuralink.html

https://www.georgetowntech.org/publications-blog/2017/5/24/elon-musks-neuralink-privacy-protections-for-merged-biological-machine-intelligence-sa3lw

https://www.technologyreview.com/2020/08/30/1007786/elon-musks-neuralink-demo-update-neuroscience-theater/

https://www.sciencetimes.com/articles/31428/20210528/neuralink-brain-chip-will-end-language-five-10-years-elon.htm

https://www.theverge.com/2017/4/21/15370376/elon-musk-neuralink-brain-computer-ai-implant-neuroscience

https://www.inverse.com/science/neuralink-bad-sci-fi

Privacy Concerns for Smart Speakers
By Anonymous | June 18, 2021

By the end of 2019, an estimated 60 million Americans owned at least one smart speaker. Over the past seven years, ever since the rise of Amazon's Alexa in 2014, people have become more reliant on smart speakers to help with mundane tasks: answering questions, making calls, or scheduling appointments without opening your phone. On the same network, these smart devices can also connect to the core units of your home, such as the lighting, temperature, or even the locks. Although these devices bring many benefits to everyday life, one can't help but question some downsides, especially when it comes to privacy.

For those that do own a smart speaker such as Google Home or Alexa, how many times have you noticed that the system will respond to your conversation without actually calling on it? If your next question is: are smart devices always listening, I am sorry to inform you, yes, they are always listening.

Google Home

Even though Google Home is always listening, it is not always recording your every conversation. Most of the time, it is on standby, waiting for you to activate it by saying "Hey Google" or "Okay Google." However, you may notice that Google Home can accidentally activate without you saying the activation phrases. This is because sometimes in conversation you might say things that sound similar, which might trigger your device to start recording.

In a study by researchers at Northwestern University and Imperial College London, the Google Home Mini exhibited an average of 0.95 activations per hour while playing The West Wing, triggered by potential wake words. This high occurrence is problematic from a privacy standpoint, especially in the form of information collection and surveillance. Users aren't consenting to being watched, listened to, or recorded while having conversations in their own homes, and it can often make them feel inhibited or creeped out.
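
Some rough arithmetic shows how quickly that adds up. Only the 0.95 figure comes from the study; the daily viewing hours below are an assumption.

    # 0.95 accidental activations per hour is the study's figure for the
    # Google Home Mini; the daily TV hours are an assumed household average.
    ACTIVATIONS_PER_HOUR = 0.95
    tv_hours_per_day = 3
    per_week = ACTIVATIONS_PER_HOUR * tv_hours_per_day * 7
    print(f"~{per_week:.0f} accidental recordings per week")  # ~20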

Alexa

Alexa, on the other hand, has also had its fair share of privacy invasions. In 2018, an Amazon customer in Germany was mistakenly sent about 1,700 audio files from someone else's Echo, providing enough information to name and locate the unfortunate user and his girlfriend. Amazon attributed this to human error without offering any other explanation. It has also been revealed that the top five smart home device companies have been using human contractors to analyse a small percentage of voice-assistant recordings. Although the recordings are anonymised, they often contain enough information to identify the user, especially when the information concerns medical conditions or other private conversations.

How to secure the privacy of your smart speaker

Based on some tips from Spy-Fy, here are some steps you can take to secure your Google Home device. Other devices should have a similar process.

● Check to see what the device has just recorded by visiting the Assistant Activity page.

● If you ever accidentally activate the Google Home, just say “Hey Google that wasn’t for you” and the assistant will delete what was recorded.

● You can set up automatic data deletion on your account or tell your assistant “Hey Google, delete what I said this week”.

● Turn off the device if you are ever having a private conversation with someone. This will ensure that the data is not being recorded without your permission.

Another tip is to limit what the smart speaker is connected to in case of a data breach. The best option is to separate smart devices from other sensitive information by putting them on a different Wi-Fi network.

References:

1. https://routenote.com/blog/smart-speakers-are-the-new-big-tech-battle-and-big-privacy-debate/

2. https://www.securityinfowatch.com/residential-technologies/smart-home/article/21213914/the-benefits-and-security-concerns-of-smart-speakers-in-2021

3. https://spy-fy.com/google-home-and-your-privacy/#:~:text=Smart%20devices%20like%20Google%20Home%20are%20always%20listening.,security%20of%20their%20Google%20Home.

4. https://moniotrlab.ccis.neu.edu/smart-speakers-study-pets20/

5. https://www.theguardian.com/technology/2019/oct/09/alexa-are-you-invading-my-privacy-the-dark-side-of-our-voice-assistants

Life360
By Anonymous | June 18, 2021

Life360 is a location-sharing app that launched in 2008 and has since accumulated 27 million users. The company claims its mission is to "help families better coordinate and stay protected" by allowing open communication and complete transparency, enabling frictionless coordination when managing hectic schedules. However, the actual effect this app (and other tracking apps) has on families, especially parent-child relationships, goes far beyond coordination.

Life360 UI

Safety
One of the main selling points of the app is its safety features. Life360 operates on a freemium basis: it is free to download and use, and users can pay extra to gain access to additional features. Non-paying users have access to live location updates, driving speed, cell-phone battery levels, and saved places. Paying users additionally get roadside assistance, driving alerts, crash detection, automatic SOS, and access to 30-day travel histories. These features appeal especially to parents who want to make sure their kids are staying safe and out of trouble. However, as one can imagine, they can also be overbearing. A parent may call out their child for speeding based on the maximum speed during a drive, when the child was only speeding for a few seconds to overtake a slower car. Although the app provides features meant to increase safety, their excessiveness may actually result in a false sense of security as children try to find ways around being surveilled. Kids may choose to just leave their phones behind when going out and end up in an emergency without a way to contact their parents. Parents end up outsourcing the responsibility of keeping their children safe without actually investing time and energy in creating a healthy dialog. Alternatively, there have also been cases where kids secretly download the app onto their parents' phones to be notified when their parents are coming home.

Life360 Payment Plans

Invasion of Privacy
Children do need regular parental supervision, but there is a fine line between parental supervision and parental surveillance. Adults today are more active in managing their kids' lives than ever before, and despite our legal system's strong deference to parental freedoms and rights, using Life360 to monitor kids this way may well be an invasion of privacy. In a regular setting, individuals are able to make choices about whether they want to use the app or turn on their location. However, in the parent-child context, children are dependent on their parents and must do as asked. Realistically, as long as kids are living at home, there isn't real privacy. Even when they're college students, as long as they're financially dependent on their parents, they don't have the full freedom to choose.

Impact on Parent-Child relationships
Installing an app like Life360 on a child's phone may impact their trust or their ability to practice independence. The convenience and peace of mind that parents gain from being able to just check the app whenever they want comes at the cost of communication with their child and the focus needed to build a real relationship. Children no longer get to experience the same freedoms their own parents had of just being "back by dark" and are instead pushed to follow schedules that always keep their parents in the loop. This kind of surveillance adds unnecessary stress: even if they aren't doing anything harmful, kids feel pressured to notify their parents about anything unexpected that comes up, like stopping for ice cream or dropping things off at a friend's house. The app's presence leads kids to feel like they're constantly being watched, even if their parents aren't always monitoring. Even from the parents' perspective, there are some things they would rather not know. For example, if the app reports that their child is speeding, it becomes difficult to ignore that piece of information. The use of tracking apps may also signal a lack of faith in children, which can be disheartening and discouraging. It can make children less likely to confide in their parents when problems arise outside of the app's scope of detection.

Is There a Solution?
Life360 is a prime example of how well-intended tools can be misused or have unintended consequences. The availability of such a product has the power to shape parent behavior: parents who may not have previously thought such a product was necessary now feel they should use it simply because it is an option. They are likely to jump in with the idea that having safety measures is always better, without fully understanding the possible repercussions of using the app. Additionally, the presence of so many features pressures parents to utilize and check all of them. A "crash detection" feature immediately causes parents to stress out and get anxious more than normal. The app can change people's behaviors in ways that likely were never intended, adding stress to both parents' and children's lives. It can work well for adults who can make their own decisions about whether or when to use the app: they can ensure safety when walking home at night and easily share their location if lost or stranded. But in the parent-child relationship, the dynamics make the use and consequences of the app complicated. This raises the question of what responsibilities the creators of these apps have. Or does it fall entirely to the user to make sure the app is used responsibly?

https://www.wired.com/story/life360-location-tracking-families/

https://www.life360.com/privacy_policy/

https://www.theintell.com/news/20190809/tracking-apps-spark-debate-over-protection-and-privacy/1

Why Tracking Apps Are More Harmful Than Helpful (OPINION)

https://www.forbes.com/sites/thomasbrewster/2020/02/12/life360-comes-at-you-fast–cops-use-family-surveillance-app-to-trace-arson-suspect/?sh=5518dbd5380a