How to seek abortion care with a minimal data trail

How to seek abortion care with a minimal data trail
Jenna Morabito | July 7, 2022

Thinking like a criminal in a post-Roe v. Wade country

America is starting to roll back human rights: law enforcement is already using facial recognition to arrest protestors, and could start using location and menstruation data to prosecute women seeking an abortion. It is important that people know what options they have to stay private, and are aware that privacy comes with significant challenges and inconveniences.

Step 1: Remove app permissions

This advice has been all over the internet since the Supreme Court overturned Roe v. Wade, but it bears repeating: request that your period tracking app delete your data, close your account, and delete the app. Cycle tracking apps and other businesses have been caught selling health data to Facebook and other companies, giving law enforcement more places to find your data. Even if companies don't sell your data, they can be compelled by subpoena to turn it over to law enforcement in the event of an investigation. I am not a legal professional, but my guess is that using a foreign-owned app isn't enough either: the European-based cycle tracking app Clue might have to relinquish your data to US law enforcement under the EU-US Data Protection Umbrella Agreement. How to request erasure varies by app, but a tutorial on deleting your data from Flo can be found here and one method of tracking your cycles by hand can be found here.

Then, go through all your apps and remove as many permissions as you can. This isn't a trivial task; I found multiple places in my phone settings to restrict apps' data access, and had to go through each one individually. As part of removing permissions I turned off location tracking on my phone, but your location can still be pinpointed very accurately with Wi-Fi location tracking (your phone communicating with every network on the street you're walking down), and your general location is known to the sites you visit through your IP address, a unique identifier for your device. The best way to disable Wi-Fi tracking is to turn off Wi-Fi on your phone, and you can hide your IP address by using a Virtual Private Network.

Step 2: Secure your web browsing

Step 2.1: Get a VPN

A Virtual Private Network, or VPN, is a privacy heavyweight. Without a VPN, the government, businesses, and other entities can track your IP address across websites and, depending on the security protocol of the specific site, even see your activity on that site, meaning that law enforcement could see that you visited websites having to do with pregnancy or abortion. A VPN is your shield: you (and your unique IP address) connect to the VPN's server, and the VPN forwards your request on, substituting its own IP address for the website to send information back to. In addition, it encrypts everything about your request. Outside observers can no longer link your IP address to the websites you visit or to what you do on them.

A VPN sends all your traffic through an encrypted tunnel.
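
If you want to see the IP-address point for yourself, here is a minimal sketch of how to check which address websites actually see, once with the VPN off and once with it on. It assumes the Python requests library and the public ipify service, neither of which the article mentions; they are just one convenient way to echo back your visible IP.

```python
import requests  # third-party HTTP library; not mentioned in the article

def public_ip() -> str:
    """Return the IP address that websites currently see for this machine."""
    # api.ipify.org is a public service that simply echoes back the
    # address your request arrived from.
    return requests.get("https://api.ipify.org", timeout=10).text

print("IP visible to websites:", public_ip())
# Run this once with the VPN off and once with it on; if the VPN is
# working, the two addresses should differ.
```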

Not all VPNs are created equal, though. As your shield, they can see what websites you go to, and some VPNs turn around and sell that information. Others don't sell it but do keep records of your activity, which means we have the same subpoena problem as before, so it's important to use a VPN that doesn't keep logs. There are a few; the one I use is NordVPN. I encourage you to do some research and find a good service for you, but Nord doesn't keep logs, has a variety of security features, and can be bought with cash at places like Best Buy or Target, making it an anonymous purchase. It also has seamless integration with the collection of technologies called Tor, which makes the next step simpler. One thing to note, though, is that if you are logged in to your Google account at the browser or device level, or use any Google apps on your phone, then Google sees and stores all your browsing data before it gets encrypted, bypassing the privacy that the VPN offers. Therefore, I suggest using Google services only when strictly necessary, and switching to more privacy-focused services like the ones I recommend in later sections.

Step 2.2: Use Tor

Tor is two things: a networking system and a special browser. The networking system wraps your request in three layers of encryption and bounces it between three randomly chosen servers; each server peels off one layer, so only when the request exits the network can the destination website read it. Then, the information you requested is triple-encrypted and sent back to you along a different route.

To see the interactive version, click here.
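
To make the layered-encryption idea concrete, here is a toy sketch in Python using the cryptography library's Fernet cipher. Real Tor uses its own circuit-building protocol and key exchange, so this only illustrates the onion principle of one encryption layer per relay; every name in it is mine, not Tor's.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# One key per relay: entry (guard), middle, and exit node.
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys: list) -> bytes:
    """Client side: encrypt for the exit node first, then middle, then entry."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def unwrap(onion: bytes, keys: list) -> bytes:
    """Network side: each relay peels exactly one layer with its own key."""
    for key in keys:
        onion = Fernet(key).decrypt(onion)
    return onion

request = b"GET https://example.org/clinic-info"
assert unwrap(wrap(request, relay_keys), relay_keys) == request
# No single relay can read the request: the entry node sees only ciphertext
# addressed to the middle node, and only the exit node sees the plaintext
# request (but not who originally sent it).
```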

To access the network, you download the Tor Browser, or use the network through NordVPN or another service with Tor integration. Though Tor is a sophisticated technology that makes tracking people a headache, it's not perfect. Cybernews.com puts the additional steps that should be taken when using Tor succinctly:

  1. Don’t log into your usual accounts – especially Facebook or Google
  2. Try not to follow any unique browsing patterns that may make you personally identifiable.
  3. Turn the Tor Browser’s security level up to the max. This will disable JavaScript on all sites, disable many kinds of fonts and images, and make media like audio and video click-to-play. This level of security significantly decreases the amount of browser code that runs while displaying a web page, protecting you from various bugs and fingerprinting techniques.
  4. Use the HTTPS Everywhere extension. This will ensure you’re only browsing HTTPS websites and protect the privacy of your data as it goes between the final node and the destination server.
  5. As a general rule, never use BitTorrent over Tor. Although people illegally pirating copyrighted content may wish to obscure their real identity, BitTorrent is extraordinarily difficult to use in a way that does not reveal your real IP address. Tor is relatively slow, so BitTorrent is hardly worth using over Tor anyway.
  6. Most importantly, always keep Tor Browser (and any extensions) updated, reducing your attack surface.

You shouldn't log in to Facebook or Google because those sites are notorious for tracking you, but also because logging into any site with personally identifiable information is a clear indication that you've been there, much like using your keycard to enter your office building. I will discuss this more later, but it means that special precautions should be taken when ordering abortion pills online, as your shipping and payment information lead straight to you. In short, you don't need to use Tor every day, only when searching for sensitive information. For suggestions on a more casual privacy-focused browser, check out this list.

Additionally, you should always turn your VPN on before using Tor; otherwise you may attract your internet service provider's suspicion. While Tor is legal to use, and the New York Times even has a Tor-specific website so that people living under oppressive regimes can access outside news, Tor is known to facilitate serious crime: drug trafficking, weapons sales, even distributing child pornography. I feel significant discomfort recommending that an average citizen use a technology that is in part sustained by truly evil crime, but what's the appropriate balance between privacy and public safety if you can't trust your government to protect basic human rights?

Step 3: Ditch Google

Google is ubiquitous, convenient, and free to use – although since you’re not paying with money, you’re paying with your data. I’m focusing on Google here since it’s so popular, but the type of data that Google collects is likely collected by other big companies, too.

Google Chrome and Google Search each know a surprising amount about you: your device's timezone, language, privacy settings, and even what fonts you have installed. Together, these attributes make up your browser's potentially unique fingerprint, and you can check out yours here. According to RestorePrivacy.com, there's not a lot you can do about your smartphone's fingerprint, but you can minimize digital fingerprinting on your computer by tinkering with settings and add-ons or changing browsers. The browser Brave offers built-in fingerprint protection as well as easy Tor access and seems like a good out-of-the-box option for those who don't want to fuss, although I haven't tried it myself.
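
To see why those scattered attributes matter, here is a toy sketch of how a site can boil them down into a single identifier. The attribute values below are made up, and real fingerprinting scripts collect far more signals, but the principle of hashing whatever the browser will reveal is the same.

```python
import hashlib
import json

# Made-up examples of attributes a site can read without asking permission.
attributes = {
    "timezone": "America/Los_Angeles",
    "language": "en-US",
    "do_not_track": "1",
    "fonts": ["Arial", "Helvetica", "Comic Sans MS"],
    "screen": "1920x1080x24",
    "user_agent": "Mozilla/5.0 (...)",
}

# Any stable serialization of those values works as a fingerprint.
fingerprint = hashlib.sha256(
    json.dumps(attributes, sort_keys=True).encode()
).hexdigest()

print("Browser fingerprint:", fingerprint[:16], "...")
# If this combination of attributes is rare among visitors, the hash can
# identify you across sites even with cookies blocked and your IP hidden.
```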

Most notably, Chrome and Google Search store your browsing history, which law enforcement might then be able to access with a keyword warrant, identifying individuals who made searches deemed suspicious. To avoid drawing attention when searching for abortion providers, you should switch to a search engine that logs the minimum amount of data and/or stores the log outside the U.S. and EU. There isn't any one silver bullet, but RestorePrivacy.com suggests a few privacy-focused search engines here.

Google Maps similarly stores your data, sells it, and will share it with law enforcement, so you may want to use DuckDuckGo's free map service built on top of Apple Maps, although you lose some of Apple Maps' functionality. Email providers' privacy violations are particularly bad, though: Google was caught reading users' emails in 2018, and Yahoo scanned all users' incoming emails in real time for the FBI; American companies can be compelled to cooperate under surveillance programs like PRISM. Google also scanned users' emails for receipts and stored them, which would be dangerous for those buying abortion pills.

Beyond just reading your emails, having a Google account signs you up for all sorts of tracking; check it out yourself at myactivity.google.com. I found YouTube videos that I'd watched years ago and every hole-in-the-wall café I've ever been in; the information that Google has on me creates a fuller timeline of my life than my journal does. More private email services are less convenient and typically require a paid subscription, but a tutorial on how to evade regressive laws wouldn't be complete without discussing communication technologies. This article does a good job laying out the pros and cons of different email services as well as explaining which jurisdictions to look out for: U.S. and some European companies are subject to invasive requests from law enforcement, so even if Google stopped selling data to advertisers, users would still be at risk.

Finally, be careful where you send messages. Text messages are not encrypted, meaning that they can be read by your service provider. Signal is a free messaging app that only stores three things about you: "the phone number you registered with, the date and time you joined the service, and the date you last logged on" (RestorePrivacy.com). Signal doesn't store who you talk to or what you talk about, and it end-to-end encrypts your messages, making it a good place to make sensitive plans.

Step 4: Seek medical care (carefully)

With this knowledge, I hope that you can safely seek the care you need. With today's level of surveillance, I'm not confident that I even know all the ways our movements are tracked once we leave the house: license plate scanners give our travel path in almost real time, GPS-enabled cars do the same, so does calling an Uber, and so on. Assuming that you can drive 400+ miles to the nearest abortion clinic without detection, you'll want to buy a simple burner phone for your travels so that mobile proximity data (what other mobile phones your phone has been near) doesn't give you away, and I suppose that you should pay for your hotel room with a prepaid Visa card that you bought in cash.

Or, if at all possible, get a friend living in a state where abortion is legal to buy the pills for you and ship them to you, so you can avoid revealing your payment information and address online. Coordinate with them on Signal and pay them back with a prepaid Visa card.

Conclusion

It’s dystopian. The American government has abandoned its citizens in a new and exciting way and as always, has disproportionately harmed poor womxn. I know that this tutorial isn’t the most accessible since it requires time, good English, and computer literacy to implement. I know that privacy costs money: NordVPN charges $40-70 a year, a private email account is $20-50 a year, and that’s before the cost of an abortion or travel expenses.

However, I didn’t master all these technologies in a day, either. I’ve implemented – and continue to implement – more private practices around my data one at a time. The world is scary right now, but we can help each other get through it by sharing our expertise and resources. I hope that I’ve helped clarify how to take control of your data, which is a tedious process but empowering. We don’t owe corporations, and certainly not law enforcement, any of our intimate information.

“I can promise you that women working together – linked, informed, and educated – can bring peace and prosperity to this forsaken planet.” — Isabel Allende

Or at least to my forsaken search history.

Roe v Wade Overturned: Creating Unforeseen Data Privacy Consequences?

Roe v Wade Overturned: Creating Unforeseen Data Privacy Consequences?
Anonymous | July 7, 2022

The Supreme Court's jarring decision to overturn Roe v. Wade, clearing the way for abortion bans in many states across the country, has created an unforeseen consequence for data privacy. In an increasingly digital era, will data about abortions be used against women?

Data Privacy Concerns in The Aftermath of Roe v. Wade

The Supreme Court's jarring decision to overturn Roe v. Wade has brought the United States back to an era before 1973. However, the increasingly digital world of 2022 is vastly different from that of 1973, raising concerns that simply did not exist then. Digital footprints are now regularly used in legal prosecutions, raising the concern that data may be used against those seeking abortions. In addition to call logs, texts, and other forms of communication, there now exist location-specific data, payment records, and a whole host of information about abortion-seeking women that did not exist in the past. What is even more frightening is the lack of protections and governance around this data. The question becomes: who will govern this new, uncharted data territory?

The Current Landscape for Abortion Data Privacy

In the past, there have been several known cases in which text messages and search history were used in abortion convictions. However, this data sphere now extends well beyond basic forms of communication: a simple Uber ride to an abortion clinic, for example, may become evidence in a legal case. The United States is not yet prepared for the data privacy concerns that lie ahead. The Supreme Court deferring abortion to individual states presents a unique gray area for data governance regarding those abortions. Should data governance and protection come from the federal level of government? Or is this too a state's decision to make? And who will regulate technology companies when it comes to collecting and protecting user data? These are all questions that are currently unanswered. In response to the overturning of Roe v. Wade, many companies have issued general advisories to the public on how best to protect their abortion data, with figure 1 being a prominent example of this trend.

A Company trying its best to help users navigate data privacy for abortion related data.

Ethical and Privacy Concerns of Abortion Data

With the emergence of this new field of data privacy for abortion-related data, there are several opportunities for women's data to be exploited that need to be further explored. These concerns are best examined through the lens of Solove's Taxonomy, which divides potential data privacy harms into several facets. In the context of abortion-related privacy concerns, some facets are more relevant than others. The first relevant facet is information processing, which covers concerns such as secondary use of data. In this case, it is entirely possible that sensitive data about abortions, for example from an app that tracks periods, can be sold and exploited by a secondary party. The next relevant facet of Solove's Taxonomy is information dissemination, which is perhaps the most relevant concern for abortion-related data. This facet covers breach of confidentiality as well as blackmail. For a user's data that may be tied to an abortion, there is absolutely a chance that the confidentiality of this data can be breached. Additionally, if this data falls into the wrong hands, it can be used to blackmail users as well. Lastly, as shown in figure 2, the country remains heavily divided on the issue of abortion. Another question is how states will share information about abortions when people travel across state lines to gain access to them. For example, if someone travels from Arizona to California for an abortion, how will information sharing work between the two states? There need to be more concrete protections around the flow of information from state to state.

A country divided when it comes to the issue of abortion. How does information flow between these divided states?

What Lies Ahead

In this new era of outlawed abortions in parts of the country, there needs to be more proactive legislation that protects the privacy interests of individuals tied to an abortion. Perhaps the most concerning situation requiring protection is the bounty system some states, such as Texas, are employing to police abortions. There, any citizen can file a lawsuit to report an abortion and potentially win $10,000. With the digital trail of abortion-related data only growing, how will someone's data be kept safe to ensure it is not exploited or used against them? There remain major governance and ethical questions surrounding this issue that may take longer than expected to sort out.

AI For Gun Regulation: Not the Best Solution?

AI For Gun Regulation: Not the Best Solution?
Joyce Li | July 7, 2022

Though school districts are pouring money into AI-based gun detection products in light of recent shootings, widespread implementation of these products may not be overall beneficial for society.

Image 1: Gun-free sign outside of a school.

 

In the first 6 months of 2022, there have been at least 198 mass shootings in the United States. Especially in the light of the recent Highland Park Parade and Robb Elementary Uvalde shootings, many groups have begun lobbying for increased gun control regulations as a response to increasing gun violence throughout the country. In the meantime, schools and government agencies have turned their attention to AI products that can aid with gun detection. Existing technologies vary from AI camera feed scanners to 3D imaging and surveillance products. These products aim to detect guns quickly before a shooting happens; the idea is that if people and law enforcement are notified that there is a gun in the area, people can get to safety before any harm can be done. However, with technical limitations along with the high prices associated with the technology, it is difficult to justify AI gun detection technology as the best solution to reducing gun violence in the US. Despite the beneficial intent behind these systems, there is much skepticism about whether or not these systems are ready for deployment.

Product Limitations

Many gun detection products are already on the market and actively being used by schools and government agencies. However, these products have several limitations in their current use cases. For example, ZeroEyes is a product that scans school camera feeds using ML to identify up to 300 types of firearms. Because state-of-the-art algorithms are not quite at a stage where accuracy can be 100%, the company employs military veterans around the clock to check whether flagged footage actually shows a gun. However well the technology could work, a major issue with the system is that the algorithm is only trained to classify guns in plain sight. This brings into question how useful the technology really is, given that experienced shooters would not typically brandish their guns in plain sight when planning a large-scale shooting. (A sketch of this flag-then-review workflow follows the image below.)

Image 2: ZeroEyes gun detection example.
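
The detect-then-verify workflow described above can be sketched in a few lines. The sketch is hypothetical: the detect function stands in for a proprietary model like ZeroEyes', and the names, labels, and confidence threshold are all my own assumptions, not the company's API.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

@dataclass
class Detection:
    frame_id: int
    label: str
    confidence: float

def build_review_queue(
    frames: Iterable[Tuple[int, bytes]],                 # (frame id, image bytes)
    detect: Callable[[bytes], List[Tuple[str, float]]],  # placeholder ML model
    threshold: float = 0.6,                              # assumed cutoff
) -> List[Detection]:
    """Queue possible firearm detections for human review instead of auto-alerting."""
    flagged = []
    for frame_id, image in frames:
        for label, confidence in detect(image):
            # Non-firearm objects and low-confidence hits are dropped; the rest
            # go to the human analysts described above.
            if label == "firearm" and confidence >= threshold:
                flagged.append(Detection(frame_id, label, confidence))
    return flagged
```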

Another example is the company Evolv, which makes an AI-based metal detector that aims to flag weapons in public spaces. The system is already being used as an entry scanner by sports stadiums such as SoFi Stadium and by North Carolina's Charlotte-Mecklenburg school system. When a suspicious object is detected, a bounding box is immediately drawn around the area where a person could be carrying the item, allowing security officers to follow up with a manual pat-down. Again, despite the tool's potential to reduce friction in the security process and its establishment in the market, it still has major technical limitations. One major issue is that the AI still mistakes other metal objects for guns, such as Google Chromebooks — quite a large mistake to be making in a school setting. The fact is that ML models are still nowhere near perfect, and the fact that manual checks remain a large part of both products signals that gun-detection AI may not be ready for full adoption.

Ethical Implications

In addition to the technical limitations of state-of-the-art gun detection algorithms, there are many questions about how ethical such products are when put into use. The main concern is how to ensure that these safety tools can be accessed equally and fairly by everyone. The justice principle from the Belmont Report emphasizes that the benefits of a fair product should be distributed equally. Because these products have such steep costs (for reference, ZeroEyes costs $5,000/month for 200 cameras and Evolv costs $5 million for 52 scanners), it does not seem that schools and other institutions in low-income communities can afford such security measures. These prices seem even more unfortunate given that areas with higher income inequality are more likely to see mass shootings. This also leads to the question of why school districts, which are already notoriously underfunded in the US government system, should have to spend extra money just to ensure that students can stay safe at school.
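
To put those quoted prices in per-unit terms, here is a quick back-of-the-envelope calculation using only the figures above:

```python
# Back-of-the-envelope unit costs implied by the prices quoted above.
zeroeyes_per_camera_month = 5_000 / 200   # $25 per camera per month
zeroeyes_per_year = 5_000 * 12            # $60,000 per year for 200 cameras
evolv_per_scanner = 5_000_000 / 52        # about $96,000 per scanner

print(f"ZeroEyes: ${zeroeyes_per_camera_month:.0f}/camera/month, "
      f"${zeroeyes_per_year:,.0f}/year for 200 cameras")
print(f"Evolv: ${evolv_per_scanner:,.0f} per scanner")
```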

According to another aspect of the Belmont principles, all subjects should be treated with respect, meaning that they should be treated as autonomous and given the choice to participate in monitoring or to opt out. With such widespread technology, to what extent should students or sports arena attendees be monitored? How does an institution get consent from these people to be scanned and to have their data used to train machine learning algorithms? There has always been a balance to strike between privacy and security, but with gun detection AI there seems to be no way for people to choose whether to opt in or out. This raises many questions as to how much these AI products could harm their users despite the proposed benefits.

Looking Forward

While AI technologies have the potential to be useful one day, it seems that they are not ready to be the optimal solution to ending gun violence in America. These solutions need to surpass current technical limitations, become scalable, and address many ethical questions before they become widespread. Companies and schools could spend their money better elsewhere: providing mental health resources for students and employees, funding institutional security measures, and lobbying for more comprehensive gun control laws.

Pegasus, Phantom, and Privacy

Pegasus, Phantom, and Privacy
Kaavya Shah | July 7, 2022

NSO Group’s mythically monikered technologies are putting American privacy at risk, but tech users can rest assured for the time being.

The Israeli cybersecurity company develops surveillance technologies for government agencies.

Earlier this year, the New York Times broke news of the FBI's secret use of NSO technologies, which the agency claimed was purely to understand how leading cyberattack technologies function. However, there has been great controversy over the use of this software, with huge implications for personal privacy.

Who is NSO and what is Pegasus?

NSO Group is a cybersecurity company from Israel that creates technologies to “prevent and investigate terrorism and crime” (https://www.nsogroup.com). One of its better-known products is Pegasus, which can hack into the contents of any iPhone or Android phone without even sending a suspicious link. This gives access to almost all of the contents of a phone, ranging from photos and messages to location tracking and recording capabilities. However, there is one notable limitation in Pegasus: the Israeli government requires that, by design, it cannot be used to hack into phones with an American number; this prevents both Americans and non-Americans from surveilling American phones.

This software has been incredibly useful for detecting and preventing criminal and terrorist plots, but governments have also deployed it against journalists, activists, and protestors. Because of the many documented cases of NSO surveillance tools being used to spy, there is widespread apprehension about the creation and use of its technologies.

What is the FBI doing with it?

Given that Pegasus is inoperable on American numbers, why is there a conversation about NSO Group working with the FBI? To give the American government software it could actually test, NSO demonstrated a similar product called Phantom, which received special permission from the Israeli government to hack into American devices and could only be sold to American government agencies.

With this new software that could be used in America, the FBI and the Justice Department set out to determine whether Phantom could be used in accordance with American wiretapping laws, which are strengthened by the 4th Amendment's constitutional protection from unreasonable searches and seizures. Consider CalECPA, the California Electronic Communications Privacy Act, which specifies that searching and seizing electronic information also requires a warrant supported by an affidavit; because of this, even if Phantom provides the technical capability to obtain information, there is still a high legal barrier: obtaining a warrant.

However, there was public outrage over the fact that the FBI had purchased and used spyware from NSO. Due to this, the New York Times has filed a Freedom of Information lawsuit against the FBI, demanding that information on the FBI’s testing and use of NSO tools be released before August 31, 2022.

Should we be worried?

Ultimately, the FBI decided against purchasing Phantom from NSO Group to surveil Americans. In addition to this decision behind closed doors, in November 2021 the Biden administration added NSO Group to the Commerce Department's Entity List of blacklisted companies, severely limiting NSO's ability to use American tech. This decision has been met with controversy, as the Israeli government took it as a political attack on the country, while the Biden administration argues that the decision was made purely on the basis of supporting human rights.

Lockdown Mode, a new feature in iOS 16, secures Apple devices from outsiders like NSO’s Pegasus.

Additionally, Apple users can rejoice at the recent announcement of “Lockdown Mode” on July 6. Apple produced this security feature as a direct response to research by the University of Toronto's Citizen Lab, which showed that Pegasus could hack into iPhones through the iMessage feature. The feature was added specifically for people who may fear a Pegasus or Phantom attack; it imposes extreme limitations on device functionality that severely reduce the potential for a successful cyberattack, effectively strengthening your security. However, it is important to note that this feature will only benefit those who can actually afford expensive Apple devices. While the Belmont Report's principle of justice points out that ethical solutions should provide equal treatment to all people, technological improvements continue to be restricted by their costs, widening the injustices of access to technology. So even though there is a solution to protect individuals from Pegasus and Phantom attacks, ownership of devices with these capabilities is entirely dependent upon a person's disposable income.

NSO Group is providing government agencies across the world with highly invasive surveillance technologies, with little to no regulation on how the software is used. However, for the time being, American cell phone owners, and especially American iPhone owners, do not have to worry about a Phantom attack any time soon.

References

  1. https://www.cnbc.com/2022/03/03/apple-and-fbi-grilled-by-lawmakers-on-spyware-from-israeli-nso-group.html
  2. https://www.nytimes.com/2022/01/28/magazine/nso-group-israel-spyware.html?searchResultPosition=7
  3. https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201520160SB178
  4. https://www.cnet.com/news/privacy/us-sanctions-nso-group-over-pegasus-spyware/
  5. https://www.washingtonpost.com/technology/2022/07/06/apple-spyware-lockdown-pegasus/
  6. https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf

Images

  1. https://www.reuters.com/world/us/exclusive-us-lawmakers-call-sanctions-against-israels-nso-other-spyware-firms-2021-12-15/
  2. https://9to5mac.com/2022/07/06/iphone-lockdown-mode-ios-16/

Data After Death: Taking Your Data Too Far 

Data After Death: Taking Your Data Too Far 
Elias Saravia | July 7, 2022

With the rise of artificial intelligence, people’s personal data is now being exploited more than ever for profit in the entertainment industry, especially after their death. In this new technological wave, it is important to bring awareness to the ethical issues behind data collection and utilization of individuals after their death.

AI in Entertainment: 

On December 20th, 2019, Star Wars: The Rise of Skywalker was released and used artificial intelligence technology known as a deepfake to bring Princess Leia “back to life”, bringing both excitement and uneasiness to viewers following Carrie Fisher's death on December 23rd, 2016. This summer, Selena, a Latin pop singer who died 27 years ago, will release a new album with 3 new AI-made tracks built from audio files recorded when she was thirteen.

As continuous innovations are made in the field of artificial intelligence, so is the use of this technology in the entertainment industry to recreate datafied versions of people after their death. This begs the question: What are the ethical issues behind the data collection and utilization of individuals after their death?

Deepfakes and AI 

One particularly infamous technology used in the entertainment industry is the deepfake, which relies on a type of neural network called an autoencoder and uses prior data to “reconstruct people” by swapping faces or manipulating facial expressions. It can also “[analyze] the voice records of a person…and [imitate] the voice and the accent, pitch, and pace of a speech” (TRT World). The use of deepfakes continues to rise, and “the number of expert-crafted video deepfakes double[s] every six months” (Cybernews).
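
For readers curious what an autoencoder actually is, here is a toy PyTorch sketch of the encode-then-decode structure that deepfake face-swapping builds on. The class name, dimensions, and layer sizes are all invented for illustration; a real deepfake pipeline trains one shared encoder with a separate decoder per identity, plus a great deal of data and tooling not shown here.

```python
import torch
from torch import nn

class ToyFaceAutoencoder(nn.Module):
    """Compress a 64x64 face to a small latent code, then reconstruct it."""
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x)).view(-1, 1, 64, 64)

faces = torch.rand(8, 1, 64, 64)              # stand-in batch of grayscale faces
reconstruction = ToyFaceAutoencoder()(faces)  # same shape as the input batch
# Face-swap deepfakes train one shared encoder with two decoders (one per
# identity); encoding person A and decoding with person B's decoder yields
# the swapped face. This toy model only shows the encode/decode structure.
```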

Ethical and Privacy Concerns 

Ethical issues arise when analyzing the use of this technology and someone's data after death. According to the Belmont Report, a list of basic ethical principles and guidelines for research, there are concerns around respect for persons and justice. Respect for persons indicates that people's data must be respected and that they should have full autonomy over what can or cannot be collected from them. Therefore, before using someone's data after death, it is important to request informed consent for that use, either from the person before their death or from their family. Justice, meanwhile, reminds us to ask who receives the benefits and who bears the burdens. If someone's data is exploited for another party's benefit in a way that harms a person or community, that would be a violation; for example, using someone's face to spread misinformation online after their death by putting words in their mouth. In the cases of Carrie Fisher and Selena, it is important to ask whether they consented to the use of their data after death, whether they are being treated with respect, and whether their data is being used fairly.

Furthermore, we can apply the OECD privacy principles to identify potential violations, including but not limited to use limitation, openness, and accountability. To avoid violating these principles, it is important not to take advantage of someone's data after death: limit its use to necessary purposes, be transparent with the public about how the data is collected and used, and be accountable for any mishandling of it, respectively.

Beyond these principles, privacy and security violations can already be seen with this technology. In one instance, a deepfake video showed MIT Prof. Sinan Aral endorsing an investment fund's stock-trading algorithm without his awareness or consent (Wall Street Journal). Technology like this can harm a person's identity and reputation as well as spread misinformation. Imagine having your data and likeness used in this manner even after death. At the moment, the United States has no federal legal protections for someone's data after their death. Such a safeguard is known as post-mortem privacy protection: the ability to control the dissemination of one's personal information after death. Today it is essentially unavailable, given the limited legal protections around a deceased person's data.

Moving Forward 

Although there are limited legal protections for your data after death, bringing awareness to the ethical issues and privacy concerns is a good start. In addition, it is important to take part in movements calling for legal protection of data rights in the US. One possibility is advocating for government documents to include how you want your data to be used after death, much like consenting to organ donation. Lastly, follow practices that protect your privacy, such as limiting your digital footprint and understanding companies' privacy policies and practices.

Citations 

https://www.inputmag.com/culture/new-selena-album-ai-singer-voice-unreleased-songs

https://www.wsj.com/articles/deepfake-technology-is-now-a-threat-to-everyone-what-do-we-do-11638887121

https://www.theverge.com/2020/2/5/21123850/star-wars-rise-of-skywalker-vfx-carrie-fisher-leia-practical-effects

https://www.esquire.com/entertainment/movies/a30429072/was-carrie-fisher-cgi-in-star-wars-the-rise-of-skywalker/

https://en.wikipedia.org/wiki/Deepfake#:~:text=Deepfakes%20rely%20on%20a%20type,image%20from%20the%20latent%20representation.

https://www.trtworld.com/magazine/fun-or-fraud-the-rise-of-deepfake-voice-technology-can-make-the-dead-sing-48373

https://slate.com/technology/2019/08/vocal-deepfakes-music-human-machine-collaboration.html

https://news.ucr.edu/articles/2021/08/04/artificial-intelligence-bringing-dead-back-life-should-it

Privacy in the Workplace Metaverse of Madness

Privacy in the Workplace Metaverse of Madness
Anonymous | July 7, 2022

Two people with augmented reality headsets collaborating on an architectural design

The privacy implications of the metaverse in the workplace must be thoroughly understood and addressed before it becomes our new reality.

What is the Metaverse?

First coined by Neal Stephenson in his novel Snow Crash, the term described a virtual reality escape for those living in a dystopian society, whose physical selves were replaced by avatars.

What was once just an idea based in science fiction is now becoming a virtual reality. Rather than promoting it as an escape from reality, as in the immensely popular game VRChat, technology companies across the professional landscape are racing to infiltrate the corporate world with promises of unprecedented efficiency and innovative collaboration experiences.

Paving the Way for Remote Work

According to Ladders, “25% of all professional jobs in North America will be remote by end of next year”.

The COVID-19 pandemic has forced employers to reconsider which roles in their companies need to be in person. It has also precipitated what is referred to as the Great Resignation – employees taking a step back to reassess what is important in life and making drastic career decisions as a result. This shift to remote work is forcing companies to get creative in how they can optimize the experience while also ensuring accountability. It’s no surprise that the metaverse fits nicely into this equation.

Whose metaverse pool will you be swimming in? Will you dip your toe in the water with augmented reality? Or will you dive head first into an immersive virtual reality? The choice may depend on your employer, several of whom have already created the Metaverse Standards Forum.

Privacy Implications

Regardless of how immersive the experience is, be it a pair of glasses worn around the office to facilitate virtual collaboration or a headset you wear while in your pajamas on the couch, they all share similar implications when it comes to your privacy. Shoulder surfing may be a thing of the past as employers will get a front-row view of your experience.

Shoulder surfing with one person at their computer and another watching over their shoulder

The metaverse implementations and policies are still evolving, but we can look to Meta’s Horizon Workrooms as an example of how companies may address privacy concerns going forward. Horizon Workrooms leverages the Meta Quest headset to provide companies with the ability for their workers to collaborate with each other in a virtual reality environment.

Horizon Workrooms image containing 4 people in a virtual environment seated around a desk

If we overlay Solove’s Taxonomy of Privacy on top of the Quest’s privacy policy, we can get a better understanding of the true scope of impact for the risks to individual privacy that will need to be balanced with the increase in collaboration and accessibility.

In this virtual workroom, the employee, and those they interact with, are under constant surveillance in a way that could not practically be implemented in years past. Cameras already exist in the workplace but are limited in the level of detail they capture. The Quest collects additional data points not regularly seen in privacy policies, including physical features, how the subject moves through physical space, and detailed speech. The observed and recorded dimensions of personal privacy have expanded beyond observations from an imperfect third-person perspective to a high-quality first-person one. Layering artificial intelligence on top of this data, aggregated with both the quantitative and qualitative output of employees, can be enticing to efficiency-obsessed employers.

Given the sensitive nature of the data, at a level of detail unattainable in years past, this information could be valuable not just to the company in its never-ending effort to increase profits via employee efficiency, but could also serve as a foundation for psychological analysis and behavior research, and it would pose a serious security risk if obtained by nefarious individuals or organizations seeking a competitive advantage or compromising material.

Within the Quest's privacy policy, Meta explicitly states that there are secondary uses for this information, such as the improvement of speech recognition systems. Though written in the 1970s, the Belmont Report retains its relevance even when considering such modern innovations. If and when the improvements to these systems are made, who will be the beneficiaries of such advancements? Per the report's principle of justice, the policy could go further in explaining how this data could be leveraged to improve the lives of those with speech or language disabilities, rather than only those privileged enough to afford one of these devices.

While the metaverse can close physical gaps between employees to facilitate global collaboration, this increase in accessibility introduces new challenges. In the United States of America, the Americans with Disabilities Act has protected those with disabilities and ensured them a safe and productive workplace. Additional protections for those potentially disadvantaged by the use of these devices would need to be evaluated to address accessibility concerns.

It would also be reasonable to assume this is a shared device: an employee might use it within Horizon Workrooms while their children use it for their favorite game. The privacy policy does not explicitly differentiate the collection of data from different users at the device level versus the application level. There is a risk of disseminating information about those without the autonomy to consent.

The Meta Quest policy also states that Meta collects identifying information to ensure the safety of its users. This removes anonymity and, combined with the other data points being collected, increases the impact of any information disclosure or invasion of privacy for users. It would be reasonable to expect this type of moderation leveraging personally identifiable information to be replicated across other companies' metaverses to ensure a safe environment that encourages overall adoption of this technology.

Proceed, with Caution

Breaking down the barriers of physical limitations by enhancing our reality, or establishing a virtual one, can help many achieve great things and has the potential to further innovation and collaboration by leaps and bounds.

And as enticing as that may be, the privacy implications of the metaverse cannot be an afterthought in its implementation. This would necessitate slowing technological advancement to ensure proper caution is exercised. Unfortunately, that may not be an option as the competition to become the arbiter of the metaverse heats up, with each company racing to claim its share of a multi-trillion dollar market.

Ethics in Data Management by Design

Ethics in Data Management by Design
Anonymous | July 7, 2022

External data has become an essential component of a company's data strategy, and data brokers are an integral part of the landscape. However, are policies and regulations able to keep up with the evolving data brokerage domain and data collection methods? Do companies have the same commitment to safeguarding external data as they do to protecting their internal data? And even if they are able to capitalize on some of the ambiguities surrounding data management, should they do so at the price of ethics?

Current state

Companies are increasingly exploiting data offered by data providers or obtained through various data subscriptions, and in many cases it has become an integral part of their data strategy. Enterprises utilize external data not only for marketing purposes, but also for a variety of internal use cases, such as ensuring the safety of the company's personnel and executives, identifying and mitigating reputational risks, and benchmarking company performance across a variety of dimensions, among others. In addition to data acquired via data brokers, many organizations also utilize open or public data.

Contractual responsibilities between the data provider and the consuming companies govern the use of external data to some extent. However, processing external data, merging disparate data sources, and augmenting external data with data received from internal or public sources pose a new set of challenges for the regulations and controls that companies must implement. While there is clear guidance from GDPR, the FTC, CalOPPA, and others, several of the above-mentioned domains still lack well-defined policies and remain ambiguous.

In addition to these formal constraints, there is also an opportunity to assess the ethical and behavioral implications of inter-company data management and consumption. Even if there is some ambiguity in data management rules, the question is whether we should expect businesses to hold themselves to a higher ethical standard and level of awareness. In many circumstances, companies' data privacy groups are primarily concerned with the management of personally identifiable information, with a focus on internal business data. Compliance checks undertaken by these groups are frequently perceived as overhead by delivery teams, and even as an obstacle or significant slowdown in project delivery. While many companies have incorporated privacy risk and impact assessments into their operations, as long as these reviews are perceived as an impediment and performance is measured and driven by time to market, the reviews could generate a false sense of security.

A brighter future?

As formal regulations evolve and hopefully provide clearer guidance on the data brokerage domain and companies' practices around external data management and consumption, it would be encouraging to see companies not only improve their internal practices to comply with those regulations but also drive more ethical data practices on their own. Some of these concepts ought to be integrated into workforce performance measurement in order to encourage the adoption of ethical data management and the behavioral and cultural shifts that follow. It would be great to see ethics built into design processes, and privacy by design cease to be an afterthought and become an integral component of enterprise data architecture guiding principles.

Wearable devices offer new opportunities and new challenges for drug discovery

Wearable devices offer new opportunities and new challenges for drug discovery
Katy Scott | July 7, 2022

Not just Watches
Health and wellness wearables, a category that extends well beyond smart watches, are expected to keep improving as a technology and to drive market growth in coming years (Loucks, Bucaille, Stewart, & Crossnan, 2021). The Food and Drug Administration has issued Emergency Use Authorizations for 6 wearable monitoring devices to protect health care worker safety against COVID-19 (Food and Drug Administration, 2021). Below is the placement diagram for one of those devices, a Vital Signs Monitoring System, which tracks electrocardiogram data and downloads it to a smartphone:

Placement for Vital Signs Monitoring System

Emerging Opportunities
With the increasing ubiquity and accuracy of wearable health monitors, it follows that investigators are exploring options to leverage them to improve clinical trials management. Between 35% and 65% of the cost of a clinical trial goes to site-focused costs (Serkaya, Wong, Jessup, & Beleche, 2016). A shift toward remote administration, using automated data collection and delivery through wearable devices, could drive substantial savings. Double-digit percentage reductions in development costs could have disruptive effects in the health care industry, making more therapies available to patients at costs they can afford.
In addition to cost savings, at-home biometrics monitoring may overcome barriers to access. For example, some patients decline to join or drop out of clinical trials due to the costs or inconvenience of traveling to a trial site (Thoma, Farrokhyar, McKnight, & Bandari, 2010). As well, some underserved communities enroll less often in trials due to mistrust of medical experiments (Institute of Medicine (US), 2012); they may feel more comfortable participating from home. Furthermore, some patients, like those suffering from a severe subgroup of Myalgic Encephalomyelitis, cannot make office visits without suffering harm (Centers for Disease Control and Prevention, 2019). At-home monitoring could make study of their disease and progress toward treatments possible.

Unique Challenges
The use of wearables data for clinical trials experiments presents unique challenges compared to a traditional clinical trial. Data integrity and security must be ensured throughout data storage, transmission, and retention. If the data pipeline relies on personally owned Wi-Fi networks and smartphones or tablets, a variety of security solutions would need to be established and maintained throughout trial operation. Reliance on personal infrastructure for data transmission could introduce bias against individuals or communities with less access to internet connectivity or smart devices.
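
As one small illustration of the integrity piece, a device or companion app could attach a keyed hash to each reading so tampering in transit or storage becomes detectable. This is only a sketch under assumed names and a made-up per-device key; a real trial pipeline would also need encryption, key management, and audit logging.

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"per-device secret provisioned at enrollment"  # hypothetical key

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC tag so tampering in transit or storage is detectable."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": reading, "hmac": tag}

def verify_reading(message: dict) -> bool:
    """Recompute the tag on receipt and compare in constant time."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["hmac"])

signed = sign_reading({"subject": "participant-042", "ecg_mv": [0.12, 0.34]})
assert verify_reading(signed)
```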

A wearables experiment may need to be designed differently from a traditional trial. For example, individual sensors would be used continuously per patient, amplifying potential measurement bias. As well, patients may have more visibility into their ongoing biometric readings, which could introduce placebo effects. Finally, the experiments may see greater attrition in subjects due to loss of devices. With measurement assets not under the control of the clinic, they risk damage, theft, or loss before completion of the trial. Investigators would need to plan for all of these effects in their experimental design.

A Smart Future
The healthcare industry is ready for the cost savings and equity in recruitment these devices can offer. As long as data security and experimental designs keep pace with the development of wearables, these fun and fashionable accessories could revolutionize medicine in coming years.

Smart devices

References

Centers for Disease Control and Prevention. (2019, November 19). Myalgic Encephalomyelitis/Chronic Fatigue Syndrome- Severely Affected Patients. Retrieved from Centers for Disease Control and Prevention: https://www.cdc.gov/me-cfs/healthcare-providers/clinical-care-patients-mecfs/severely-affected-patients.html

Food and Drug Administration. (2021, July 15). Remote or Wearable Patient Monitoring Devices EUAs. Retrieved from U.S. Food and Drug Administration: https://www.fda.gov/medical-devices/coronavirus-disease-2019-covid-19-emergency-use-authorizations-medical-devices/remote-or-wearable-patient-monitoring-devices-euas

Institute of Medicine (US). (2012). Public Engagement and Clinical Trials: New Models and Disruptive Technologies: Workshop Summary. (p. Working with Underserved Communities). Washington DC: National Academies Press (US).

Loucks, J., Bucaille, A., Stewart, D., & Crossnan, G. (2021, December 1). Wearable technology in health care: Getting better all the time. Retrieved from Deloitte Insights: https://www2.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2022/wearable-technology-healthcare.html

Serkaya, A., Wong, H.-H., Jessup, A., & Beleche, T. (2016, April). Key cost drivers of pharmaceutical clinical trials in the United States. Retrieved from NIH National Library of Medicine: https://pubmed.ncbi.nlm.nih.gov/26908540/

Thoma, A., Farrokhyar, F., McKnight, L., & Bandari, M. (2010, June). How to optimize patient recruitment. Retrieved from NIH National Library of Medicine: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2878987/

 

Tesla Insurance Driver Safety Score: Potentially Save a Quick Buck, but at What Cost?

Tesla Insurance Driver Safety Score: Potentially Save a Quick Buck, but at What Cost?
Ethan Nguonly | July 7, 2022

Teslas are well known for their state-of-the-art monitoring abilities, from location data and battery usage to tire pressure monitoring and Sentry Mode recording. Tesla also offers car insurance to its drivers at a discounted rate, and insured drivers have the option to enroll in real-time driver safety monitoring. Through this program, Tesla uses its built-in logging and monitoring system to evaluate how safely a driver drives and generates a Driver Safety Score from this data. Drivers start off with a safety score of 90, which can go up or down depending on the driver's behavior, such as how hard they brake, how sharply they turn, and so on.

The five safety factors used to evaluate a driver's safety score are Forward Collision Warnings per 1,000 Miles, Hard Braking, Aggressive Turning, Unsafe Following, and Forced Autopilot Disengagement. These metrics are combined to compute a Predicted Collision Frequency (PCF) score as follows:

The PCF is then converted into a 0 to 100 Safety Score using the following formula:

Safety Score = 115.382324 – 22.526504 * PCF
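
As a quick sketch of that conversion (the linear formula is Tesla's published one; clamping the result to the 0 to 100 range is my assumption, since the page describes the score as running from 0 to 100):

```python
def safety_score(pcf: float) -> float:
    """Convert Predicted Collision Frequency into the 0-100 Safety Score."""
    raw = 115.382324 - 22.526504 * pcf
    # Clamping to 0-100 is an assumption; Tesla publishes only the linear formula.
    return max(0.0, min(100.0, raw))

print(safety_score(0.68))  # ~100: very low predicted collision frequency
print(safety_score(1.13))  # ~90: roughly the score new drivers start with
```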

A driver's monthly insurance premium is then determined by their safety score for the previous month. This allows Tesla to offer discounts based on driving habits, incentivizing and rewarding users for driving safely. On the other hand, users who are deemed not to be safe drivers would have their insurance premiums increased. Tesla has estimated that drivers deemed "average" would save between 20% and 40% on their monthly premiums compared to competitor insurance offerings, while "good" drivers would save between 30% and 60%.

Other major car insurance companies have started offering telematics-based insurance rates as well. However, where Tesla beats its competitors is in its transparency about which metrics are used to evaluate drivers and how. While Tesla openly publishes its driver safety score formula, other auto insurance companies do not. Some argue that Tesla's transparency about how it evaluates drivers allows the system to be easily gamed, while others praise Tesla for being more open and informing drivers how their driving data will be used.

While lower insurance premiums and safer drivers sound like a win-win for all parties, this type of continual vehicle monitoring does raise serious concerns about user privacy. For example, what else is this data being used for? What if it were leaked or sold to a third party such as an employer, who could then decide not to hire an individual based on their driving behavior? Given the sophistication of Teslas, what other data is being collected beyond the metrics Tesla has openly outlined? Greater transparency and education about data collection, usage, and retention of vehicle telematics are needed as driver-based insurance programs become more widely available. For now, whether or not to opt into a real-time monitoring-based car insurance program is a personal choice. Let's just hope that the decision to opt in now does not have unexpected lasting consequences in the long run.

Tesla states that if you sell your vehicle, your previous driver safety score will not be used for a new Tesla vehicle that you purchase.

References

https://www.tesla.com/support/safety-score

https://www.vice.com/en/article/akvwge/tesla-drivers-say-they-can-easily-cheat-teslas-safety-score

https://electrek.co/2022/04/01/tesla-insurance-launches-driver-safety-score-in-california-educational-purposes/#:~:text=In%20October%2C%20Tesla%20finally%20launched,between%2030%25%20to%2060%25.

The Deceptive Appeal of Buy Now Pay Later

The Deceptive Appeal of Buy Now Pay Later
Anonymous | July 7, 2022

Buy Now, Pay Later (BNPL) companies like Affirm, Klarna, and Afterpay offer consumers the enticing option to pay for their online purchases in interest-free installments. The BNPL industry has grown rapidly in the past several years, accelerated by the increase in online shopping, and many new players are joining the scene, with Apple Pay Later set to launch in Fall 2022. Consumers can now split large costs into smaller, more manageable payments with the click of a button: you can use Affirm to purchase a laptop, buy a brand new wardrobe through Klarna, and even finance your groceries through Afterpay.

At the same time, having multiple options to pay over time can encourage consumers to spend impulsively and buy items they cannot afford; people are now buying everyday household items in installments. Unlike credit cards, the BNPL industry is largely unregulated. These services operate outside the legal definition of a loan product and are not subject to certain US consumer finance regulations (Nguyen, 2021). The terms vary by company, with some charging late fees but not interest, and some reporting to credit bureaus while others do not. For example, while Afterpay doesn't charge interest, it collected $64 million in late fees from users in the past 12 months (Fussell, 2021).

In order to understand the ethical challenges involved, we can apply the Belmont Principles. In terms of respect for persons, BNPL companies do not give users the information needed for truly informed consent. For one, the services are deceptively marketed as a payment option rather than the installment loans they are. There is also a major lack of transparency in the BNPL process. Klarna's website simply states, "Split the cost of your purchase into 4 interest-free payments, paid every 2 weeks. No interest. No catch." Affirm explains its process in three steps: 1) Go Shopping 2) Choose Your Payment Terms 3) Make Your Payments. Both fail to adequately explain whether and how a soft credit check is performed or what the consequences are of missing an installment or paying late. As a result, users are unaware of the full terms and conditions before agreeing to these installment loans.
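
As a concrete illustration of the pay-in-4 model Klarna describes, here is a small sketch that splits a purchase into four equal installments due every two weeks. The purchase amount and dates are made up, and real providers differ in how they round payments and charge late fees.

```python
from datetime import date, timedelta

def pay_in_four(total: float, first_due: date):
    """Split a purchase into 4 interest-free installments due every 2 weeks."""
    installment = round(total / 4, 2)
    schedule = [(first_due + timedelta(weeks=2 * i), installment) for i in range(4)]
    # Fold any rounding leftover into the final payment.
    schedule[-1] = (schedule[-1][0], round(total - installment * 3, 2))
    return schedule

for due, amount in pay_in_four(236.00, date(2022, 7, 7)):
    print(due, f"${amount:.2f}")
# Four $59.00 payments, two weeks apart. Miss one and, depending on the
# provider, a late fee may be added -- the "no catch" framing omits this.
```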

The principle of beneficence states that any research should aim to maximize possible benefits and minimize potential harms. While the industry does provide a service to users, offering a supposedly interest-free alternative to those without a credit card, the companies are, at the end of the day, profit-driven. Without regulation in place, BNPL companies are free to impose fees and apply tactics that encourage consumers to overspend and accumulate debt without consequences.

The third principle of justice advocates for fair treatment for all. Research has found that BNPL users consist mostly of younger consumers, as well as those with low incomes. At the same time, BNPL is "heavily marketed [by] influencers and brands on TikTok and Instagram" (Bote, 2022). There are currently no safeguards in place for children or younger users, even though they are already vulnerable, with little credit history and limited financial literacy. This leaves the younger generation susceptible to the debt dangers involved with BNPL services. Users can easily open multiple BNPL lines to pay for purchases, as opposed to the more involved process of applying for a credit card, getting approved, and then being able to make purchases.

While BNPL services show no signs of slowing down, governments have finally taken notice and are beginning to take steps toward change. In November 2021, the House Financial Services Committee held a hearing where consumer advocates called for "tighter regulation and more data on how often users default, the potential long-term impact on credit scores, and tighter rules around credit approval" (Fussell, 2022). The Consumer Financial Protection Bureau (CFPB) recently issued a series of orders to five companies to collect information on the risks and benefits of these products. As the industry continues to grow, governments need to take action to safeguard consumers and prevent continued overspending and accumulation of debt.