The Appeal and the Dangers of Digital ID for Refugees Surveillance
By Joshua Noble | October 29, 2021

Digitization of national identity is growing in popularity as governments across the world seek to modernize access to services and streamline their own data stores. Refugees, especially those coming from war-torn areas who have had to flee at short notice with few belongings or who have made dangerous and arduous journeys, often lack any form of ID. Governments are often unwilling to provide ID to the stateless, since they frequently have not yet determined whether they will allow a displaced person to stay, and in some cases the stateless person may not want to stay in that country. Many agencies are beginning to explore non-state digital ID as a way of providing some identity to stateless persons, among them the UNHCR, the Red Cross, the International Rescue Committee, and the UN Migration Agency. For instance, a UNHCR press release states: “UNHCR is currently rolling out its Population Registration and Identity Management EcoSystem (PRIMES), which includes state of the art biometrics.”

The need for a way for a stateless person to identify themselves is made all the more urgent by approaches that governments have begun to take to identifying refugees. Governments are increasingly using migrants’ electronic devices as verification tools. This practice is made easier with the use of mobile extraction tools, which allow an individual to download key data from a smartphone, including contacts, call data, text messages, stored files, location information, and more. In 2018, the Austrian government approved a law forcing asylum seekers to hand over their phones so authorities could check their origin, with the aim of determining if their asylum request should be invalidated if they were found to have previously entered another EU country.

NGO-provided ID initiatives may convince governments to abandon or curtail these highly privacy-invasive strategies. But while the intention of these initiatives is often charitable, seeking to provide assistance to refugees, they share a challenge common to many attempts to uniquely identify persons: access to services is often tied to the creation of the ID itself. For a person who is stateless, homeless, and in need of aid, arriving in a new country and shuttled to a camp, this can feel like coercion. There is an absence of informed consent on the part of refugees. The agencies creating these data subjects often fail to adequately educate them on what data is being collected and how it will be stored. Once data is collected, refugees face extensive bureaucratic hurdles if they want to change or update it. The agencies collecting the data offer little transparency around how it is stored, used, and offered, and most importantly, with whom it might be shared both inside and outside of the collecting organizations.

Recently, as NGOs and aid agencies fled Afghanistan following the US military's withdrawal, thousands of Afghans who had worked with those organizations began to worry that biometric databases and their own digital history might be used by the Taliban to track and target them. In another example of the risks of biometric data, the UNHCR shared information on Rohingya refugees with the government of Bangladesh. The Bangladeshi government then sent that same data to Myanmar to verify people for possible repatriation. Both of these cases illustrate the real and present risk that creating and storing biometric data and IDs can pose.

While the need for ID and the benefits that it can provide are both valid concerns, the challenge of ad hoc and temporary institutions providing those IDs and collecting and storing data associated with them presents not only privacy risks to refugees but often real and present physical danger as well.

UNHCR. 2018. “UNHCR Strategy on Digital Identity and Inclusion” [https://www.unhcr.org/blogs/wp-content/uploads/sites/48/2018/03/2018-02-Digital-Identity_02.pdf](https://www.unhcr.org/blogs/wp-content/uploads/sites/48/2018/03/2018-02-Digital-Identity_02.pdf)

IOM & APSCA. 2018. 5th border management and identity conference (BMIC) on technical cooperation and capacity building. Bangkok: BMIC. [http://cb4ibm.iom.int/bmic5/assets/documents/5BMIC-Information-Brochure.pdf](http://cb4ibm.iom.int/bmic5/assets/documents/5BMIC-Information-Brochure.pdf).

Red Cross 510. 2018. An Initiative of the Netherlands Red Cross Is Exploring the Use of Self Managed Identity in Humanitarian Aid with Tykn.Tech. [https://www.510.global/510-x-tykn-press-release/](https://www.510.global/510-x-tykn-press-release/)

UNHCR. 2018. Bridging the identity divide – is portable user-centric identity management the answer? [https://www.unhcr.org/blogs/bridging-identity-divide-portable-user-centric-identity-management-answer/](https://www.unhcr.org/blogs/bridging-identity-divide-portable-user-centric-identity-management-answer/)

Data&Society 2020, “Digital Identity in the Migration & Refugee Context” [https://datasociety.net/wp-content/uploads/2019/04/DataSociety_DigitalIdentity.pdf](https://datasociety.net/wp-content/uploads/2019/04/DataSociety_DigitalIdentity.pdf)

India’s National Health ID – Losing Privacy with Consent
By Anonymous | October 29, 2021

Source: Ayushman Bharat Digital Mission (ABDM)

“Every Indian will be given a Health ID,” Prime Minister Narendra Modi promised on India’s Independence Day this year, adding, “This Health ID will work like a health account for every Indian. Your every test, every disease – which doctor, which medicine you took, what diagnosis was there, when they were taken, what was their report – all this information will be included in your Health ID.”[1] The 14-digit Health ID will be linked to a health data consent manager, used to seek patients’ consent for connecting and sharing health information across healthcare facilities (hospitals, laboratories, insurance companies, online pharmacies, telemedicine firms).

Source: Ayushman Bharat Digital Mission (ABDM)

Technology Is The Answer, But What Was The Question?
India’s leadership of the landmark World Health Organization (WHO) resolution on digital health has been recognized globally. With a growing population widening the gap between the number of health-care professionals and patients (0.7 doctors per 1,000 patients[3]), and with the increasing cost of health care, investing in technology to enable health-care delivery seems to be the approach to leapfrog public health in India. The National Digital Health Mission (NDHM) is India’s first big step in improving its health care system and a move towards universal health coverage.

PM Modi says, “This mission will play a big role in overcoming problems faced by the poor and middle class in accessing treatment.”[4] It aims to digitize medical treatment facilities by connecting millions of hospitals. The Health ID will be free of cost and completely voluntary. Citizens will be able to manage their records in a private, secure, and confidential environment. Analysis of population health data should lead to better planning, budgeting, and implementation for states and health programmes, helping save costs and improve treatment. But for all its good intentions, this hasty rush to do something may be disconnected from ground reality, and challenges abound.

Source: Ayushman Bharat Digital Mission (ABDM)

Consent May Not Be The Right Way To Handle Data Privacy Issues
Let’s start with ‘voluntary’ consent. The government might be playing a digital sleight of hand here. Earlier this month, the Supreme Court of India issued notices to the Government seeking removal of the requirement for a National ID (Aadhaar) from the government’s CoWin app, which is used to schedule COVID vaccine appointments. For registration, Aadhaar is voluntary (you can use a driver’s license), but the app makes Aadhaar required to generate a certificate[5]. You might be wondering what the National ID has to do with the National Digital Health ID. During the launch of the National Digital Health ID, the government automatically created Health IDs for individuals who had used the National ID to schedule a vaccine appointment: 122 million (approximately 98%) of the 124 million IDs generated have been for people registered on CoWin. Most vaccine recipients were not aware that their unique Health ID had been generated[6].

Then there is the issue of ‘forced’ consent. Each year, 63 million Indians are pushed into poverty by healthcare costs[7] – two citizens every second – and 50% of the population lives in poverty (under USD 3.10 per day). One of the stated benefits of the Health ID is that it will be used to determine the distribution of benefits under the Government’s health welfare schemes. So if you depend on Government schemes, or hope to participate in them, you have to create a Health ID and link it with the National ID. As Amulya Nidhi of the non-profit People’s Health Movement puts it, “People’s vulnerability while seeking health services may be misused to get consent. Informed consent is a real issue when people are poor, illiterate or desperate.”[8]

Source: Ayushman Bharat Digital Mission (ABDM)

Good Digital Data Privacy Is Hard To Get Right
Finally, there is the matter of ‘privacy regulation’: the NDHM depends on a Personal Data Protection Bill (PDP) that would overhaul the outdated Information Technology Act 2000. After two years of deliberation the PDP has yet to be passed, and 124 million Health IDs have already been generated. Moreover, principles such as qualified consent and specific user rights have no legal precedent in India[9]. In its haste, the Government has moved forward without a robust legal framework to protect health data. And without a data protection law or an independent data protection authority, there are few safeguards and no recourse when rights are violated.

The lack of a PDP could lead to misuse of data by private firms and bad actors. An insurance agency might choose to grant coverage only to customers willing to link their Health IDs and share digitised records. Similarly, insurers may offer incentives to those who share medical history and financial details for customised insurance premium plans[10], or even reject applications and push up premiums for those with pre-existing medical conditions. If insurance firms, hospitals, and the like demand Health IDs, the ID will become mandatory in practice, even if not required by law.

The New Normal: It’s All Smoke and Mirrors
In closing, medical data can lead to better planning, cost optimization, and implementation for health programs. But without a robust legal framework, the regulatory gap poses implementation challenges for a National Digital Health ID. Moreover, the government has to rein in intimidatory data collection practices; otherwise people will have no choice but to consent in order to access essential resources to which they are entitled. Lastly, as the GDPR explains, consent must be freely given, specific, informed, and an unambiguous indication of the data subject’s wishes. The Government of India needs to decouple initiatives and remove any smoke and mirrors, so people are clearly informed about what they are agreeing to in each case. In the absence of such efforts, there will be one added ‘new normal’ for India – losing privacy with consent.

References:
1. Mehrotra Karishma (2020). PM Announces Health ID for Every Indian. The Indian Express. Accessed on October 25, 2021 from: indianexpress.com/article/india/narendra-modi-health-id-coronavirus-independence-day-address-6556559/
2. Bertalan Mesko et al (2017). Digital Health is a Cultural Transformation of Traditional Healthcare. Mhealth. Accessed on October 25, 2021 from: www.ncbi.nlm.nih.gov/pmc/articles/PMC5682364/
3. Anup Karan et al (2021). Size, composition and distribution of health workforce in India. Human Resources for Health. Accessed on October 25, 2021 from: human-resources-health.biomedcentral.com/articles/10.1186/s12960-021-00575-2
4. Kaunain Sheriff (2021). PM Modi launches Ayushman Bharat Digital Mission. The Indian Express. Accessed on October 25, 2021 from: indianexpress.com/article/india/narendra-modi-pradhan-mantri-ayushman-bharat-digital-health-mission-7536669/
5. Ashlin Mathew (2021). Modi government issuing national health ID stealthily without informed consent. National Herald. Accessed on October 25, 2021 from: www.nationalheraldindia.com/india/modi-government-issuing-national-health-id-stealthily-without-informed-consent
6. Regina Mihindukulasuriya (2021). By 2025, rural India will likely have more internet users than urban India. ThePrint. Accessed on October 25, 2021 from: theprint.in/tech/by-2025-rural-india-will-likely-have-more-internet-users-than-urban-india/671024/
7. Vidhi Doshi (2018). India is rolling out a health-care plan for half a billion people. But are there enough doctors? Washington Post. Accessed on October 25, 2021 from: www.washingtonpost.com/world/2018/08/14/india-is-rolling-out-healthcare-plan-half-billion-people-are-there-enough-doctors/
8. Rina Chandran (2020). Privacy concerns as India pushes digital health plan, ID. Reuters. Accessed on October 25, 2021 from: www.reuters.com/article/us-india-health-tech/privacy-concerns-as-india-pushes-digital-health-plan-id-idUSKCN26D00B
9. Shahana Chatterji et al (2021). Balancing privacy concerns under India’s Integrated Unique Health ID. The Hindu. Accessed on October 25, 2021 from: www.thehindubusinessline.com/opinion/balancing-privacy-concerns-under-indias-integrated-unique-health-id/article36760885.ece
10. Mithun MK (2021). How the Health ID may impact insurance for patients with pre-existing conditions. The News Minute. Accessed on October 25, 2021 from: www.thenewsminute.com/article/how-health-id-may-impact-insurance-patients-pre-existing-conditions-156306

Social Media Analytics for Security: Freedom of Speech vs. Government Surveillance
By Nitin Pillai | October 29, 2021

Introduction

The U.S. Department of Homeland Security’s (DHS) U.S. Customs and Border Protection (CBP) takes steps to ensure the safety of its facilities and personnel from natural disasters, threats of violence, and other harmful events and activities. To aid these efforts, CBP personnel monitor publicly available social media to maintain situational awareness and to track potential threats or dangers to CBP personnel and facility operators. CBP may collect publicly available information posted on social media sites to create reports and disseminate information related to personnel and facility safety. CBP conducted a Privacy Impact Assessment (PIA) because, as part of this initiative, it may incidentally collect, maintain, and disseminate personally identifiable information (PII) over the course of these activities.

Social Media Surveillance’s impact on Privacy

Social Media Surveillance

The Privacy Impact Assessment (PIA) states that CBP searches public social media posts to bolster the agency’s “situational awareness”—which includes identifying “natural disasters, threats of violence, and other harmful events and activities” that may threaten the safety of CBP personnel or facilities, including ports of entry. The PIA aims to inform the public of privacy and related free speech risks associated with CBP’s collection of personally identifiable information (PII) when monitoring social media. CBP claims it only collects PII associated with social media—including a person’s name, social media username, address or approximate location, and publicly available phone number, email address, or other contact information—when “there is an imminent threat of loss of life, serious bodily harm, or credible threats to facilities or systems.”

Chilling Effect on Free Speech
CBP’s social media surveillance poses a risk to the free expression rights of social media users. The PIA claims that CBP only monitors public social media posts, and thus individuals retain the right and ability to refrain from making information public or to remove previously posted information from their social media accounts. While social media users retain control of their privacy settings, CBP’s policy chills free speech by causing people to self-censor, including by not expressing their opinions publicly on the Internet for fear that CBP could collect their PII for discussing a topic of interest to CBP. Additionally, people running anonymous social media accounts might fear that collected PII could unmask their true identities. This chilling effect is made worse by the fact that CBP does not notify users when their PII is collected. CBP also may share information with other law enforcement agencies, which could result in immigration consequences or being added to a government watchlist.

CBP’s Practices Don’t Mitigate Risks to Free Speech
The PIA claims that any negative impacts on free speech of social media surveillance are mitigated by both CBP policy and the Privacy Act’s prohibition on maintaining records of First Amendment activity. Yet, these supposed safeguards ultimately provide little protection.

Social Network Analysis

Collecting information in emergency situations to ensure public safety is undoubtedly important, but CBP collects vast amounts of irrelevant information – far beyond what emergency awareness would require – by amassing all social media posts that match designated keywords. Additionally, CBP agents may use “situational awareness” information for “link analysis,” that is, identifying possible associations among data points, people, groups, events, and investigations. While that kind of analysis could be useful for uncovering criminal networks, in the hands of an agency that categorizes protests and immigration advocacy as dangerous, it may be used to track activist groups and political protesters, as the sketch below illustrates.
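To make the idea of link analysis concrete, here is a minimal, hypothetical sketch in Python using the networkx library: it builds a graph of which accounts mention one another in public posts, then ranks accounts by how connected they are. The accounts and interactions are invented, and this illustrates the general technique only, not CBP's actual tooling.

```python
# Hypothetical sketch of "link analysis" over public posts.
# Accounts and interactions are invented; this is not CBP's tooling.
import networkx as nx

# (author, mentioned_account) pairs harvested from imaginary public posts
interactions = [
    ("org_a", "activist_1"), ("org_a", "activist_2"),
    ("activist_1", "activist_2"), ("activist_2", "journalist_1"),
    ("bystander_1", "journalist_1"),
]

G = nx.Graph()
G.add_edges_from(interactions)

# Degree centrality surfaces the most-connected accounts -- note that
# mere association, not wrongdoing, is what floats people to the top.
for account, score in sorted(nx.degree_centrality(G).items(),
                             key=lambda kv: -kv[1]):
    print(f"{account}: {score:.2f}")
```

The point of the sketch is that such a graph has no notion of intent: a bystander who tags a journalist lands in the same structure as everyone else.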

Conclusion

Some argue that society must “balance” freedom and safety – that to better protect ourselves from those who would do us harm, we have to give up some of our liberties. In many areas this is a false choice. Especially in the world of data analysis, liberty does not have to be sacrificed to enhance security.

Freedom of speech is a critical stitch in the fabric of democracy. The public needs to know more about how agencies are gathering our data, what they’re doing with it, any policies that govern this surveillance, and the tools agencies use, including algorithmic surveillance and machine learning techniques. A single Facebook post or tweet may be all it takes to place someone on a watchlist, with effects that can range from repeated, invasive screening at airports to detention and questioning in the United States or abroad.

Our government should be fostering, not undermining, our ability to maintain obscurity in our online personas, for multiple reasons, including individual privacy, security, and consumer protection.

References:

1. Privacy Impact Assessment for Publicly Available Social Media Monitoring and Situational Awareness Initiative – DHS/CBP/PIA-058
www.dhs.gov/sites/default/files/publications/privacy-pia-cbp58-socialmedia-march2019.pdf
2. CBP’s New Social Media Surveillance: A Threat to Free Speech and Privacy
3. We’re demanding the government come clean on surveillance of social media
www.aclu.org/blog/privacy-technology/internet-privacy/were-demanding-government-come-clean-surveillance-social

Time flies when ISPs are having fun
By Anonymous | October 29, 2021

More than four years have passed since the US Congress repealed FCC rules bringing essential privacy protections to ISP consumers. This matter affects millions of Americans, and measures need to be taken so that consumers are not left to fend for themselves, at big corporations’ mercy, while accessing the Internet.

**What Happened?**

In March 2017, as the country transitioned from Obama’s second term to newly elected President Trump, the US Congress, without much alarm, repealed regulation providing citizens with privacy protections when using ISP and broadband services. The regulation’s main aim was to inhibit ISPs’ appetite to freely collect, aggregate, and sell consumer data, including web browsing history.

The repeal was a massive victory for ISPs such as Verizon, Comcast, and AT&T, and a blow to consumers’ privacy rights. Not only was the “wild west” privacy status quo maintained, but the repeal also barred the FCC from submitting any similar regulations in the future (!).

The main argument for the repeal was that the FTC has traditionally been the agency regulating corporate and business privacy affairs. It was also argued that by regulating ISPs, the FCC would put them at a disadvantage compared to FTC-regulated web services such as Google, Apple, and Yahoo. Never mind that the ISP business model is based on charging for access and bandwidth, not monetization via data brokerage or advertising services. And never mind that the FCC’s newly appointed chair, Ajit Pai, who recommended voting against his own regulatory agency, was a former lawyer for Verizon.[1]

So four years have passed, and the FTC has not issued, nor is it expected to issue, any robust privacy regulatory framework for ISPs. Consumers are left in privacy limbo, and states are scrambling to pass related laws [2]. How bad is it, and what can be done?

**What can ISPs see?**

The Internet – a network of networks – is an open architecture of technologies and services, where information flows through its participating nodes in little virtual envelopes called “packets”. Every information-carrying packet passing through one of the network’s edge devices (known as routers) can be inspected, revealing its source address, destination address, and information content (known as the payload).

Since the ISP is your first node when entering the Internet (also known as the default gateway), it has a prime opportunity to collect data about everything sent or received by a household. This complete visibility is mitigated only by the use of encryption, which prevents any node except the sender and receiver from seeing a packet’s contents. As long as encryption is used (think of HTTPS, for example), the payload is not visible to ISPs.

The good news is that encryption is becoming more pervasive across the Internet. As of early 2021, about 90% of internet traffic is encrypted, and the trend is still upward.

But even with encryption present, ISPs can collect a lot of information. ISPs have to route your packets, after all, so they know exactly with whom you are communicating, along with how many packets are being exchanged and their timestamps. An ISP can easily deduce when one is, for example, watching Netflix movies, despite the communication with Netflix being encrypted. The sketch below shows how far metadata alone can go.
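To see how much metadata alone reveals, here is a minimal sketch using invented data: flow records of the kind an ISP sees (destination, bytes, duration), with destinations mapped to known services. The IP addresses come from documentation-reserved ranges and the service map is an assumption, but the shape of the inference is realistic.

```python
# Minimal sketch: inferring activity from flow metadata alone, with
# invented flow records. Payloads are never touched.
flows = [  # (dest_ip, total_bytes, duration_seconds)
    ("198.51.100.7", 3_200_000_000, 5400),  # large, long-lived flow
    ("203.0.113.9", 45_000, 12),            # small, short burst
]

# Assumed mapping of destinations to services (e.g., built from public
# IP-range registries); real ISPs can assemble far richer versions.
known_services = {"198.51.100.7": "video streaming CDN",
                  "203.0.113.9": "news site"}

for ip, nbytes, secs in flows:
    service = known_services.get(ip, "unknown")
    rate_mbps = nbytes * 8 / secs / 1e6
    guess = ("likely video streaming"
             if rate_mbps > 3 and secs > 600 else "ordinary browsing")
    print(f"{ip} ({service}): {rate_mbps:.1f} Mbps over {secs}s -> {guess}")
```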

In addition to transporting the information packets themselves, ISPs have another avenue for collecting data: the Domain Name System (DNS). Every time one visits a domain (say, by browsing to [www.nyt.com](http://www.nyt.com)), the translation of that domain into routable IP addresses is visible to the ISP, either because the ISP provides the DNS service (usually the default setting) or by examining DNS traffic (port 53). ISPs can easily collect significant web browsing data in this fashion.

Beyond what ISPs are known to use, other technologies could be brought to bear. ISPs could employ techniques such as sophisticated traffic fingerprinting [3], in extreme cases even deep packet inspection, or other nefarious mechanisms such as Verizon’s infamous X-UIDH header [4]. Fingerprinting is, for example, how ISPs were supposed to detect movies being shared illegally via torrent streams, a failed imposition by the Recording Industry Association of America (RIAA) [5]. While it is speculative that ISPs are resorting to such technologies, abuses by ISPs have occurred in the past, so without specific regulations the potential danger remains.

**So what can you do?**

Since our legislators have failed to protect us, some do-it-yourself work is needed, and some of these actions require a good level of caution.

Opt-in consent was one of the most important FCC provisions repealed in 2017, so consumers must now explicitly opt out of data collection and sharing directly with their ISP.

Another measure is to configure your home router (or each individual device) to stop using the ISP as its DNS server, and to encrypt DNS traffic. One needs to be careful selecting a DNS provider here; otherwise you are exposed to the same privacy risks with a different party. Make sure you select a DNS service with a good privacy policy. For example, the privacy policy for Cloudflare’s DNS (server “1.1.1.1”) can be found here: developers.cloudflare.com/1.1.1.1/privacy/public-dns-resolver

Setting up private DNS on an Android device. Credit: Cloudflare
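For the curious, here is a short sketch of what an encrypted DNS lookup can look like in code, using Cloudflare's public DNS-over-HTTPS JSON endpoint. The query and its answer travel inside TLS rather than as plaintext port-53 traffic the ISP can read.

```python
# Sketch: resolving a name via DNS-over-HTTPS (DoH) instead of the
# ISP's plaintext resolver. Uses Cloudflare's public JSON API.
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "www.nyt.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=10,
)
resp.raise_for_status()

# Print the A records returned for the queried name.
for answer in resp.json().get("Answer", []):
    print(answer["name"], "->", answer["data"])
```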

For a complete “cloak” of your traffic, making it virtually invisible to the ISP, one can use a VPN service. These services make internet traffic extremely difficult for your ISP to analyze: except for volumetrics, the ISP will not have much information about your traffic. The drawback is that the VPN provider, in turn, can see all your traffic, just as the ISP could. So one has to be EXTREMELY diligent in selecting this type of service. Some of these providers are incorporated abroad in countries with lax regulations, with varying degrees of privacy assurance. For example, the vendor NordVPN is incorporated and regulated in Panama, while ExpressVPN has its privacy practices independently audited by the renowned firm PwC.

Last but most importantly, contact your representative and voice your concern about the current state of ISP privacy. As things stand, the FCC has its arms tied by Congress, and the FTC has done very little to protect consumer privacy. With the midterm elections approaching, this is a good time to make your voice heard. Your representative, along with ways to contact them, can be found here: www.house.gov/representatives/find-your-representative

References:

[1] www.reuters.com/article/us-usa-internet-trump-idUSKBN1752PR

[2] www.ncsl.org/research/telecommunications-and-information-technology/2019-privacy-legislation-related-to-internet-service-providers.aspx

[3] www.ndss-symposium.org/wp-content/uploads/2017/09/website-fingerprinting-internet-scale.pdf

[4] www.eff.org/deeplinks/2014/11/verizon-x-uidh

[5] www.pcworld.com/article/516230/article-4652.html

The crumbs we leave…
By Khakali Olenja  | October 8, 2021

Background 

In 1994, an engineer at Netscape named Lou Montulli created the web’s first cookie. Cookies are small text files that reside on a user’s computer and store information from the websites the user visits. Montulli noticed that the web lacked a mechanism for short-term memory: a user who added items to a shopping cart would lose them upon opening another tab, and a user who logged into an email account would have to log in again after refreshing the page. Cookies in their purest form were designed to enhance the user experience, and without them we would not know the internet as we do today.

How Cookies are used today 

While it is true that cookies are an integral part of the web experience, it is also true that they have been repurposed well beyond their initial intent. Brands want to reach the individuals most likely to convert into customers. To target these customers, internet companies have built multibillion-dollar business models that use cookies to connect advertisers and customers.

According to a report published by the Statista Research Department, worldwide digital advertising spend amounted to approximately $378 billion and is estimated to exceed $645 billion by 2024.

When a user accesses a website, a first-party cookie is created. First-party cookies can store attributes (e.g., location, cart contents, time spent, username, password). Brands use these first-party cookies to continue showing their ads on other websites through platforms and publishers. Platforms and publishers are companies (e.g., YouTube, Facebook, Snapchat, Google) with audiences of people they can connect brands with via ads. In between, there are also middlemen dedicated to ensuring that brands’ ads reach the right people; companies like Facebook and Google serve both roles because of their scale.

As a result of the money to be made from ads, platforms, publishers, and middlemen are all incentivized to collaborate with one another, which means more cookies are generated than just those of the site a user is on. These additional cookies are called third-party cookies. With third-party cookies, brands can go to Facebook or Google and request that their ads be shown to users who visited the brand’s site a month ago. The sketch below shows the mechanics at the header level.
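To see what this looks like at the HTTP level, here is a small sketch using Python's standard http.cookies module to build the Set-Cookie headers involved. The cookie names and values are hypothetical; the point is that a cookie meant to ride along in third-party contexts (an embedded tracking pixel, say) must be marked SameSite=None and Secure in modern browsers, while a first-party cookie needs nothing special.

```python
# Sketch of the Set-Cookie headers behind first- vs third-party
# cookies. Names and values are hypothetical. (SameSite support in
# http.cookies requires Python 3.8+.)
from http.cookies import SimpleCookie

# First-party cookie: set by the site you are visiting.
first_party = SimpleCookie()
first_party["cart"] = "item42"
first_party["cart"]["path"] = "/"

# Third-party cookie: set by a tracker embedded on someone else's
# site, so it must opt in to cross-site sending.
third_party = SimpleCookie()
third_party["tracker_id"] = "abc123"
third_party["tracker_id"]["samesite"] = "None"
third_party["tracker_id"]["secure"] = True
third_party["tracker_id"]["max-age"] = 60 * 60 * 24 * 365  # one year

print(first_party.output())
print(third_party.output())
```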

The Impact on Privacy 

While some might consider ads nothing more than a nuisance at worst and a welcome convenience at best, third-party cookies let companies circumvent consent and conduct surveillance at scale without users being privy to it. Some technology companies have rolled out features that allow users to block third-party cookies, which would prevent companies (e.g., Google, Apple) from identifying that a user on site A is the same user on site B. The problem is that companies like Google and Facebook are incentivized to find legal loopholes in existing policies. Facebook and Google can provide websites with a piece of code that looks like a first-party cookie but sends the data back to the company anyway (e.g., the Facebook Pixel).

Regulation has historically had trouble keeping pace with business, and with the technology sector more broadly. While I believe technology companies are generally well-intentioned, it is abundantly clear that capital incentives make it materially difficult for them to self-regulate.

The Federal Trade Commission should establish rules that place guardrails on how customer data is obtained and utilized beyond the terms and conditions. Open-source projects like Cookiedatabase.org – a project to bring more transparency to the world of online tracking and data collection – should be referenced to help draft such rules.

References: 

www.digitaltrends.com/computing/history-of-cookies-and-effect-on-privacy/ 

www.vox.com/open-sourced/2020/2/3/21116801/ads-internet-sites-cookies 

www.statista.com/statistics/237974/online-advertising-spending-worldwide/ 

cookiedatabase.org/ 

Unreadable Terms and Conditions
By Anonymous | October 8, 2021

Image 1. Visualization showing the time needed for an average user to read the terms and conditions of various platforms (LePan, 2021).

Like many others, I visit at least 10 different websites on any given day, which usually means agreeing to the “terms and conditions” and accepting “cookies” on those websites without even glancing at them. Like most people, I don’t have the time to read them, and I naively assume companies won’t use my data in a harmful way. In reality, however, the websites I give permission to might be sharing my information with insurance companies, which could use it to determine my health risk and increase my monthly premiums (Vedantam, 2016).

Truth be told, nearly all of the “terms and conditions” and privacy policies we involuntarily agree to are filled with legal jargon and are extremely difficult for the general public to understand. As noted in the journal article “The Duty to Read the Unreadable,” the majority of people do not understand terms and conditions because of the legal language used, and businesses do this on purpose (Benoliel, 2019). In addition to the nearly incomprehensible language, terms and conditions are extremely long, with word counts that can exceed 10,000. Because they are so long and hard to understand, one study found that only 1 percent of people read them (Sandle, 2020). Although companies are legally required to post “terms and conditions” on their websites, they are not required to simplify the language used. A rough readability calculation below shows just how far this language falls below ordinary prose.
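To put a rough number on “unreadable,” here is a sketch that scores a clause of invented legalese with the Flesch Reading Ease formula, a standard readability metric (higher is easier; plain English sits around 60-70). The syllable counter is a crude vowel-group heuristic, but it is enough to show the gap.

```python
# Rough sketch: Flesch Reading Ease for a clause of invented legalese.
# Formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835 - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

legalese = ("Notwithstanding anything to the contrary herein, the "
            "licensee irrevocably waives any entitlement to "
            "consequential remuneration.")
print(round(flesch_reading_ease(legalese), 1))  # far below readable range
```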

According to the Belmont principles, this violates “respect for persons,” as consumers have no real choice but to click the “agree” button for terms and conditions and cookie policies (OHRP, 2021). They are essentially agreeing to the terms without giving informed consent. One could argue that people can choose to disagree with the terms and conditions, but in that case they will not be granted access to the platform or service they are trying to use. This is harmful to people’s privacy and amounts to a form of intrusion, as companies do everything they can to make the terms and conditions lengthy and incomprehensible. So, at the end of the day, users do not have much of a choice but to involuntarily “agree” to the terms.

As people value their data privacy more each day, services like “Terms of Service; Didn’t Read” now summarize the important information in bullet points for users. Although terms and conditions should be written in simpler language, unless there is enforcement by legal authorities I do not see companies simplifying their language for the average user. Services like tosdr.org will therefore only grow in importance as tools that help consumers understand the main points of the terms they sign. This is not a permanent solution, however, as it still requires considerable effort and time from users.

Image 2. People often do not know what they “agree” to when they click “accept” on terms and conditions.

REFERENCES:
* Benoliel, U., Becher, S. I. (2019). The Duty to Read the Unreadable. SSRN Electronic Journal. doi.org/10.2139/ssrn.3313837.
* Cakebread, C. (2017, November 15). You’re not alone, no one reads terms of service agreements. Business Insider. www.businessinsider.com/deloitte-study-91-percent-agree-terms-of-service-without-reading-2017-11.
* Frontpage — terms of service; didn’t read. (n.d.). Retrieved October 8, 2021, from tosdr.org/.
* LePan, N. (2021, January 25). Visualizing the length of the fine print, for 14 popular apps. Visual Capitalist. Retrieved October 8, 2021, from www.visualcapitalist.com/terms-of-service-visualizing-the-length-of-internet-agreements/.
* Most online ‘terms of service’ are incomprehensible to adults, study finds. VICE. (n.d.). Retrieved October 8, 2021, from www.vice.com/en/article/xwbg7j/online-contract-terms-of-service-are-incomprehensible-to-adults-study-finds.
* Office for Human Research Protections (OHRP). (2021, June 16). Read the Belmont Report. HHS.gov. Retrieved October 8, 2021, from www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html#xrespect.
* Sandle, B. D. T. (2020, January 29). Report finds only 1 percent reads ‘terms & conditions’. Digital Journal. Retrieved October 8, 2021, from www.digitaljournal.com/business/report-finds-only-1-percent-reads-terms-conditions/article/566127.
* Vedantam, S. (2016, August 23). Do you read terms of service contracts? not many do, research shows. NPR. Retrieved October 8, 2021, from www.npr.org/2016/08/23/491024846/do-you-read-terms-of-service-contracts-not-many-do-research-shows.

Risk Governance as a Path Towards Accountability in Machine Learning
By Anonymous | October 8, 2021

Over the past five years there has been a growing conversation in the public sphere about the impact of machine learning (ML) – systems that learn from historical examples rather than being hard-coded with rules – on society and individuals. Much of the coverage has focused on issues of bias in these systems – the propensity for social media feeds, news feeds, facial recognition and recommendation systems (like those that power YouTube and TikTok) to disproportionately harm historically marginalized or protected groups. Examples range from [categorizing African Americans as “gorillas”](https://www.theverge.com/2015/7/1/8880363/google-apologizes-photos-app-tags-two-black-people-gorillas), to [denying them bail at higher rates](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing) than comparable white offenders, to [demonetizing LGBTQ+ content on YouTube](https://www.vox.com/culture/2019/10/10/20893258/youtube-lgbtq-censorship-demonetization-nerd-city-algorithm-report) ostensibly based on benign word choices in video descriptions. In the US, concern has also grown around the use of these systems by social media sites to [spread misinformation and radicalizing content](https://www.cbsnews.com/news/facebook-whistleblower-frances-haugen-misinformation-public-60-minutes-2021-10-03/) on their platforms, and the [safety of self-driving cars](https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html) continues to be of concern.

Google Search trend for ‘algorithmic bias’ from 2000 to 2021.

Along with this swell of public awareness has emerged a growing chorus of voices (such as [Joy Buolamwini](https://www.media.mit.edu/people/joyab/overview/), [Sandra Wachter](https://www.oii.ox.ac.uk/people/sandra-wachter/) and [Margaret Mitchell](http://m-mitchell.com/)) advocating for fairness, transparency and accountability in the use of machine learning. Corporations appear to be starting to move in this direction as well, though not without [false starts](https://www.bloomberg.com/news/articles/2021-02-18/google-to-reorganize-ai-teams-in-wake-of-researcher-s-departure), controversy and a lack of clarity on how to operationalize their often lofty, well-publicized AI principles.

From one corner of these conversations an interesting thought has begun to emerge: that these problems are [neither new, nor novel to ML](https://towardsdatascience.com/the-present-and-future-of-ai-regulation-afb889a562b7). And, in fact, institutions already have a well-honed tool to help them navigate this space in the form of organizational risk governance practices. Risk governance encompasses the “…institutions, rules, conventions, processes and mechanisms by which decisions about risks are taken and implemented…” ([Wikipedia, 2021](https://en.wikipedia.org/wiki/Risk_governance)) and contemplates broadly all types of risk, including financial, environmental, legal and societal concerns. Practically speaking, these are often organizations within institutions whose goal is to catalogue and prioritize risk (both risk to the company and risk the company poses to the wider world), while working with the business to ensure risks are mitigated, monitored and/or managed appropriately; a hypothetical sketch of such a register appears below.

A hand stopping falling dominos
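As a concrete, entirely hypothetical illustration of what such a practice might maintain for ML systems, here is a sketch of a model risk register entry in Python. The fields and values are invented for illustration, not any standard schema.

```python
# Hypothetical sketch of a machine-learning risk register entry, the
# kind of artifact a risk governance team might maintain. The fields
# and values are invented, not a standard schema.
from dataclasses import dataclass

@dataclass
class ModelRisk:
    model: str
    risk: str
    severity: str      # e.g., "low" | "medium" | "high"
    likelihood: str
    mitigation: str
    owner: str
    status: str = "open"

register = [
    ModelRisk(
        model="loan-approval-v3",
        risk="Disparate impact on protected groups",
        severity="high",
        likelihood="medium",
        mitigation="Quarterly fairness audit; disparate-impact report",
        owner="model-risk-team",
    ),
]

for entry in register:
    print(f"[{entry.severity.upper()}] {entry.model}: "
          f"{entry.risk} ({entry.status})")
```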

It stands to reason, then, that this mechanism may also be leveraged to consider and actively manage the risks associated with deploying machine learning systems within an organization, helping to [close the current ML accountability gap](https://dl.acm.org/doi/pdf/10.1145/3351095.3372873). That goal might seem more within reach when we consider that the broader risk management ecosystem (of which risk governance forms a foundational part) also includes standards (government regulations or principles-based compliance frameworks), corporate compliance teams that work directly with the business, and internal and external auditors that verify sound risk management practices for stakeholders as diverse as customers, partners, users, governments and corporate boards.

This also presents an opportunity for legacy risk management service providers, such as [PwC](https://www.pwc.com/gx/en/issues/data-and-analytics/artificial-intelligence/what-is-responsible-ai.html), as well as ML-focused risk management startups like [Monitaur](https://monitaur.ai/) and [Parity](https://www.getparity.ai/), to bring innovation and expertise into institutional risk management practices. As this ecosystem continues to evolve alongside data science, research and public policy, risk governance stands to help operationalize organizational principles and make them real, hopefully leading us into a new era of accountability in machine learning.

A New Development in Autoimmune Disease Research
By Anonymous | October 8, 2021

Autoimmune Diseases at a Glance | Source: www.niehs.nih.gov/health/topics/conditions/autoimmune/index.cfm

Autoimmune diseases are a class of diseases that have perplexed researchers and medical experts for decades. They affect over 15 million people in the United States and are most prevalent among women between 20 and 40 years old [1]. With over 150 diseases and 40 subtypes characterized by a wide range of symptoms, it is often difficult to recognize that a patient has an autoimmune disease or to diagnose a specific one [1]. In November 2020, the Autoimmune Registry, Inc. (ARI) released the first comprehensive list of autoimmune diseases, based on data collected through a voluntary registry where individuals with autoimmune diseases can enter the details of their diagnosis. As someone who was recently diagnosed with an autoimmune disease after years of unexplained symptoms and shoulder shrugs from doctors unfamiliar with the markers of autoimmune disease, I was eager to learn more about this registry and how I could use it for my own education. Beyond that, as a data scientist, I was particularly interested in the methods by which the ARI protects user data and how that data is utilized in research studies.

What is an Autoimmune Disease?

What causes an Autoimmune Disease? | Source: visual.ly/community/Infographics/health/understanding-autoimmune-disease

An immune system that functions normally has the ability to recognize and fight off viruses, bacteria, and other foreign substances that could potentially result in disease or illness. An autoimmune disease is characterized by an overactive immune system that mistakes healthy tissues and cells within the body for harmful substances and creates antibodies to attack them [8]. Autoimmune diseases have a variety of possible causes, the most common of which are: genetics, environmental factors, infectious disease, and lifestyle choices. Individuals with a family history of autoimmune disease are at a greater risk for developing that disease than the general population. Autoimmune diseases can also develop when an individual’s immune system is compromised during an illness as the result of a bacterial or viral infection. Environmental factors such as exposure to harmful chemicals or toxins, lack of exposure to sunlight, and vitamin D deficiency have been linked to the development of autoimmune diseases [8]. Lifestyle choices like smoking, an unhealthy diet, and obesity have been shown to put an individual at a much higher risk of developing an autoimmune disease in their lifetime [8].

There are many different autoimmune diseases, and many of them have overlapping symptoms. The most common symptoms among autoimmune diseases are fatigue, joint pain, weight loss or weight gain, dizziness, and digestive issues [8]. This wide variety of symptoms, which often overlap with other autoimmune diseases or conditions, makes diagnosing an autoimmune disease extremely difficult. Among the most commonly recognized and diagnosed autoimmune diseases are Type 1 Diabetes, Rheumatoid Arthritis, Multiple Sclerosis, Celiac Disease, and Lupus [5]. For most autoimmune diseases there is no cure, and treatment focuses on mitigating symptoms and preventing disease progression.

Privacy Policy for the Autoimmune Registry

Autoimmune Registry Rules of Participation | Source: survey.autoimmuneregistry.org

Before an individual can enroll in the registry on the ARI website, they have to register as a user and agree to the registry’s rules of participation. Shown in the visual above, the rules give the individual details regarding how their data will be stored and used. The registry’s primary goal is to find participants for research studies and clinical trials of treatments for autoimmune diseases such as lupus.

The data entered into the registry is categorized into Personally Identifiable Data (PID) and Non-Identifiable Data (NID). The rules don’t specify exactly what data constitutes PID versus NID, but based on the subsequent information provided, it can be inferred that PID refers to name, mailing address, phone number, email address, and the like. NID is information voluntarily given to the registry regarding the specifics of the user’s autoimmune disease and symptoms. The NID is anonymized and entered into a database that researchers can query to find potential study or clinical trial participants; a sketch of what such a split might look like follows below.
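Based on that inference, the sketch below shows what such a PID/NID split could look like in code: contact details are held separately under an opaque key, while only de-identified disease data lands in the researcher-facing database. The field names and mechanics are my assumptions for illustration, not ARI's actual schema or system.

```python
# Assumed illustration of a PID/NID split; not ARI's actual schema.
import uuid

record = {
    "name": "Jane Doe", "email": "jane@example.com",        # PID
    "diagnosis": "celiac disease", "year_diagnosed": 2019,  # NID
}

PID_FIELDS = {"name", "email"}
participant_key = str(uuid.uuid4())  # opaque key linking the two stores

pid_store = {participant_key:
             {k: v for k, v in record.items() if k in PID_FIELDS}}
research_db = {participant_key:
               {k: v for k, v in record.items() if k not in PID_FIELDS}}

# Researchers query only research_db; any contact with a matching
# participant would go through the registry, which holds pid_store.
print(research_db)
```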

If a user fits the criteria for a research study, the ARI will contact the user on behalf of the researcher with information about the study. The ARI emphasizes that in such an email they “are telling **you** about the **study**” and “**have NOT told the people running the study about you**” [2]. Participation is completely voluntary, and if a user decides they want to be part of the research, they need to contact the researcher directly. It is further stated that the ARI will never give PID to a researcher or organization unless it receives explicit written permission from the user.

That said, the ARI explains that user data is not protected under the Health Insurance Portability and Accountability Act (HIPAA), since the ARI is not a covered entity under HIPAA. The ARI uses HIPAA and other patient privacy laws as the framework for its privacy policy and adheres to those guidelines as much as possible when handling user PID and health-related data. By consenting to the privacy policy of the registry, the user acknowledges that their data may not have the same protections as health data stored in their medical records or other locations.

A user has the ability to modify their PID within the registry, but the privacy policy states that the ARI will retain the old PID in backup files and will not destroy those files even if a user requests that their information be permanently erased. It is not clear why the ARI maintains the old PID on file, but regardless of the reason, they assure users that backup files are protected against unauthorized access. The privacy policy concludes by stating that the ARI bears no responsibility if user data is illegally accessed. This statement gave me pause as a potential user of the registry, since illegal access to my PID and other health data would in many cases result from insufficient data protection or storage, and I would expect an organization to take some measure of responsibility if its system of protection were compromised.

Researcher Access to Registry

For a researcher to access the registry, they must submit a formal request to the ARI. This request includes a research application that requires the researcher to submit professional references and evidence that the research they have previously conducted had a direct impact on patients [4]. The researcher must also complete online training concerning the protection of human subjects and submit a research proposal to the ARI. This proposal will justify the researcher’s need for access to the registry. As a potential user of the registry, I was glad to read that the ARI is selective with who they allow access to the registry. I feel more comfortable knowing that only legitimate researchers interested in studying autoimmune diseases will have the ability to view data within the registry and that any contact I would receive about study participation would be from a reputable source.

User Registration for the Registry

After reading through the rules of participation for the registry, I decided to move forward with the registration process, partly because I was curious to see what questions would be asked (you have to create an account to see them), but also because I want to learn more about my own health and about treatments that are in development or available.

The first section of questions asked for basic demographic information: address, gender, height, and weight. After those questions, a list of autoimmune diseases was displayed, and I was instructed to select which autoimmune disease or diseases I had been diagnosed with. Following my selection, the next set of questions asked when I was diagnosed and whether I would be willing to provide hair, saliva, or blood samples to the registry for research use. This question caught me off guard because there was no mention of the collection or storage of biological data in the privacy policy. Based on the language of the privacy policy, I had assumed that the registry’s only purpose was to compile aggregated data on autoimmune disease demographics and to let researchers recruit participants for studies and trials. There was no mention of the ARI independently collecting biological data for any purpose in the privacy policy, or anywhere else on the website that I could find, so I declined to provide those samples until I can obtain more information on their storage and use.

Finally, the registration asked whether I would be interested in receiving emails from the ARI about developing research on my autoimmune disease and potential opportunities for research participation. I was also asked if I would be interested in sharing my story on social media, another surprising question considering there was no mention of social media use in the privacy policy. I declined to participate in social media since adequate information about what that participation would entail was not provided.

Final Thoughts

Treatments in Development | Source: www.statnews.com/sponsor/2017/03/17/new-innovative-medicines-offer-hope-autoimmune-disease-patients-2/

For many people searching for answers or treatments for their autoimmune disease, the Autoimmune Registry is a blessing. The research studies enabled by the registry, by connecting researchers to participants, have the potential to lead to the discovery of new autoimmune diseases and treatments for existing ones. The information found may also give both patients and medical professionals greater insight into the treatments and prognoses associated with different autoimmune diseases. As a data scientist, I was less than thrilled with the privacy policy provided by the registry. The ARI would benefit from expanding its privacy policy to include information about the collection and storage of biological data, as well as potential social media participation for registered users. But as someone who would personally benefit from these discoveries, I am thankful that a registry like this exists and am willing to participate in future studies to find treatments for myself and others suffering from autoimmune diseases.

References
[1] Autoimmune Registry, Inc. (2020, November 18). The Autoimmune Registry releases first complete list of autoimmune diseases with prevalence statistics, disease subtypes, and disease profiles. _Cision US Inc._ Retrieved October 6, 2021 from www.prnewswire.com/news-releases/the-autoimmune-registry-releases-first-complete-list-of-autoimmune-diseases-with-prevalence-statistics-disease-subtypes-and-disease-profiles-301176322.html.

[2] Autoimmune Registry, Inc. (n.d.). Patient privacy. _The Autoimmune Registry._ Retrieved October 7, 2021, from www.autoimmuneregistry.org/new-page-2.

[3] Autoimmune Registry, Inc. (n.d.). Participant Registration and Login. _The Autoimmune Registry._ Retrieved October 7, 2021, from survey.autoimmuneregistry.org.

[4] Autoimmune Registry, Inc. (n.d.). Become a Researcher. _The Autoimmune Registry._ Retrieved October 7, 2021, from www.autoimmuneregistry.org/for-researchers

[5] Diabetes Digital Media, Ltd. (n.d.). Autoimmune disease refers to illness or disorder that occurs when healthy tissue (cells) get destroyed by the body’s own immune system. _Diabetes._ Retrieved October 8, 2021, from www.diabetes.co.uk/autoimmune-diseases.html.

[6] NewLifeOutlook. (2021, April 15). Understanding autoimmune disease. _Visual.ly._ Retrieved October 8, 2021, from visual.ly/community/Infographics/health/understanding-autoimmune-disease.

[7] PhRMA. (2017, March 17). New and innovative medicines offer hope to autoimmune disease patients. _STAT._ Retrieved October 8, 2021, from www.statnews.com/sponsor/2017/03/17/new-innovative-medicines-offer-hope-autoimmune-disease-patients-2/.

[8] Shomon, M. (2021, August 2). What are autoimmune diseases? _Verywell Health._ Retrieved October 8, 2021, from www.verywellhealth.com/autoimmune-diseases-overview-3232654.

[9] U.S. Department of Health and Human Services. (2021, July 12). Autoimmune diseases. _National Institute of Environmental Health Sciences._ Retrieved October 8, 2021, from www.niehs.nih.gov/health/topics/conditions/autoimmune/index.cfm.

The Facebook Whistleblower and the Moral Dilemma
By Anonymous | October 8, 2021

In a span of three days, Facebook broke the internet, literally and figuratively. On October 4, 2021, three of the most popular platforms globally – Facebook, Instagram, and WhatsApp – were offline for multiple hours. A 60 Minutes episode had aired the night before, interviewing data scientist and former Facebook product manager Frances Haugen. She claimed that Facebook and its products harm children and stoke division with hate, violence, and misinformation. She then testified before Congress on October 5, 2021, mainly blaming Facebook’s algorithm and platform design for these issues.

Who is Frances Haugen and What Exactly Does She Claim?

Now famously known as the “Facebook Whistleblower,” Frances Haugen was hired in 2019 as a product manager on the Civic Integrity team, a team created to tackle misinformation and hate speech. When this team was dissolved a month after the 2020 U.S. election, she started to see the company’s harmful effects and broken moral compass. Before leaving Facebook in May 2021, she retrieved thousands of internal research documents that she used to support her claims in the 60 Minutes interview and her congressional testimony. With this evidence, Haugen claimed that the platform’s algorithms and engagement-based ranking system harm societies worldwide, and that leadership knew about this but did not act on it. In addition, she provided research indicating that the algorithms have an outsized impact on children and teens. For example, children could start looking for healthy recipes on Instagram and end up on pro-anorexia content, leaving them more depressed. Other research suggests that Facebook’s algorithms have pushed European countries toward more extreme policymaking and fueled ethnic violence around the world, as when Myanmar’s military used the platform to launch a genocide campaign.

How do Facebook’s Algorithms and Ranking System Work?

Facebook’s machine learning algorithms and engagement-based ranking system aim to personalize content using signals such as clicks on advertisements and likes and shares of posts. The algorithms take these data points and predict what other posts and advertisements users might be interested in. But when platforms blend content personalization and algorithmic amplification, “they create uncontrollable, attention-sucking beasts,” perpetuating biases and affecting societies in ways barely understood by their creators. In this specific case, Facebook’s leadership knew of the harmful effects and, for financial gain, did not act to make the platform safer. The algorithm rewards posts that evoke the most extreme emotions (often anger, rage, or fear) because it is designed to keep users on the platform for as long as possible, no matter how it makes them feel or what it makes them think. The longer users stay on the platform, the more likely they are to click on ads, generating more revenue for the company. A toy version of such a ranking follows below.
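The toy sketch below makes the incentive visible. The posts, predicted probabilities, and weights are all invented; the point is that whatever content maximizes predicted clicks and shares rises to the top of the feed, regardless of how it makes users feel.

```python
# Toy sketch of engagement-based ranking with invented numbers.
posts = [
    {"text": "Local bake sale raises funds", "p_click": 0.02, "p_share": 0.01},
    {"text": "Outrageous claim about rival group", "p_click": 0.09, "p_share": 0.06},
    {"text": "Friend's vacation photos", "p_click": 0.04, "p_share": 0.01},
]

def engagement_score(post, w_click=1.0, w_share=3.0):
    # Hypothetical weights: shares keep users engaged longer than clicks.
    return w_click * post["p_click"] + w_share * post["p_share"]

# The feed simply sorts by predicted engagement -- the emotionally
# charged post wins, whatever it does to the reader.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.3f}  {post['text']}")
```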

What Protects Facebook from Legal Action?

Section 230 of the U.S. Communications Decency Act, passed in 1996, shields online platforms from liability for third-party content shared on them. Haugen’s proposal to Congress is to reform Section 230 around algorithmic ranking, so that online platforms like Facebook can be held responsible for their decisions and actions around personalized algorithmic amplification.

The Moral Dilemma: What Would You Do?

A moral dilemma is a “conflict situation in which the choice one makes causes moral harm, which cannot be easily repaired if at all.” There is a long history of companies choosing profit over safety (I tried to narrow down some significant examples, but there are too many to list). As more and more companies use data science for decision-making, data scientists often get caught in a moral dilemma: doing what leadership tells them while possibly knowing the harm those decisions will cause. It is hard to predict what someone would do in Haugen’s position. We generally aim to do the right thing, but the question becomes far more complicated when it means jeopardizing the source of income that provides for you and your family.

We as data scientists may face dilemmas like the one Frances Haugen faced, where the work we do and the companies we work for might not have the best moral compass. Facebook has approximately 2.9 billion monthly active users – 60 percent of all internet-connected people on the planet – and Haugen spoke up about the unethical practices taking place on the platform. Not many people could have done what she did, and I applaud her for standing up to one of the largest companies and platforms in the world.

References:

www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/

time.com/6104157/facebook-testimony-teens-algorithm/

embassy.science/wiki/Theme:17d406f9-0b0f-4325-aa2d-2fe186d5ff34

www.nytimes.com/2021/10/06/opinion/facebook-whistleblower-section-230.html

It is Time to Revisit HIV Public Health Practices

It is Time to Revisit HIV Public Health Practices
By Jackie Nichols | October 7, 2021

For over 15 years, I’ve been working with organizations whose mission is to end AIDS. My passion stems from growing up in the 1980s, when the deadly disease was at its most prominent and fear gripped us all. Stories I had been reading about or seeing on the news became a part of my life. Several of my friends contracted AIDS and sadly lost their lives to the disease. As we learned more about the disease, fear subsided, and I fell into the trap of naively believing that AIDS was a thing of the past. It wasn’t until years later, when a friend asked for a donation for the 2006 NY AIDS Walk, that I realized my naivety. I had managed to push the uncomfortable topic of HIV/AIDS out of my thoughts since the disease was no longer prominent in my day-to-day life. I was also shocked that AIDS hadn’t been defeated yet and that there was so much more work to do, work centered on public health practices.

Public Health

The CDC estimates there are as many as 1.2 million Americans infected with HIV, with approximately thirteen percent of them unaware (U.S. Statistics, 2021). In Los Angeles alone, an estimated quarter of all people diagnosed with AIDS during 1990–1995 only became aware of their infection when they exhibited advanced symptoms and received care at a hospital or clinic (Burr, 1997). That means they had most likely been HIV-positive for years, perhaps spreading the disease unknowingly. While it’s true that an HIV-positive patient will require a lifetime of costly treatment, estimated at $4,500 each month per patient (How Much Does HIV Treatment Cost?, 2020), the CDC believes that a notification program pays for itself if even one notification out of eighty prevents a new HIV infection (Burr, 1997). With HIV being treatable, it seems we should be doing more to test for the disease and notify others as early as possible. Imagine the outrage if the US stopped routine screening for breast, ovarian, or colon cancer and only treated patients admitted to a hospital with advanced symptoms.
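
To see why that one-in-eighty figure works out, here is a rough break-even sketch. The $4,500 monthly cost comes from the article cited above; the treatment duration is an assumption I have added purely for illustration, not a CDC number.

```python
# Rough break-even sketch for HIV partner notification. The monthly cost
# is cited in the text; the 25-year duration is an illustrative assumption.
monthly_treatment_cost = 4_500                 # dollars per patient per month
assumed_years_of_treatment = 25                # assumption, for illustration only
lifetime_cost = monthly_treatment_cost * 12 * assumed_years_of_treatment

notifications_per_prevented_infection = 80     # the CDC's one-in-eighty figure

# The program pays for itself as long as each notification costs less than
# the averted lifetime treatment cost spread across those 80 notifications.
break_even_cost_per_notification = lifetime_cost / notifications_per_prevented_infection

print(f"Averted lifetime treatment cost: ${lifetime_cost:,}")                          # $1,350,000
print(f"Break-even cost per notification: ${break_even_cost_per_notification:,.0f}")   # $16,875
```

Under those assumptions, notification remains cost-effective even at thousands of dollars per contact traced, which is why early detection is such a bargain for public health.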

Contact Tracing

If you didn’t know what contact tracing was before the pandemic, there’s a good chance you know now. Most of us have heard public officials say that we must do our part to “flatten the curve” of the COVID-19 pandemic. Flattening the curve refers to keeping the number of reported COVID-19 cases as low as possible so that the load on our health care system stays manageable. Contact tracing is one process that can help flatten the curve: it identifies persons who may have come in contact with an infected person (“contacts”) and then collects further information about those contacts.

Figure 1 COVID-19 Contact Tracing

The goal of contact tracing is to reduce infections in the population by tracing the contacts of infected individuals, testing those contacts for infection, isolating or treating the infected, and then tracing their contacts in turn. Contact tracing is just one measure against an infectious disease outbreak and often requires several steps be taken in conjunction: routine testing, in most cases without explicit patient consent; reporting the names of those who test positive to local health authorities; and notifying those who were in contact with the infected person that they may have been exposed. Those being notified should receive only the information they really need, to maintain the privacy and anonymity of the infected individual.
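
One way to picture that trace-test-isolate loop is as a breadth-first walk over a contact graph. The sketch below is a toy model; the contact graph, test results, and notification wording are all hypothetical, not any health agency’s actual system.

```python
from collections import deque

# Toy contact graph: who has recently been in contact with whom (hypothetical).
contacts = {
    "ana":  ["ben", "cruz"],
    "ben":  ["ana", "dev"],
    "cruz": ["ana"],
    "dev":  ["ben", "eli"],
    "eli":  ["dev"],
}

def notify(contact: str) -> None:
    # Privacy: the contact learns only that they may have been exposed,
    # never who exposed them.
    print(f"{contact}: you may have been exposed; please get tested.")

def trace(index_case: str, test) -> set:
    """Breadth-first contact tracing: notify and test each contact of a
    known case; every newly found case has its own contacts traced in turn."""
    infected, seen = {index_case}, {index_case}
    queue = deque([index_case])
    while queue:
        person = queue.popleft()
        for contact in contacts.get(person, []):
            if contact in seen:
                continue
            seen.add(contact)
            notify(contact)
            if test(contact):
                infected.add(contact)
                queue.append(contact)  # trace this new case's contacts too
    return infected

# Example run with hypothetical test results.
results = {"ben": True, "cruz": False, "dev": True, "eli": False}
print(trace("ana", results.get))  # {'ana', 'ben', 'dev'}
```

The point is the loop itself: each positive test re-seeds the search, which is exactly why withholding positive results from health authorities stops the process cold.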

Contact tracing isn’t new. In fact, it has been used for centuries and has been one practice in the fight against infectious disease outbreaks including tuberculosis, diphtheria, typhoid, and now COVID-19. It is striking, then, that HIV, an epidemic responsible for more than an estimated 700,000 deaths in the US since 1981 (Cichocki, 2020), isn’t routinely tested for, and when it is, the test requires explicit patient consent. The names of those who do test positive for HIV are not reported to local health authorities, making contact tracing impossible.

Why Do We Treat HIV so differently?

To understand why testing, contact tracing, and notification for HIV/AIDS are handled so differently from other infectious diseases, we need to look back at the 1980s and the four very common beliefs that drove ignorance and fear (Burr, 1997):

  1. The disease was first called Gay-Related Immune Deficiency (GRID) by researchers, leading the public to believe that AIDS was limited to homosexuals and was in fact a marker of homosexuality.
  2. The stigma associated with AIDS and how the disease is transmitted would make it impossible to maintain any level of testing confidentiality.
  3. There was a limited understanding of how the disease spread, with sexual transmission believed to be the only method, and sex is a taboo subject in some cultures. Given the stigma attached to those who contracted AIDS, it was felt that contact tracing would be ineffectual due to the large number of sexual partners of those infected.
  4. AIDS was so different and so limited in who it affected, with no cure or treatment, that it was believed pointless to report HIV infection as is done for other infections. There was an early belief that the disease would simply “run its course.”

The damage done by these four beliefs was significant and is still felt to this day. While many in the US have come to understand that the disease is not a homosexual marker, testing and contact tracing, along with privacy and stigma, remain challenges.

Testing for HIV

Being admitted to any hospital or ER in the US today typically involves blood work that is tested for various diseases (e.g., tuberculosis), as well as a COVID-19 test. The blood work and COVID-19 tests occur without explicit patient consent and are a generally accepted societal norm; that is, people do not question that they will be tested when admitted to a hospital. Similarly, a lot of work has gone into reaching similar norms around yearly wellness checks for women to screen for breast and ovarian cancer, and around routine colon cancer screening in the general population. This is what Nissenbaum calls contextual integrity: privacy holds when context-relative informational norms are respected, and it is violated when they are breached (Nissenbaum, 2008).

Unlike with other infectious diseases, testing for HIV without explicit patient consent is currently prohibited in every state. Blood banks are the exception: they test and screen for HIV, but they do not perform notifications should a sample be found to be infected. Blood banks maintain privacy by following existing legislation and social norms, but in doing so they potentially allow the spread of the disease to continue. Interestingly, AIDS infections must be reported in all fifty states, but HIV does not carry the same requirement, once again skirting proven methods for tracing infectious diseases. In the states that do report HIV-positive results, all personal information is removed from the test results before they are sent to the Centers for Disease Control and Prevention (CDC), which monitors what is happening with the HIV epidemic.

Partner Notification

The CDC has various outreach efforts that it refers to as “partner notification,” which include contact tracing. The process involves patients volunteering for a test and public health officials locating and notifying the partners of infected people about possible infection. It hinges on people being willing and able to share the names of their partners. Privacy is maintained by keeping the infected patient’s name confidential, although that does not always guarantee anonymity. In most states there is no legal obligation to disclose your HIV-positive status to your current or past partners (Privacy and Disclosure of HIV Status, 2018).

Figure 2 HIV Partner Notification

The goal of HIV partner notification is to minimize the spread of the disease and to treat it as early as possible, before it reaches the late stage of AIDS. AIDS is stage 4 of the infection and typically occurs when the virus is left untreated. The earlier the virus is detected, the more manageable the disease is for the individual through antiretroviral drugs. Partner notification requires two things to be successful: people must be willing to be tested and to share the results, and people must be able to act on the information. For this to happen at the scale needed to combat HIV, the privacy of infected individuals must be protected from unnecessary disclosure. With the data breaches that plague our digital world, privacy is never guaranteed, making HIV contact tracing more challenging than that for other infectious diseases.
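
Protecting against unnecessary disclosure is, in practice, a data minimization problem: the partner should receive only what they need to act. Here is a minimal sketch of that idea; the `CaseReport` record and the message wording are hypothetical constructions of mine, not any CDC data format.

```python
from dataclasses import dataclass

# Hypothetical record held by the health department and never shared as-is.
@dataclass
class CaseReport:
    patient_name: str        # confidential, stays with the health department
    disease: str
    exposure_window: str
    partners: list

def partner_notification(report: CaseReport) -> dict:
    """Build the minimal message a partner receives: the disease and the
    exposure window only. The index patient's identity is deliberately omitted."""
    return {
        "disease": report.disease,
        "exposure_window": report.exposure_window,
        "message": "You may have been exposed; free, confidential testing is available.",
    }

case = CaseReport("<confidential>", "HIV", "March-May 2021", ["partner_a", "partner_b"])
for partner in case.partners:
    print(partner, "->", partner_notification(case))
```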

The Impact of Stigma on HIV Public Health Policies

Stigma is one of the biggest blockers, if not the biggest, to mandating HIV and AIDS testing and reporting and to advancing public health policies around HIV. Whenever AIDS has won, stigma, shame, distrust, discrimination, and apathy were on its side (HIV Stigma and Discrimination, 2019). HIV stigma refers to irrational or negative attitudes, behaviors, and judgments towards people living with or at risk of HIV (Standing Up to Stigma, 2020). People identified as HIV-positive risk being stigmatized and discriminated against. Some of the beliefs that originated in the 1980s still exist, and in some areas of the US even thrive, e.g., the belief that HIV/AIDS is a homosexual marker. While we have made progress in gay rights and equality, being identified as homosexual can still lead to harassment, discrimination, abuse, and violence.

People are aware of the stigma and the risk associated with being publicly identified as HIV-positive, and as a result many avoid accepted public health practices like testing and reporting when it comes to HIV. In the 1980s and 1990s, the driving force behind many of the legal cases protecting patient privacy was the stigma associated with AIDS and the risk of being publicly identified as HIV-positive. While Section 504 of the Rehabilitation Act of 1973 and the Americans with Disabilities Act of 1990 (ADA) were updated to protect the civil and workplace rights of people living with HIV and AIDS (Civil Rights, 2017), many people still feel this is not enough given the power of societal judgment and the ways harassment and abuse can manifest themselves, causing harm to the infected individual. As a result, we are left with HIV being exempt from consent-free testing and reporting, with only AIDS reportable in all fifty states, and with a disease that is allowed to go undetected and untraced, potentially harming others unknowingly.

Figure 3 How Stigma Leads to Sickness

Closing Thoughts

Forty years later, it’s clear that stigma plays a significant role in HIV public health practices. The broader question of how you can control a disease if you decline to find out who is infected (Shilts, 1987) sits at the core of the battle to end AIDS. We must first defeat HIV stigma; only then can we implement the better public health practices needed to end AIDS.

References

1980s HIV/AIDS Timeline (2017). Retrieved from www.apa.org/pi/aids/youth/eighties-timeline on September 26, 2021

Burr, C. (1997). The AIDS Exception: Privacy vs. Public Health. _The Atlantic._ Retrieved September 10, 2021

Cichocki, M. (2020). How Many People Have Died of HIV? Retrieved from www.verywellhealth.com/how-many-people-have-died-of-aids-48721 on September 25, 2021

Civil Rights (2017). Laws Protect People Living with HIV and AIDS. Retrieved from www.hiv.gov/hiv-basics/living-well-with-hiv/your-legal-rights/civil-rights on September 30, 2021

HIV Stigma and Discrimination (2019). Retrieved from www.avert.org/professionals/hiv-social-issues/stigma-discrimination on October 1, 2021

How Much Does HIV Treatment Cost? (2020). Retrieved from www.webmd.com/hiv-aids/hiv-treatment-cost on September 29, 2021

Privacy and Disclosure of HIV Status (2018). Retrieved from www.justia.com/lgbtq/hiv/privacy-disclosure on September 30, 2021

Standing Up to Stigma (2020). Retrieved from www.hiv.gov/hiv-basics/overview/making-a-difference/standing-up-to-stigma on September 28, 2021

U.S. Statistics (2021). Retrieved from www.hiv.gov/hiv-basics/overview/data-and-trends/statistics on September 20, 2021