Disparities faced by people with sickle cell disease: The case for additional research funding.

By Anonymous | September 22, 2021

September is Sickle Cell Awareness Month! How aware are you?
Sickle cell disease (SCD) is a group of inherited red blood cell disorders that lead to sickle-shaped red blood cells. These cells become hard and sticky, carry less oxygen, and have a short life span in the body. As a result, people with SCD suffer many lifelong negative health outcomes and disabilities, including stroke (even in children), acute and chronic pain crises, mental health issues, and organ failure.

Sickle cell disease clinical complications. Source: Kato, G., Piel, F., Reid, C. et al. Sickle cell disease. Nat Rev Dis Primers 4, 18010 (2018). https://doi.org/10.1038/nrdp.2018.10

The issues unfortunately do not end there.

In the U.S., 90% of people living with SCD are Black/African-American, and they suffer significant health disparities and stigmatization. SCD patients frequently describe poor interactions with the healthcare system and its personnel. SCD pain crises (which can be more intense than childbirth) require strong pain relievers such as opioids. Unfortunately, few medical providers are trained in managing SCD pain crises or other chronic complications of the disease. There is a serious lack of respect for persons: people coming in with SCD pain crises are often treated as drug seekers while being denied the care they need. Black people have also expressed substantial mistrust of the U.S. healthcare system, given past offenses in recent history (e.g., the lack of consent given to Black participants in the Tuskegee Syphilis Study).

In fact, current estimates place the median life expectancy of people with SCD at just 42–47 years. And even though SCD is a rare disease in the U.S. (approximately 90,000–100,000 people), over $811.4 million in healthcare costs can be attributed to the 95,600 hospital stays and readmissions for sickle cell disease recorded in 2016.

Sickle cell disease-related hospital readmission is high in the U.S. Source: AHRQ News Now, September 10, 2019, Issue #679

Why is this happening??
There is not enough research or education surrounding SCD. Sickle cell advocacy groups and networks exist, but they are underfunded and attract less public interest than other health issues do. For example, Farooq et al. (2020) compared the support that goes into SCD with that for cystic fibrosis (approximately 30,000 people in the U.S.). They found that U.S. government-funded research averaged $812 per person affected for SCD (which predominantly affects Black people), compared to $2,807 per person affected for cystic fibrosis (which predominantly affects white Americans). What about funding from private foundations? On average, $102 per person for SCD and $7,690 per person for cystic fibrosis.
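To put the gap in perspective, here is a quick back-of-the-envelope calculation using only the per-person figures and approximate population counts quoted above. The implied totals are illustrative estimates, not numbers taken from the study itself:

```python
# Per-person funding gap between SCD and cystic fibrosis (CF), recomputed
# from the figures quoted above (Farooq et al., 2020). Population sizes
# are the approximate U.S. counts given in the text.
populations = {"SCD": 100_000, "CF": 30_000}
federal_per_person = {"SCD": 812, "CF": 2807}      # USD per person affected
foundation_per_person = {"SCD": 102, "CF": 7690}   # USD per person affected

for disease in ("SCD", "CF"):
    total_federal = federal_per_person[disease] * populations[disease]
    total_foundation = foundation_per_person[disease] * populations[disease]
    print(f"{disease}: ~${total_federal / 1e6:.0f}M federal, "
          f"~${total_foundation / 1e6:.1f}M foundation (implied totals)")

# The per-person ratios make the disparity stark:
print(f"Federal gap: {2807 / 812:.1f}x per person; "
      f"foundation gap: {7690 / 102:.1f}x per person")
```

Even though SCD affects roughly three times as many Americans, each affected person attracts about 3.5 times less federal research money and about 75 times less foundation money.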

The result?
There is a lack of research and data surrounding SCD: potential new treatments, factors that affect outcomes, and more. To improve quality of care, we need to understand the complex intersectionality at play with SCD. How do available treatments vary by SCD genotype? By sexual/gender minority group? By region? Without such information, there is substantial uncertainty that no data model can make up for, and people with SCD are simply left out of the decisions being made. Without additional funding, the lack of training in SCD will persist, with the hidden biases of health care workers continuing to harm the patient-provider experience. Even among well-educated researchers, there is a notion that SCD has been “cured” and is no longer a priority. In reality, very few people meet the criteria for gene therapy (an exact match, a specific SCD genotype, etc.); it is also extremely expensive and a risky procedure that carries a risk of death.

Will more funding for SCD research and advocacy really help? Consider this: over the same period, even with over 70% more research publications for SCD than for cystic fibrosis, SCD had one FDA-approved drug between 2008 and 2018 (to serve a population of roughly 90,000) while cystic fibrosis had four (to serve a population of roughly 30,000).

How can we do better?
Increased advocacy for SCD as a priority, and overall awareness among lawmakers, researchers, and the general population, is important. Given the huge disparity in private funding between SCD and cystic fibrosis, Farooq et al. suggest that increased philanthropy through private foundations may be one way to improve SCD advocacy and research. Consider donating to established SCD foundations, or educate your public officials on what SCD is and why it matters. Share materials from World Sickle Cell Day or Sickle Cell Awareness Month (September). Support a summer camp for kids with SCD, where they can have fun and be empowered while living with the disease. Everything counts. Together, we can reduce disparities and improve health outcomes for people with SCD.

Swiping Right but for Who?: When dating meets data

By Elaine Chang | September 22, 2021

Online dating is a mainstay of modern culture. A study from Stanford found that around 40% of American couples initially met online, making it the most common medium of matchmaking. The act of swiping has even given rise to its own idioms in the English language: “swipe left” signals declining a potential match, “swipe right” signals accepting one. And there are countless online dating apps to choose from, depending on who you ask and what experience you are looking for.

There is a striking shift in the matchmaking medium as American couples increasingly meet via online dating apps and websites. We should note that 1) many changes from 1995 to 2017 enabled online dating that was not previously possible, and 2) this study captures only a slice of what dating looks like (American, heterosexual, technologically able).

The pandemic has only increased the use of these handheld matchmaking services: Tinder recorded its highest number of swipes on a single day, 3 billion, in March 2020, while OkCupid saw a 700% increase in dates from March to May 2020. Individuals typically assume that these interactions, whether direct or indirect, are private. It’s a digital space for individuals to find potential matches and get to know each other beyond the standard profile fields.

Online dating apps often ask personal questions as part of a user’s profile and offer features such as in-app chat

And while this abundance of options and opportunity has undoubtedly stirred love in the digital airwaves, recent reports highlight a growing concern about what else is being aired out in the name of love. A New York Times investigation found some disturbing details: when its researchers tested the Grindr Android app, the app shared specific latitude and longitude coordinates with five companies. They also found that the popular app OkCupid sent a user’s ethnicity and answers to profile questions, such as whether they had used psychedelic drugs, to a firm that helps companies tailor marketing messages to users. A Nightly News segment highlighted similarly disturbing practices on the dating apps Hinge and Bumble. Hinge sent information about all its users to Facebook, whether they logged in via Facebook or not. Bumble sent encrypted information to undisclosed outside companies. All of these companies cited their privacy policies when reporters followed up.

Apps, such as dating apps, are sending information to different companies. Some locations are disclosed while others are not.

These revelations show a few worrying trends in how users and their data are treated. First, users are profiled on their own devices without explicit and informed consent: businesses quietly build profiles of individuals from their activity, target them with ads, and attempt to influence their behavior, all on a user’s personal device and all largely unbeknownst to the user. Second, user data is handled with a lack of privacy and a lack of respect. A recent study by the Norwegian Consumer Council suggested that some companies treat personal and intimate information, such as gender preference, the same as innocuous data points such as favorite food. Finally, and more broadly, as these dating apps increasingly shape love and relationships as we know them, their profile options and app interactions increasingly define culture, and with that comes a responsibility to do so thoughtfully (e.g., gender pronouns, features in the time of corona).

Ultimately, consumers trust their devices and, by extension, their apps (dating apps included). Companies may need to be compelled to change through updated rules and regulation. Until then, it may behoove us all to swipe left more often on the next potential app download.


China Passed Several Laws to Strengthen Data Protection

By Anonymous | September 19, 2021

As one of the largest economies in the world, China produces and consumes a significant amount of data every day; however, a well-constructed infrastructure for data regulation has long been missing. In the past three months, China passed several data-related laws. On June 10, the Data Security Law (DSL) was passed; it requires all companies in China to classify their data into several categories and governs the storage and transfer of data. Two months later, on August 20, the Personal Information Protection Law was passed; the new regulation governs the storing, transferring, and processing of personal information, especially the unlawful acquisition and abuse of personal data.


A Signal

The passing of these data laws not only sets the framework for data regulation in China but also signals that the Chinese government will spend time and effort reforming improper data practices. Representatives from Chinese internet companies, including Alibaba, Tencent, and ByteDance, were summoned to a meeting on improving data security and were required to perform risk assessments on their own products. Didi, China’s leading ride-hailing platform, was forced to stop new user registrations during the investigation of its improper “cross-border transfer of sensitive data.” These actions show that cybersecurity remains a focus for Beijing, especially preventing sensitive data from going abroad and preventing internet firms from abusing their users’ personal information.

After Beijing Takes ByteDance Board Seat, Tencent and Alibaba May Be Next — The Information

Next steps: transparent and standardized

In the next stage, as the government establishes more restrictions on data and users pay more attention to their own privacy, how should Chinese companies react? First, an increased level of transparency is required. In the past, Chinese government agencies did little to regulate data usage, so many companies’ data processes and algorithms were effectively “black boxes.” Users, on the other hand, are realizing their rights over their own data and want to know more about how their data are being used.

Moreover, these internet service providers need to set up standardized procedures for their data acquisition, processing, storage, and transfer practices. On one hand, the government needs more standardized rules to protect sensitive data from being revealed. On the other, users need more standardized rules to protect their personal information from unauthorized misuse, which might jeopardize their rights.


In this era of data, data protection is a top priority for both governments and individuals. Data-related regulations are relatively well established in developed economies like Europe and the United States, while well-established data protection guidelines are lacking in many other countries. The actions of the Chinese government not only show its concern with data security but also signal that governments and people around the world, especially in developing countries, have realized the importance of data and the need for regulations to protect data and personal information. With this increasing focus on data protection, data practices around the world should become more standardized, and people’s data rights should be respected more and more.





Time to Embrace Home Automation or Worry About Privacy?

By Anonymous | September 19, 2021

“Good night, Alexa,” you say as you head to bed. “Good night,” Alexa whispers back. When users first discovered this feature, many were both thrilled and unsettled: home assistant devices seem to understand and behave more like humans than expected. A growing number of smart home assistants have entered the market in the past few years, and user numbers are increasing rapidly, fueling a prosperous home automation industry. As more people rely on the convenience that home assistants such as Amazon Echo and Google Home bring to their lives, more concerns are being raised about users’ personal data and privacy.

Do home assistants always listen to us?

When home automation first came to public attention, simple commands were expected and executed: answering questions by retrieving information online, turning lights on and off, setting alarms, playing a specific song or a genre from a description, and so on. Then more devices could be connected, not limited to mobile devices, cameras, TVs, air conditioners, and electric cars, as long as they are manufactured to “work with” smart home assistants.

How smart home automation works

Over years of development, these home assistants, fueled by artificial intelligence, can now observe and collect data in real time for a specific user and make sense of a command in its given context. While some users report that such devices are not smart enough in certain scenarios, causing false alarms and silly responses, others are surprised by what the devices are capable of and worry about their lives being controlled by these smart home assistants.

“They listen to our conversations all the time,” people protested publicly the year Alexa was introduced. Such concerns are rational and reasonable, and it is understandable for the public to worry about the privacy and security of their personal information.

Smart home assistants can respond to voice commands either by recording audio continuously or by waiting for a signature “wake word” to be activated. Either way, people using these devices expose some personal data to the home assistants, and how much depends on the ethics of the companies behind them. Beyond linked accounts and credentials, consider voice itself as personally identifying information: these devices can recognize the owner’s voice and decide whether or not to respond to a request. It is already questionable whether anonymity still exists on the Internet.
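The wake-word design mentioned above can be sketched as a toy loop, with text standing in for audio. This is purely illustrative: real assistants use on-device acoustic models rather than string matching, and their buffering behavior varies by vendor.

```python
# Toy model of a wake-word gate: audio is kept in a short local rolling
# buffer, and nothing is forwarded until the trigger phrase is heard.
from collections import deque

WAKE_WORD = "alexa"
BUFFER_SECONDS = 3  # only a short rolling window is kept on the device

def assistant_loop(audio_chunks, buffer_len=BUFFER_SECONDS):
    rolling = deque(maxlen=buffer_len)  # old chunks fall off; none are uploaded
    uploaded = []
    awake = False
    for chunk in audio_chunks:
        rolling.append(chunk)
        if not awake and WAKE_WORD in chunk.lower():
            awake = True              # only now does streaming to the cloud begin
        elif awake:
            uploaded.append(chunk)    # everything after the wake word is sent
    return uploaded

stream = ["dinner talk", "more dinner talk", "Alexa", "turn off the lights"]
print(assistant_loop(stream))  # ['turn off the lights']
```

The privacy question is exactly the gap between this ideal and practice: everything hinges on whether the device truly discards the rolling buffer and only uploads after the wake word.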

There is no promise in these companies’ privacy policies and terms that the data collected are limited to authorized information or information gathered with consent. It is also vague how they may use personal data: the consent form says the data will “help improve our performance and functionality” without a word on what specifically will be done with the data collected.

Artificial intelligence brings undeniable convenience to human lives, and the public, or users, are crucial in providing the data fed to the algorithms that make improvement and progress possible. However, gaining solid support from the public is fundamental to further progress for these smart home assistant devices. To achieve the best outcomes for both users and companies, more effort should be put into transparency about data collection and into addressing privacy concerns. And it is always your choice whether to embrace the home automation revolution or to say no!

Amazon Privacy Notice: https://www.amazon.com/gp/help/customer/display.html?nodeId=468496&tag=thewire06-20&tag=thewire06-20

South Korea’s PIPC Fines Tech Giants

By Anonymous | September 19, 2021

It is well known that South Korea is one of the most technologically advanced and digitally connected countries in the world, with the highest average internet connection speed. With internet infrastructure treated as a high priority across numerous government regulations comes a need for strong data privacy, yet only recently has a central administrative agency been established to govern data-related policies. Under the newly amended Personal Information Protection Act (PIPA), the Personal Information Protection Commission (PIPC) has been South Korea’s data protection authority since August 5, 2020. Since then, following a major privacy audit conducted in 2020, the PIPC has issued various fines and corrective actions against tech giants such as Facebook, Google, and Netflix.

Of these, Facebook received the largest fine, for privacy violations related to the collection of facial recognition data without users’ consent. Personal images and social security numbers of over 200,000 platform users were collected for facial recognition templates, while Facebook failed to obtain consent, notify users, and submit required information when requested by the PIPC. In the context of Respect for Persons, a basic ethical principle outlined in The Belmont Report, Facebook neglected to give users an opportunity to opt out of the facial recognition data collection through proper consent, and further failed to provide sufficient information to users when migrating personal data to third parties or overseas. Before collecting and transferring personal data, consent should have been secured across three domains: information, comprehension, and voluntariness. Facebook has thus been ordered to destroy all personal data associated with its facial recognition initiatives.

As for Google and Netflix, smaller penalties were issued: Netflix was fined for failing to gain proper consent for collecting and transferring users’ personal data, while Google was found not to have violated the PIPA but still received legal recommendations on clarifying and improving its personal data collection processes. Microsoft, meanwhile, received a fine insignificant compared to Facebook’s, for privacy violations related to leaked e-mail addresses, 144 of which belonged to South Korean citizens; the company took 11 days to publish a notification in Korean, which the PIPA requires within 24 hours. As in Facebook’s case, the Belmont principle of Justice seems to have been violated, as the benefits and risks to these organizations’ users were not appropriately assessed. It is also worth mentioning that users who are already marginalized in their everyday lives outside these platforms are further burdened by the unprecedented market concentration built on the personal data collected from them. The obligations of Beneficence, moreover, affect both an organization’s investigators and society at large, because they extend both to particular projects and to the entire enterprise of research. When using personal user data, organizations must recognize the longer-term benefits and risks that may result from data collection processes and results.

The question of whether tech giants can keep pace with the PIPA looms, as the PIPC has stated that its investigations into privacy violations will continue. Even though certain companies faced only small fines, there is also the question of a company’s credibility when millions to billions of users provide personal data on a continual basis. With stronger data regulations coming into play, I hope to see many organizations taking a stronger stance in ensuring the autonomy, well-being, and equitable treatment of their platform users.


The Unequal American Dream: Hidden Bias in Mortgage Lending AI/ML Algorithms

By Autumn Rains | September 17, 2021

Owning a home in the United States is a cornerstone of the American Dream. Despite the economic downturn from the Covid-19 pandemic, the U.S. housing market saw double-digit growth rates in home pricing and equity appreciation in 2021. According to the Federal Housing Finance Agency, U.S. house prices grew 17.4 percent in the second quarter of 2021 versus 2020 and increased 4.9 percent from the first quarter of 2021 (U.S. House Price Index Report, 2021). Given these figures, obtaining a mortgage loan has further become vital to the home buying process for potential homeowners. With advancements in Machine Learning within financial markets, mortgage lenders have opted to introduce digital products to speed up the mortgage lending process and serve a broader, growing customer base.

Unfortunately, the ability to obtain a mortgage from lenders is not equal for all potential homeowners due to bias within the algorithms of these digital products. According to the Consumer Financial Protection Bureau (Mortgage refinance loans, 2021):

“Initial observations about the nation’s mortgage market in 2020 are welcome news, with improvements in the overall volume of home-purchase and refinance loans compared to 2019,” said CFPB Acting Director Dave Uejio. “Unfortunately, Black and Hispanic borrowers continued to have fewer loans, be more likely to be denied than non-Hispanic White and Asian borrowers, and pay higher median interest rates and total loan costs. It is clear from that data that our economic recovery from the COVID-19 pandemic won’t be robust if it remains uneven for mortgage borrowers of color.”

New Levels of Discrimination? Or Perpetuation of History?
Exploring the history of mortgage lending in the United States reveals that racial discrimination has been a persistent undertone. Housing programs under ‘The New Deal’ in 1933 were forms of segregation: people of color were excluded from new suburban communities and instead placed into urban housing projects. The following year, the Federal Housing Administration (FHA) was established and created a policy known as ‘redlining,’ which furthered segregation by refusing to issue mortgages for properties in or near African-American neighborhoods. While this policy was in effect, the FHA also offered subsidies to builders who prioritized suburban development projects, on the condition that none of those homes be sold to African-Americans (Gross, 2017).

Bias in the Algorithms
Researchers at UC Berkeley’s Haas School of Business discovered that Black and Latino borrowers were charged interest rates 7.9 basis points higher, both online and in person, in 2019 (Public Affairs & Affairs, 2018). Similarly, The Markup explored this bias in mortgage lending and found the following about national loan rates:

Holding 17 different factors steady in a complex statistical analysis of more than two million conventional mortgage applications for home purchases, we found that lenders were 40 percent more likely to turn down Latino applicants for loans, 50 percent more likely to deny Asian/Pacific Islander applicants, and 70 percent more likely to deny Native American applicants than similar White applicants. Lenders were 80 percent more likely to reject Black applicants than similar White applicants. […] In every case, the prospective borrowers of color looked almost exactly the same on paper as the White applicants, except for their race.
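An analysis like the one quoted above (“holding 17 different factors steady”) is typically a logistic regression of denial on a race indicator plus the control factors, with the disparity read off as an adjusted odds ratio. The sketch below illustrates the idea on fully synthetic data; none of the variable names, coefficients, or results are The Markup’s actual code, data, or factor list:

```python
# Illustrative sketch of an "adjusted disparity" analysis on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic underwriting factors (stand-ins for the 17 real controls)
income = rng.normal(0, 1, n)   # standardized income
dti = rng.normal(0, 1, n)      # standardized debt-to-income ratio
black = rng.integers(0, 2, n)  # 1 = Black applicant (synthetic indicator)

# Simulate denials with a built-in disparity: +0.6 log-odds for Black
# applicants with identical income and DTI, so the sketch has a known answer.
logit = -1.0 - 0.8 * income + 0.7 * dti + 0.6 * black
denied = rng.random(n) < 1 / (1 + np.exp(-logit))

# "Holding factors steady" = including them as covariates in the regression
X = np.column_stack([income, dti, black])
model = LogisticRegression(max_iter=1000).fit(X, denied)

# exp(coefficient on the race indicator) = adjusted denial odds ratio
odds_ratio = np.exp(model.coef_[0][2])
print(f"Adjusted denial odds ratio, Black vs. similar White applicants: {odds_ratio:.2f}")
```

An odds ratio near 1.8 here corresponds to the quote’s “80 percent more likely” phrasing; the real analysis controlled for 17 factors across more than two million applications.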

Mortgage lenders approach the digital lending process much as traditional banks do with respect to risk evaluation criteria. These criteria include income, assets, credit score, current debt, and liabilities, among other factors in line with federal guidelines; the Consumer Finance Protection Bureau issued guidelines after the last recession to reduce the risk of predatory lending to consumers. (source) If a potential home buyer does not meet these criteria, they are classified as a risk, and the criteria tend to put people of color at a disadvantage. For example, credit scores are typically calculated from individual spending and payment habits. Rent is usually the largest recurring payment individuals make, but landlords generally do not report rental payments to credit bureaus. According to an article in the New York Times (Miller, 2020), more than half of Black Americans pay rent. Alanna McCargo, vice president of housing finance policy at the Urban Institute, elaborates in the article:

“We know the wealth gap is incredibly large between white households and households of color,” said Alanna McCargo, the vice president of housing finance policy at the Urban Institute. “If you are looking at income, assets and credit — your three drivers — you are excluding millions of potential Black, Latino and, in some cases, Asian minorities and immigrants from getting access to credit through your system. You are perpetuating the wealth gap.” […] As of 2017, the median household income among Black Americans was just over $38,000, and only 20.6 percent of Black households had a credit score above 700.


Remedies for Bias
Potential solutions to reduce hidden bias in the mortgage lending algorithms could include widening the data criteria used for risk evaluation decisions. However, some demographic factors about an individual cannot be considered according to the law. The Fair Housing Act of 1968 states that within mortgage underwriting, lenders cannot consider sex, religion, race, or marital status as part of the evaluation. However, these may be factors by proxy through variables like timeliness of bill payments, a part of the credit score evaluation previously discussed. If Data Scientists have additional data points beyond the scope of the recommended guidelines of the Consumer Finance Protection Bureau, should these be considered? If so, do any of these extra data points include bias directly or by proxy? These considerations pose quite a dilemma for Data Scientists, digital mortgage lenders, and companies involved in credit modeling.
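One way Data Scientists could screen candidate variables for the proxy effects discussed above is to measure how strongly each one predicts a protected attribute before it enters the model. The sketch below is a hypothetical audit on synthetic data; the variable names and the 0.2 flagging threshold are illustrative assumptions, not an industry standard:

```python
# Hypothetical "proxy audit": flag candidate model features that are
# strongly associated with a protected attribute. All data is synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
race = rng.integers(0, 2, n)  # protected attribute (synthetic indicator)

# A candidate feature that secretly tracks race (e.g., a geography-derived score)
zip_score = 0.8 * race + rng.normal(0, 1, n)
# A candidate feature generated independently of race
late_payments = rng.poisson(1.0, n)

def proxy_strength(feature, protected):
    """Absolute Pearson correlation between a feature and a protected attribute."""
    return abs(np.corrcoef(feature, protected)[0, 1])

for name, feat in [("zip_score", zip_score), ("late_payments", late_payments)]:
    r = proxy_strength(feat, race)
    flag = "REVIEW" if r > 0.2 else "ok"
    print(f"{name}: |r| = {r:.2f} -> {flag}")
```

A correlation screen like this is only a first pass; a feature can proxy for race through nonlinear or joint effects that pairwise correlation misses, which is exactly why human review remains part of the answer.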

Another potential solution in the digital mortgage lending process could be the inclusion of a diverse team of loan officers in the final step of the risk evaluation process. Until lenders can place higher confidence in the ability of AI/ML algorithms to reduce hidden bias, loan officers should be involved to ensure fair access for all consumers. Tangentially, alternative credit scoring models that include rental history payments should be considered by Data Scientists at mortgage lenders with digital offerings. By doing so, lenders can create a more holistic picture of potential homeowners’ total spending and payment history. This would allow all U.S. residents the equal opportunity to pursue the American dream of homeownership in a time when working from home is a new reality.


Works Cited

  • Gross, T. (2017, May 3). A ‘forgotten history’ of how the U.S. government segregated America. NPR. Retrieved September 17, 2021, from https://www.npr.org/2017/05/03/526655831/a-forgotten-history-of-how-the-u-s-government-segregated-america.
  • Miller, J. (2020, September 18). Is an algorithm less racist than a loan officer? The New York Times. Retrieved September 17, 2021, from https://www.nytimes.com/2020/09/18/business/digital-mortgages.html.
  • Mortgage refinance loans drove an increase in closed-end originations in 2020, new CFPB report finds. Consumer Financial Protection Bureau. (2021, August 19). Retrieved September 17, 2021, from https://www.consumerfinance.gov/about-us/newsroom/mortgage-refinance-loans-drove-an-increase-in-closed-end-originations-in-2020-new-cfpb-report-finds/.
  • Public Affairs, UC Berkeley. (2018, November 13). Mortgage algorithms perpetuate racial bias in lending, study finds. Berkeley News. Retrieved September 17, 2021, from https://news.berkeley.edu/story_jump/mortgage-algorithms-perpetuate-racial-bias-in-lending-study-finds/.
  • U.S. House Price Index Report 2021 Q2. U.S. House Price Index Report 2021 Q2 | Federal Housing Finance Agency. (2021, August 31). Retrieved September 17, 2021, from https://www.fhfa.gov/AboutUs/Reports/Pages/US-House-Price-Index-Report-2021-Q2.aspx.

Image Sources

  • Picture 1: https://www.dcpolicycenter.org/wp-content/uploads/2018/10/Location_map_of_properties_and_projects-778×1024.jpg
  • Picture 2: https://static01.nyt.com/images/2019/12/08/business/06View-illo/06View-illo-superJumbo.jpg

The Battle Between Corporations and Data Privacy

By Anonymous | September 17, 2021

With each user’s growing digital footprint should come an increase in liability and responsibility for companies. Unfortunately, this isn’t always the case. It’s not surprising that data rights aren’t at the top of the to-do list, given that more data usually goes hand in hand with steeply increasing targeted-ad revenue, conversion rates, and customer-insight sales. Naturally, the question arises: where and how do we draw the line between justifiable company data usage and a data privacy breach?

Preliminary Legal Measures
Measures like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have set a great precedent for trying to answer this question systematically, but they’re far from universally adopted. The former was enacted in the EU and gives residents control over their data: companies must now disclose what information is collected and stored, and customers must provide consent for data to be collected or used for marketing purposes. However, companies have found a loophole by simply creating two streams of data collection (one for the EU and one for countries outside it) instead of changing data policies worldwide. The latter affects only the state of California, and not many other states have followed suit. Since state-by-state policies can greatly complicate compliance and data storage, companies have actually stepped up to influence and speed up these measures.

Big-Tech’s Influence on Legislation
Surprisingly, Big Tech (Facebook, Amazon, Microsoft, Google, Apple, etc.) is on the front lines of pushing states to pass privacy laws, although this is less a heroic act of benevolence than a crafty way to control the stringency of the privacy measures put in place. In fact, Virginia’s recently passed privacy law was reportedly co-authored by Amazon and Microsoft, and it is now under consideration in 14 other states using the same or an even weaker legal framework. These bills are strongly backed by all of Big Tech and are moving quickly through the process thanks to pressure from countless company lobbyists. Their biggest impact is that consumers cannot sue companies for violations of the law. Another key point is that users are opted into tracking by default unless they comb through the settings to opt out. The industry is counting on the idea that if the country is flooded with these weaker state laws, it will essentially be able to disregard harsher state laws like the CCPA. In the figure below, you can see just how much companies are spending on legislation within one state:

Image: The amount of money spent on lobbying in Connecticut by Big-Tech

Good News on the Horizon
However, it’s important to note that this doesn’t mean data privacy is a lost cause or that legislation is ineffective. Indeed, some corporations are taking privacy into their own hands and creating large-scale impact. The most scalable effort is Apple’s: it released a new data privacy measure that requires every user to knowingly opt in or out of data tracking for every single app they use. While this was met with heavy backlash from ad-revenue- and user-data-dependent companies such as Facebook, Apple has remained firm in its decision to mandate user opt-in permission for data tracking. The result: fewer than 33% of iOS users have opted into tracking, a massive hit to the ad-tech industry.
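The key design choice in Apple's measure is that tracking is denied by default: an app may track only if the user has explicitly opted in for that specific app. A toy sketch of that default-deny, per-app consent model (this is an illustration only, not Apple's actual API; the class and app identifiers are hypothetical):

```python
# Toy illustration of per-app, opt-in tracking consent with a
# default of "denied": no recorded choice means no tracking.

class TrackingConsentLedger:
    """Stores each user's per-app tracking choice."""

    def __init__(self):
        self._choices = {}  # app_id -> True (opted in) / False (opted out)

    def record_choice(self, app_id: str, opted_in: bool) -> None:
        self._choices[app_id] = opted_in

    def may_track(self, app_id: str) -> bool:
        # Default is False: absent an explicit opt-in, tracking is denied.
        return self._choices.get(app_id, False)


ledger = TrackingConsentLedger()
ledger.record_choice("app.example.news", True)    # user tapped "Allow"
ledger.record_choice("app.example.games", False)  # user declined tracking

print(ledger.may_track("app.example.news"))     # True
print(ledger.may_track("app.example.games"))    # False
print(ledger.may_track("app.example.weather"))  # False: never asked, so denied
```

The default-deny rule is what makes the policy bite: an advertiser loses access not only to users who said no, but to every user who was never asked.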

Furthermore, as iOS users opt out of tracking, advertisers can no longer bid on them, and the shrinking pool of targetable iOS users has driven up advertising demand for Android users. As a result, Android ad prices are now about 30% higher than those for iOS users, and companies are moving their ads to Android-powered devices. For context, digital-ad agency Tinuiti’s Facebook clients saw year-over-year spending growth for Android users jump from 46% in May to 64% in June, while their iOS spending growth slowed correspondingly, from 42% in May to 25% in June. Despite these drawbacks, this move alone is forcing companies everywhere to change their data-tracking policies: while they may escape state and federal privacy measures, they are being blocked by wide-reaching, platform-based privacy rules.



The intersection of public health and data privacy with vaccine passports

The intersection of public health and data privacy with vaccine passports
By Anonymous | September 17, 2021

Countries, states, and cities are implementing and enforcing vaccine passports. Vaccine passports are meant to give individuals greater protection against the spread of COVID-19; however, that safety comes with concerns over data privacy and how the underlying data is protected. On the one hand, vaccine passports supply a universal, standardized way to ensure individuals are vaccinated before entering high-exposure settings, such as air travel and large indoor gatherings. On the other, that standardization brings data privacy risks and concerns with respect to the Fair Information Practice Principles.

Return to Normalcy
Since the beginning of the pandemic, travel and tourism have declined due to legal restrictions coupled with people’s fear of contracting the virus while traveling. Vaccine passports give individuals the relief of knowing that others around them are vaccinated, while businesses get an opportunity to attract more customers. The chart on the left illustrates the dip in tourism and air travel during the pandemic, while the chart on the right shows the global recognition of multiple vaccines. Together they indicate that several vaccines recognized around the world could anchor vaccine passports, and that the travel and tourism industries would benefit from such programs.

Businesses are not the only ones who would benefit from returning customers; unemployed workers would as well. Vaccine passports would trigger an increase in customer activity, which in turn increases businesses’ need to hire. The image below visualizes the hardest-hit sectors by change in employment; the largest declines were in industries that rely on large gatherings and crowds. Thus, if vaccine passports can bring us back to normalcy faster, businesses can recover faster and more people can be re-hired.

Transparency
The European Union is rolling out a vaccine passport whose data privacy is grounded in the GDPR framework, drawing in particular on the GDPR’s transparency, purpose limitation, and security principles. Under the GDPR, individuals must be given detailed information on how their data will be collected, used, and maintained; the data may be used only for the stated purpose; and it must be “processed in a manner that ensures appropriate security of the personal data” (ICO 2018). However, this applies only to individuals holding the EU vaccine passport. Other countries, Malaysia and China for example, do not use the GDPR as the basis for data transparency in their vaccine passports, raising concerns about how the data could be used for other purposes post-pandemic. A vaccine passport gives businesses and governments transparency into vaccination status; participating individuals should receive the same level of transparency into how their data will be stored and for what specific purposes.

Individual Participation & Purpose Specifications
Individual participation comes into question when countries and governments require vaccine passports for indoor dining, large indoor events, and similar activities, effectively forcing participation. If individuals want to attend such events, they must enroll in a vaccine passport system. This forced consent, trading personal data for access to everyday activities, raises an ethical dilemma: which other activities might soon require a vaccine passport and put personal data privacy at risk? In addition, because the pandemic continues to fluctuate, the term length of vaccine passports is unknown, which conflicts with the purpose specification principle (clearly stated uses of the collected data). Individuals who provide personal information for entry into a vaccination program may never know how long their data will be kept, since its use case can be extended indefinitely if never retired.
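One way to make the purpose-specification concern concrete is to imagine what a well-behaved passport system would have to store alongside each record: the specific permitted purpose and an explicit retention deadline. A minimal sketch, assuming a hypothetical consent-record design (none of these field names come from any real passport system):

```python
# Hypothetical consent record illustrating the purpose specification
# principle: data use is permitted only for the stated purpose and
# only until an explicit retention deadline.

from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class ConsentRecord:
    subject_id: str
    allowed_purpose: str  # the one purpose the individual consented to
    expires_on: date      # retention term limit; no open-ended storage

    def permits(self, purpose: str, on: date) -> bool:
        # A use is allowed only if it matches the stated purpose
        # and occurs before the retention deadline.
        return purpose == self.allowed_purpose and on <= self.expires_on


record = ConsentRecord(
    subject_id="person-123",
    allowed_purpose="entry screening for indoor events",
    expires_on=date(2022, 6, 30),
)

print(record.permits("entry screening for indoor events", date(2021, 10, 1)))  # True
print(record.permits("marketing analytics", date(2021, 10, 1)))                # False: different purpose
print(record.permits("entry screening for indoor events", date(2022, 7, 1)))   # False: past retention limit
```

The point of the sketch is what the current programs lack: without a recorded purpose and expiry, there is nothing in the data itself that stops post-pandemic reuse or indefinite retention.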

Accountability and Auditing
With the United States rejecting a federal vaccine passport, states, cities, and private entities have developed and instituted their own vaccine approval programs. This uncoordinated effort, rather than a single standardized U.S. program, raises accountability and auditing problems: ensuring proper training for everyone involved in collecting, processing, and storing the data. States and cities may have training programs for data collection, but private entities trying to rebound from a tough 2020 economic dip may lack the resources and time to train their employees and contractors in proper data privacy practices. Therefore, absent a nationwide federal program, individuals who consent to providing their data for proof-of-vaccination certification risk having that data handled by people with little data privacy training.

Vaccine passports have great potential in limiting the spread of the virus by giving individuals and organizations visibility and assurance of vaccination status for large groups. However, vaccine certification programs need to give individuals transparency into the clear and specific uses of their information, provide term limits for purpose specifications, and ensure that people who will be collecting, using, and storing the data are properly trained in data privacy practices. If these concerns are addressed, then we could see more adoption of vaccine passports to combat the spread of the virus. If not, then individuals’ mistrust of data privacy will persist and returning back to normalcy may take longer than hoped.

  • Baquet, D. (2021, April 22). The controversy over vaccination passports. The New York Times. Retrieved September 13, 2021, from https://www.nytimes.com/2021/04/22/opinion/letters/covid-vaccination-passports.html.
  • BBC. (2021, July 26). Covid passports: How do they work around the world? BBC News. Retrieved September 14, 2021, from https://www.bbc.com/news/world-europe-56522408.
  • Martin, G. (2021, April 28). Vaccine passports: Are they legal-or even a good idea? UC Berkeley Public Health. Retrieved September 14, 2021, from https://publichealth.berkeley.edu/covid-19/vaccine-passports-are-they-legal-or-even-a-good-idea/.
  • Schumaker, E. (2021, April 10). What to know about COVID-19 vaccine ‘passports’ and why they’re controversial. ABC News. Retrieved September 14, 2021, from https://abcnews.go.com/Health/covid-19-vaccine-passports-controversial/story?id=76928275.
  • The principles. ICO. (2018, May 25). Retrieved September 14, 2021, from https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/.
  • Turner-Lee, N., Lai, S., & Skahill, E. (2021, July 28). Vaccine passports underscore the necessity of U.S. privacy legislation. Brookings. Retrieved September 14, 2021, from https://www.brookings.edu/blog/techtank/2021/06/28/vaccine-passports-underscore-the-necessity-of-u-s-privacy-legislation/.