Privacy Computing
By Anonymous | October 29, 2021

The collection, use, and sharing of user data can help companies better judge users’ needs and provide better services to customers. From the perspective of contextual integrity [1], all of the above can be reasonable. However, studies of privacy taxonomies [2] and multi-dimensional privacy analysis [3] show that the processing and sharing of user data carry many privacy risks, such as data abuse, third-party leakage, and data blackmail. Because enterprises and institutions must protect both the commercial value of their data and the privacy authorizations granted by users, data is stored in separate silos and is difficult to connect and exchange effectively. Traditional commercial agreements cannot effectively protect the security of data: once raw data leaves the database, it faces the risk of falling completely out of control. A typical negative example is the Facebook–Cambridge Analytica scandal. The parties had an agreement under which data on tens of millions of Facebook users was shared with Cambridge Analytica for academic research [4]. Once the raw data was released, however, it was completely out of control and was used for non-academic purposes, ultimately exposing Facebook to huge fines. A more secure solution is needed at the technical level to ensure that data usage rights remain subdivided and enforceable as data circulates and is used collaboratively.

“Privacy computing” is a new computing theory and methodology for protecting private information across its entire life cycle [5]. It spans models of privacy leakage, privacy protection, and privacy computation, together with an axiomatic system for separating data ownership from the right to use data, so that information can be protected while it is being used. In essence, privacy computing aims to enable data services such as data circulation and data application on the premise that data privacy is protected. The concept is often summarized in slogans such as “data is available but not visible; the data does not move, the model moves”, “data is available but invisible; data is controllable and measurable”, and “share the value of data, not the data itself”. The main privacy computing technologies on the market fall into three categories: secure multi-party computation, trusted hardware, and federated learning.

Federated learning is a distributed machine learning technology and system involving two or more participants. These participants conduct joint machine learning through a secure algorithmic protocol: they jointly build models, and provide model inference and prediction services, by exchanging only intermediate data (such as model updates) rather than raw data. The intermediate data can itself be protected, for example with homomorphic encryption, which allows algebraic operations to be performed on ciphertexts such that the decrypted result matches the result of performing the same operations on the plaintext. A model trained this way performs almost the same as a traditional, centrally trained machine learning model, as shown in Fig.1.
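To make the mechanics concrete, below is a minimal Python sketch of FedAvg-style federated training on synthetic data. Everything here (the data, learning rate, and number of rounds) is illustrative rather than a production protocol, and it omits the encryption of intermediate results described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: three participants each hold private (X, y) data for a shared
# linear model. Raw data never leaves a participant.
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

def local_update(w, X, y, lr=0.05, steps=10):
    """Run a few gradient steps on one participant's private data."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w  # only the updated weights (intermediate data) leave the client

w = np.zeros(3)  # global model held by the coordinator
for _ in range(20):
    # Each participant trains locally; the coordinator averages the results.
    w = np.mean([local_update(w, X, y) for X, y in clients], axis=0)

print(w)  # close to true_w, even though no raw data was ever pooled
```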

Secure multi-party computation is a technology and system that allows participants to jointly and safely compute an agreed function without sharing their own data and without relying on a trusted third party. Through secure algorithms and protocols, participants encrypt or transform their plaintext data before providing it to other parties. No participant can access another party’s data in plaintext, which keeps every party’s data secure, as shown in Fig.2.
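As a small illustration of the idea, here is a Python sketch of additive secret sharing, one of the simplest building blocks of secure multi-party computation. The scenario, party count, and numbers are invented for the example:

```python
import secrets

P = 2**61 - 1  # public prime modulus; all arithmetic is done mod P

def share(value, n_parties):
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three parties want the total of their private values without revealing them.
private_values = [120, 75, 240]
n = len(private_values)

# Party i sends its j-th share to party j. Any single share is uniformly
# random, so it reveals nothing about the value it came from.
all_shares = [share(v, n) for v in private_values]

# Each party locally sums the shares it received...
partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]

# ...and publishing only the partial sums reveals the total, nothing more.
total = sum(partial_sums) % P
print(total)  # 435 == 120 + 75 + 240
```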

Trusted computing starts by establishing a security root of trust, and then builds a chain of trust from the hardware platform through the operating system to the application system. Along this chain, starting from the root, each level measures and verifies the next level before trusting it, extending trust step by step and thereby constructing a safe and trustworthy computing environment. A trusted computing system consists of a root of trust, a trusted hardware platform, a trusted operating system, and trusted applications. Its goal is to improve the security of the computing platform.
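The chain-of-trust measurement can be sketched in a few lines. The following Python snippet loosely mimics how a TPM extends a platform configuration register: each stage is hashed into a running register before control passes to it, so tampering with any stage changes every subsequent value (the stage names and contents are, of course, illustrative):

```python
import hashlib

def extend(register, measurement):
    """TPM-style extend: new register value = H(old register || measurement)."""
    return hashlib.sha256(register + measurement).digest()

# Each boot stage is measured (hashed) before control is handed to it.
boot_chain = [b"firmware image", b"bootloader image", b"os kernel image"]

register = b"\x00" * 32  # the root of trust starts from a known state
for stage in boot_chain:
    register = extend(register, hashlib.sha256(stage).digest())

# A verifier who knows the expected images can recompute this value; any
# modified stage yields a different final register.
print(register.hex())
```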

With increasing attention from many fields, privacy computing has become a hot emerging technology and a competitive track for businesses and capital. Data circulation is a key link in releasing the value of data, and privacy computing provides a technical solution for it. Privacy computing has clear advantages and broad application space, but because the technology is still maturing, it also faces problems. Whether through engineering breakthroughs or through optimization and co-design of software and hardware, the performance of privacy computing still has a long way to go.

References:
[1] Helen Nissenbaum, “Privacy as Contextual Integrity”, Washington Law Review, Vol. 79, No. 1, Symposium: Technology, Values, and the Justice System, Feb 1, 2004.
[2] Daniel J. Solove, “A Taxonomy of Privacy”, University of Pennsylvania Law Review, Vol. 154, No. 3, pp. 477-564, 2006. doi.org/10.2307/40041279.
[3] Deirdre K. Mulligan, Colin Koopman, and Nick Doty, “Privacy is an essentially contested concept: a multi-dimensional analytic for mapping privacy”, Phil. Trans. R. Soc. A, 374: 20160118, 2016. doi.org/10.1098/rsta.2016.0118.
[4] Nicholas Confessore (April 4, 2018). “Cambridge Analytica and Facebook: The Scandal and the Fallout So Far”. The New York Times. ISSN 0362-4331. Retrieved May 8, 2020.
[5] F. Li, H. Li, B. Niu, and J. Chen, “Privacy Computing: Concept, Computing Framework, and Future Development Trends”, Engineering, Vol. 5, pp. 1179-1192, 2019.

Alternative credit scoring – a rocky road to credit
By Teerapong Ninvoraskul | October 29, 2021

Aimee, a food truck owner in the Philippines, was able to expand her business after getting access to a loan. She opened a second business selling beauty products on the side. Stories like Aimee’s are common in the Philippines, where 85% of the Filipino population is outside of the formal banking system.

“Aimee makes money, she’s clearly got an entrepreneurial spirit, but previously had no way of getting a formal bank to cooperate,” said Shivani Siroya, founder and CEO of Tala, a fintech company providing financial access to individuals and small businesses.

Loan providers using alternative credit scoring, like Tala, are spreading fast through developing countries. In just a few years, China’s Ant Financial, an affiliate of Alibaba Group, has built up an extensive scoring system, called Zhima Credit (or Sesame Credit), covering 325m people.

Alternative credit scoring can be viewed as a development in building loan-default prediction systems. Unlike the traditional credit scoring system, which estimates a consumer’s probability of default using financial information such as payment history, alternative scoring models use behavior on the Internet to predict default rates.

Personal information such as email address, devices used, time of day when browsing, IP address, and purchase history is collected. These data points have been found to correlate with loan default rates.
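In spirit, such a scorer is just a classifier trained on behavioral features. Here is a toy Python sketch with entirely synthetic data and invented feature names; real systems reportedly use thousands of such signals:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical behavioral features per applicant:
# [share of browsing at night, device age in years, shopping sessions/week]
X = rng.normal(size=(1000, 3))

# Synthetic "ground truth": default probability weakly tied to the features.
p_default = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3)))
y = rng.random(1000) < p_default  # observed defaults

model = LogisticRegression().fit(X, y)

new_applicant = [[1.2, -0.3, 0.7]]
print(model.predict_proba(new_applicant)[0, 1])  # estimated default risk
```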


Alternative credit scoring

Financial access for the unbanked
Historically, lower-income consumers have been a market segment too costly for traditional banking to serve, given small ticket sizes, the expensive infrastructure investment required, and high default rates.

For this market segment, traditional credit-scorers have limited data to work with. They can use payment records for services that are provided first and paid later, such as utilities, cable TV, or internet. Such proven payment data is a good guide to default risk in the absence of credit history. In most cases, this specialized score is the only possible channel for getting credible scores for consumers who are un-scorable based on traditional credit data alone.

In smaller and poorer countries with no financial infrastructure, credit-scorers have even more limited financial data to work with. Utilities are registered to households, not individuals, if they are registered at all. Thanks to high penetration of pay-as-you-go mobile phones among the poor, rapidly emerging alternative lenders are able to look at payment records for mobile phones.

A new breed of startups has spotted the opportunity to bring these data-driven, algorithm-based approaches to individuals and small businesses. Tala, which operates in India, Mexico, the Philippines, and East Africa, says it uses over 10,000 data points collected from a customer’s smartphone to determine whether to grant a loan. It has lent more than $2.7 billion to over 6 million customers since 2014.

With an inexpensive cost structure and lower loan default rates, these fintech startups achieve attractive investment returns and are able to provide cost-efficient financing to the previously unbanked and underbanked.

Lack of transparency & fairness challenges
Despite its benefits in expanding financial inclusion, alternative credit scoring presents new challenges around transparency and fairness.

First, alternative scores are harder to explain to people seeking credit than traditional scores. While consumers generally have some sense of how their financial behavior affects their traditional credit scores, it is not always apparent to consumers, or even to regulators, what specific information a given alternative scoring system uses, how that use affects a consumer’s ability to secure a loan or its pricing, and what behavioral changes consumers might make to improve their credit access and pricing.

The difficulty of explaining alternative scores is amplified by their secretive “black box” nature: the scoring systems serve as competitive edges against rival lenders in producing better default predictions, which discourages disclosure.

Second, improving one’s own credit standing is more difficult. Traditional credit scoring is heavily influenced by a person’s own financial behavior, so there are clear, targeted actions one can take to improve one’s standing, e.g., punctual monthly mortgage payments.

However, much alternative data may not be related to a person’s own financial conduct, putting it beyond consumers’ control to positively influence their scores. For example, it would be very difficult to take action to positively influence a scoring system that uses your social media profile, where you attended high school, or where you shop to determine your creditworthiness.

Third, big data can contain inaccuracies and biases that might lead to discrimination against low-income applicants, thereby failing to provide equitable opportunity for the underserved population.

Using some alternative data, especially data about a trait or attribute that is beyond a consumer’s control to change, even if not illegal to use, could harden barriers to economic and social mobility, particularly for those currently outside the financial mainstream. For example, landlords often do not report the rental payments that millions of people make on a regular basis, including more than half of Black Americans.

Predicting the predictors

The ultimate goal of an alternative scoring system is to predict the likelihood of timely payment, which is exactly what the predicting factors in the traditional FICO scoring system are built around. One could argue that alternative scoring is simply an attempt to use correlations between non-traditional characteristics and payment history to arrive at a creditworthiness prediction.

It is arguable whether these alternative models can match the predictive power of actual financial records, and whether alternative scoring is simply a transitional road to the traditional model while financial payment records remain unavailable for the underserved population.

References:

  • www.economist.com/international/2019/07/06/a-brief-history-and-future-of-credit-scores
  • Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues (FTC Report)
  • Mikella Hurley & Julius Adebayo, “Credit Scoring in the Era of Big Data”, 18 Yale J.L. & Tech. 148 (2016)
  • Is an Algorithm Less Racist Than a Loan Officer?, The New York Times, Sep 2020
  • What Your Email Address Says About Your Credit Worthiness, Duke University’s Fuqua School of Business, Sep 2021
  • Data Point: Credit Invisibles, The Consumer Financial Protection Bureau Office of Research
  • On the Rise of FinTechs – Credit Scoring Using Digital Footprints
  • Zest AI Comments on the Federal Guidance for Regulating AI
  • Memorandum for the Heads of Executive Departments and Agencies, from Russell T. Vought, Acting Director

The New Need to Teach Technology Ethics
By Tony Martinez | October 29, 2021

The Hippocratic oath was written in the 5th century BC, with one of the first lines stating: “I will use those dietary regimens which will benefit my patients according to my greatest ability and judgement, and I will do no harm or injustice to them.”1 Iterations of this oath have been adopted and updated for use in medicine and other industries, with the main purpose of stating: do no harm. In these industries the onus of the oath falls on the industry, not on the patients or users. Is it time now for technology companies to take a similar oath?

Discussion:
Like many people, I use a plethora of applications and websites for things like mobile banking, my daily work, or the occasional dog video. In doing this I blindly accept terms of service and cookie policies, and even share data such as my email for more targeted advertisements. Then I took W231, “Behind the Data: Humans and Values,” at the University of California, Berkeley, and was tasked with reviewing these terms and understanding them. It was here that, even as a master’s-level student, I was frustrated and unable to grasp some of the concepts companies discussed in their terms of service. So how can we expect the 88.75% of US households with social media accounts to navigate such technical legalese?

With the average reading level in the United States being slightly over an 8th grade…

…the onus to protect the users of an application is shifting to the developers. As this shift occurs, and as public outcries continue over data breaches and research like the Facebook contagion study, we must ask whether these developers have the tools to make ethical choices, and whether companies should require them to be better trained to think through all the ethical implications.

These ethical issues are not new to technology or Silicon Valley. Evidence can be found in the founding of the Markkula Center in 1986, whose purpose was to give Silicon Valley decision makers the tools to practice ethics when making decisions. The founder of the center, former Apple chairman Mike Markkula Jr., created it after he felt “[it was clear]…that there were quite a few people who were in decision-making positions who just didn’t have ethics on their radar screen.” To him, it was not that decision makers were being unethical, but that they didn’t have the tools needed to think ethically. The center now provides training to companies on technology, AI, and machine learning. This has led larger companies like Google to send a number of employees to train at the Markkula Center, which in turn helped Google develop a fairness module to train developers on the notions of fairness and ethics. More importantly, after its creation Google made the module publicly available, as it felt the onus of protecting the users of its virtual world fell on the system developers. Google’s fairness module even signals this by stating: “As ML practitioners build, evaluate, and deploy machine learning models, they should keep fairness considerations (such as how different demographics of people will be affected by a model’s predictions) in the forefront of their minds.”

It is clear from Google’s stance, and from the growing course work at some public universities, that an oath of no harm is needed in technology and is making its way into the education of developers. Such a large paradigm shift regarding ethics shows the increasing importance for these companies of training their employees. The public now expects them not only to state their ethical views but to prove them with actions; by making items like the fairness module publicly available, they lay the groundwork for eventually making such training mandatory in the technology sector and for developers.

References:
1. National Institute of Health. (2012, February 07). Greek Medicine: “I Swear by Apollo Physician …” Greek Medicine from the Gods to Galen. www.nlm.nih.gov/hmd/greek/greek_oath.html
2. Statista Research Department. (2021, June 15). Social media usage in the United States – Statistics & Facts. www.statista.com/topics/3196/social-media-usage-in-the-united-states/#dossierKeyfigures
3. Wriber. (Accessed on 2021, October 27). A case for writing below a grade 8 reading level. wriber.com/writing-below-a-grade-8-reading-level/
4. Kinster, L. (2020, February). “Ethicists were hired to save tech’s soul. Will anyone let them?”. www.protocol.com/ethics-silicon-valley
5. Kleinfeld, S (2018, October 18). “A new course to teach people about fairness in machine learning”. www.blog.google/technology/ai/new-course-teach-people-about-fairness-machine-learning/

Are the kids alright?
By Anonymous | October 29, 2021

Today 84% of teenagers in the US own a cellphone. Further, teens spend an average of 9 hours per day online. While half of parents with teenagers aged 14 to 17 say they are “extremely” or “very aware” of what their kids are doing online, only 30 percent of teens say their parents are “extremely” or “very aware” of what they’re doing online. There are plenty of books, resources, and programs/applications to help parents track what their teens are doing online. In truth, however, there are just as many ways for kids to get around these types of controls.

This is even more disturbing when we consider that the privacy policies of many companies only protect children 13 and under, and do not apply to teenagers. This means that teens are treated as adults when it comes to privacy. For example, TikTok, the number one app used by teenagers in the US today, states the following in its privacy policy:

By contrast here is an excerpt from TikTok’s privacy policy for children under 13. It states clear retention and deletion processes.

While teens may be fine sharing their data with TikTok in what feels like a friendly community, they may not realize how many partners TikTok is sharing their data with. This list of partners includes ones that we might expect like payment processors, but it also includes advertising vendors that might be less expected/desirable.

In turn, each of these partners has their own data handling, retention, sharing, privacy and deletion policies and practices that are completely unknown to TikTok users.

What about the government?
While we might expect private corporations to do what is in their own best interests, even Congress has been slow to protect the privacy of teens. This week the Congressional subcommittee on Consumer Protection, Product Safety and Data Security questioned policy leaders from TikTok and Snap about the harmful effects of social media on kids and teens.

While these types of investigations are necessary and increase visibility into these companies’ opaque practices, the bottom line is that there are no formal protections for teens today. The Children’s Online Privacy Protection Act (COPPA), enacted in 1998, does impose certain restrictions on websites targeted at children, but only protects children 13 and under. The bill S.1628, which seeks to amend COPPA to extend protections to teenagers, was only introduced in May of this year. Additionally, the Kids Internet Design and Safety (KIDS) Act, proposed last month, would protect the online safety of children under 16. However, all of this is still under discussion – nothing has been approved.

What about protections such as GDPR and CCPA?
The General Data Protection Regulation (GDPR), which went into effect in Europe in 2018, was enacted to give European citizens more control over their data. It includes the “right to be forgotten,” which states:

“The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay” if one of a number of conditions applies. “Undue delay” is considered to be about a month.

Similarly, in the US, California has enacted the California Consumer Privacy Act (CCPA), which went into effect in 2020 and extends similar protections to California residents. While it is likely that many other states will follow suit, companies can interpret their implementation of these regulations as they see fit, and many are still figuring out exactly how to implement these policies tactically in their organizations. Until then, teens will continue to create a digital footprint and audit trail that could follow them for many years into the future.

How do we move forward?
As we have seen, there are many places where privacy protections for teens break down. They are legally children, but have none of the protections that kids should have. Google this week announced that children (persons under the age of 18), or adults on their behalf, can request that photos of them be removed from the search engine. This is a step in the right direction, but we need more. We need governmental agencies to move more quickly to enact legislation that provides stronger, explicit protections for teens, so that their privacy is not dictated by the whims of online companies – we owe them that much.

Sources:
“It’s A Smartphone Life: More Than Half Of US Children Now Have One.” 31 Oct. 2019, www.npr.org/2019/10/31/774838891/its-a-smartphone-life-more-than-half-of-u-s-children-now-have-one. Accessed 7 Oct. 2021.
“How much time does a teenager spend on social media?.” 31 May. 2021, www.mvorganizing.org/how-much-time-does-a-teenager-spend-on-social-media/. Accessed 25 Oct. 2021.
“Think You Know What Your Teens Do Online? – ParentMap.” 16 Jan. 2018, www.parentmap.com/article/teen-online-digital-internet-safety. Accessed 25 Oct. 2021.
“Text – S.1628 – 117th Congress (2021-2022): Children and Teens ….” www.congress.gov/bill/117th-congress/senate-bill/1628/text. Accessed 7 Oct. 2021.
“Google now lets people under 18 or their parents request to delete ….” 27 Oct. 2021, techcrunch.com/2021/10/27/how-to-delete-your-kids-pictures-google-search/. Accessed 28 Oct. 2021.

Trends in Modern Medicine and Drug Therapy
By Anonymous | October 11, 2021

The prescription drug industry has been a constant headline over the past decade for a variety of reasons. Opioid addiction is probably the most prominent, drawing attention to the negative aspects of prescription drug abuse. A current topic in Congress is prescription drug costs, a large issue for demographics unable to access drugs essential to their well-being. Overshadowed are discussions of the benefits of drug therapy and the opportunities for advancement in the medical field through research and a combination of modernized and alternative methodologies.

Three interesting methodologies and fields of research that overlap with drug therapy are personalized medicine, patient engagement, and synergies between modern and traditional medicine. Interestingly, data collection, data analytics, and data science are important components of each. Below is a quick synopsis of these topics, including some of the opportunities and challenges of integrating data into the research. I list the research papers I reviewed at the end.

Patient engagement, defined broadly, is the practice of the patient being involved in decision making throughout their treatment. A key component of patient engagement is education in various aspects of one’s own personal health and the treatment options available. A key benefit is the collection of better treatment intervention and outcome data.

One of the primary considerations in pursuing a treatment option is that the benefits outweigh the risks (FDA). Patients who take an active role in their treatment and are more aware of the associated risks, such as weight gain, are naturally better able to minimize the negative effects. Another benefit of patient engagement is better decision making with respect to lifestyle changes such as having children.

Patient engagement also creates the opportunity to gather better data through technological advances in smartphones and apps, which allow patients to enter data or collect it through automatic sensors. Social media data is actually a common data source today, and it is tough to argue that patient-provided data is not a better alternative.

Traditional medicine, also known as alternative medicine, comprises practices developed by indigenous cultures that rely on natural products and therapies to provide health care. Two examples are Traditional Chinese Medicine and the Ayurveda of India. For the purposes of this discussion, I would broaden the field to include the evolving use of natural products such as CBDs and medicinal marijuana.

While the efficacy of alternative medicine is debated, it can probably be agreed that components of traditional medicine can provide practical benefits to modern health care. One of the main constraints on identifying these components is access to data. In the case of Ayurveda, one researcher has proposed a data-gathering framework combining a digital web portal, operational training of practitioners, and leveraging India’s vast infrastructure of medical organizations to gather and synthesize data (P. S. R. K. Haranath). As the developed world becomes more comfortable with alternative medicine, these types of data collection frameworks will be critical to formalizing treatments.

Personalized medicine is the concept of medicine tailored to the individual rather than a one-size-fits-all approach (Bayer). Its complex scientific framework relies on biomarkers, biogenetics, and patient stratification to develop targeted treatments for individual patients.

Data analytics and data professionals will play a vital role in the R&D of personalized medicine and in the pharmaceutical industry in general. Operationalized data is a key component of the research methodologies. Many obstacles exist with clinical data, including the variety of data sources, types, and terminology; siloed data across the industry; and data privacy and security. Frameworks are being developed to bring more data uniformity, and promising efforts are being made to share data across organizations. With operationalized data, advanced predictive and prescriptive analytics can be conducted to develop customized treatments and decision support (Peter Tormay). Although complex, continued progress in research and application of data analytics will hopefully lead to incremental innovations in medical treatment.

The broader purpose of this discussion is to bring awareness and advocacy to these fields of research, as healthcare data is a sensitive topic for patients. The opportunities with respect to data are also highlighted to help build confidence in the prospect of jobs in data engineering, data analytics, and data science in medicine. Hopefully, the long-term results of this medical research will be to provide patients with more and better treatment options, increase treatment effectiveness and long-term sustainability, lower costs, and increase availability.

Resource Materials:

Pharmaceutical Cost, R&D Challenges, and Personalized Medicine
Ten Challenges in Prescription Drug Market – Cost <www.brookings.edu/research/ten-challenges-in-the-prescription-drug-market-and-ten-solutions/>
Big Data in Pharmaceutical R&D: Creating a Sustainable R&D Engine <link.springer.com/article/10.1007/s40290-015-0090-x>, Peter Tormay
Bayer’s Explanation of Personalized Medicine <www.bayer.com/en/news-stories/personalized-medicine-from-a-one-size-fits-all-to-a-tailored-approach>

Patient Engagement and Centricity
Making Patient Engagement a Reality <www.ncbi.nlm.nih.gov/pmc/articles/PMC5766722/>
Think It Through: Managing the Benefits and Risks of Medicines <www.fda.gov/drugs/information-consumers-and-patients-drugs/think-it-through-managing-benefits-and-risks-medicines>
Patient Centricity and Pharmaceutical Companies: Is It Feasible? <journals.sagepub.com/doi/full/10.1177/2168479017696268>

Traditional and Alternative Medicine
The Traditional Medicine and Modern Medicine from Natural Products <www.ncbi.nlm.nih.gov/pmc/articles/PMC6273146/>
Role of pharmacology for integration of modern medicine and Ayurveda <www.ncbi.nlm.nih.gov/pmc/articles/PMC4621664/>, P. S. R. K. Haranath
Number of States with Legalized Medical Marijuana <www.insidernj.com/press-release/booker-warren-call-doj-decriminalize-cannabis/>

Prescription Drug Stats
hpi.georgetown.edu/rxdrugs/
www.cdc.gov/nchs/data/hus/2019/039-508.pdf

Images
5 elements of successful patient engagement <hitconsultant.net/2015/07/17/5-elements-of-successful-patient-engagement/#.YWUB3bhKhyw> – HIT Consultant News
Personalized Medicine Image <blog.crownbio.com/pdx-personalized-medicine>

Disparities faced by people with sickle cell disease: The case for additional research funding.
By Anonymous | September 22, 2021

September is Sickle Cell Awareness Month! How aware are you?
Sickle cell disease (SCD) is a group of inherited red blood cell disorders that lead to sickle-shaped red blood cells. These red blood cells become hard and sticky, have a decreased ability to take up oxygen, and have a short life span in the body. As a result, people with SCD suffer many lifelong negative health outcomes and disability, including stroke (even in children), acute and chronic pain crises, mental health issues, and organ failure.


Sickle cell disease clinical complications. Source: Kato, G., Piel, F., Reid, C. et al. Sickle cell disease. Nat Rev Dis Primers 4, 18010 (2018). doi.org/10.1038/nrdp.2018.10

The issues unfortunately do not end there.

In the U.S., 90% of people living with SCD are Black/African-American and suffer significant health disparities and stigmatization. SCD patients frequently describe poor interactions with the healthcare system and its personnel. SCD pain crises (which can be more intense than childbirth) require strong pain relievers such as opioids. Unfortunately, few medical facilities are trained in managing SCD pain crises or other chronic issues related to SCD. There is a serious lack of respect for persons: people coming in with SCD pain crises are often treated as though they are drug seekers while being denied the care they need. Black people have also expressed substantial mistrust of the U.S. healthcare system, given past offenses in recent history (e.g., the lack of consent given to Black participants in the Tuskegee Syphilis Study).

In fact, current estimates place the median life expectancy of people with SCD at 42–47 years (2016 estimate). Even though SCD is a rare disease in the U.S. (affecting approximately 90,000-100,000 people), over $811.4 million in healthcare costs can be attributed to the 95,600 hospital stays and readmissions for sickle cell disease (2016 estimate).


Sickle cell disease-related hospital readmission is high in the U.S. Source: AHRQ News Now, September 10, 2019, Issue #679

Why is this happening??
There is not enough research or education surrounding SCD. Sickle cell advocacy groups and networks exist, but they are underfunded, and there is less interest from the public compared to other health issues. For example, Farooq et al. (2020) compared the support that goes into SCD with that for cystic fibrosis (approximately 30,000 people in the U.S.). They found that U.S. government research funding averaged $812 per person affected for SCD (which predominantly affects Black people), compared to $2,807 per person affected for cystic fibrosis (which predominantly affects Caucasians). What about funding from private foundations? On average, $102 per person for SCD and $7,690 per person for cystic fibrosis.

The result?
There is a lack of research and data surrounding SCD, potential new treatments, factors that affect SCD, etc. To improve quality of care, we need to understand the complex intersectionality at play with SCD: how do available treatments vary by SCD genotype? By sexual/gender minority group? By region? Without such information, there is substantial uncertainty that no data model can make up for, and residuality occurs when people with SCD are simply left out of the decisions being made. Without additional funding, a lack of training in SCD will persist, with the hidden biases of health care workers negatively affecting the patient-provider experience. Even among well-educated researchers, there is a notion that SCD has been “cured” and is no longer a priority. Unfortunately, very few people meet the criteria to undergo gene therapy (they must have an exact match, a specific SCD genotype, etc.), and it is an extremely expensive, risky procedure with a risk of death.

Will more funding for SCD research and advocacy really help? Well, in the same time period, despite over 70% more research publications for SCD than for cystic fibrosis, SCD had one FDA-approved drug between 2008-2018 (to serve a population of 90,000) while cystic fibrosis had four (to serve a population of 30,000).

How can we do better?
Increased advocacy for SCD as a priority, and overall awareness among lawmakers, researchers, and the general population, is important. Given the huge disparity in private funding between SCD and cystic fibrosis, Farooq et al. suggest that increased philanthropy through private foundations may be a way to improve SCD advocacy and research. Consider donating to established SCD foundations, or explain to your public service officials what SCD is and why it’s important. Share materials such as those from World Sickle Cell Day or Sickle Cell Awareness Month (September). Support a summer camp for kids with SCD, where they can have fun and be further empowered while living with SCD. Everything counts. Together, we can reduce disparities and improve health outcomes for people with SCD.

Swiping Right but for Who?: When dating meets data
By Elaine Chang | September 22, 2021

Online dating is a mainstay of modern culture. A study from Stanford found that around 40% of American couples initially met online, the most common medium of matchmaking. The act of swiping has been elevated into its own set of idioms in the English language and culture: “swipe left” signals declining a potential match, “swipe right” indicates accepting one. And there are many online dating apps to choose from, depending on who you ask and what experience you may be looking for.


There is a striking shift in the matchmaking medium as American couples increasingly meet via online dating apps and websites. We should note 1) many changes from 1995 to 2017 enabled online dating not previously possible, and 2) this study captures only a slice of what dating looks like (American, heterosexual, technologically able)

The pandemic has only increased the use of these handheld matchmaking services – Tinder recorded its highest number of swipes on a single day, 3 billion, in March 2020, while OkCupid saw a 700% increase in dates from March to May 2020. Individuals typically assume that these interactions – whether direct or indirect – are private. It’s a digital space for individuals to find potential matches and get to know each other beyond the standard profile fields.


Online dating apps often ask personal questions as part of a user’s profile and offer features such as chatting via the app

And while this advent of options and opportunity has undoubtedly stirred love in the digital airwaves, recent reports highlight a growing concern about what else is being aired out in the name of love. A New York Times investigation found disturbing details: when its researchers tested the Grindr Android app, the app shared specific latitude and longitude coordinates with five companies. They also found that the popular app OkCupid sent a user’s ethnicity and answers to profile questions, such as whether they had used psychedelic drugs, to a firm that helps companies tailor marketing messages to users. A Nightly News segment highlighted similarly disturbing practices on the dating apps Hinge and Bumble. Hinge sent information on all of its users to Facebook, whether they logged in via Facebook or not. Bumble sent encrypted information to undisclosed outside companies. All of the companies cited their privacy policies when followed up with by reporters.


Apps, such as dating apps, are sending information to different companies. Some locations are disclosed while others are not.

These revelations show a few worrying trends in how users and their data are treated. First, users are profiled on their own devices without explicit and informed consent: businesses quietly build profiles of individuals from their activity, target them with ads, and attempt to influence their behavior, all on a user’s personal device and all largely unbeknownst to the user. Second, user data is treated with a lack of privacy and a lack of respect. A recent study by the Norwegian Consumer Council suggested that some companies treat personal and intimate information about a user, such as gender preference, the same as innocuous data points such as favorite food. Finally, and more broadly, as these dating apps increasingly influence love and relationships as we know them, their profile options and app interactions increasingly define culture. With that comes a responsibility to do so thoughtfully and responsibly (e.g., gender pronouns, features in the time of corona).

Ultimately, consumers trust their devices and, by extension, their apps (not only but including dating apps). Companies may need to be compelled to change through updated rules and regulation. Until then, it may behoove us all to swipe left more often on the next potential app download.

References
qz.com/1546677/around-40-of-us-couples-now-first-meet-online/
fortune.com/2021/02/12/covid-pandemic-online-dating-apps-usage-tinder-okcupid-bumble-meet-group/

The Unequal American Dream: Hidden Bias in Mortgage Lending AI/ML Algorithms
By Autumn Rains | September 17, 2021

Owning a home in the United States is a cornerstone of the American Dream. Despite the economic downturn from the Covid-19 pandemic, the U.S. housing market saw double-digit growth rates in home pricing and equity appreciation in 2021. According to the Federal Housing Finance Agency, U.S. house prices grew 17.4 percent in the second quarter of 2021 versus 2020 and increased 4.9 percent from the first quarter of 2021 (U.S. House Price Index Report, 2021). Given these figures, obtaining a mortgage loan has further become vital to the home buying process for potential homeowners. With advancements in Machine Learning within financial markets, mortgage lenders have opted to introduce digital products to speed up the mortgage lending process and serve a broader, growing customer base.

Unfortunately, the ability to obtain a mortgage from lenders is not equal for all potential homeowners due to bias within the algorithms of these digital products. According to the Consumer Financial Protection Bureau (Mortgage refinance loans, 2021):

“Initial observations about the nation’s mortgage market in 2020 are welcome news, with improvements in the overall volume of home-purchase and refinance loans compared to 2019,” said CFPB Acting Director Dave Uejio. “Unfortunately, Black and Hispanic borrowers continued to have fewer loans, be more likely to be denied than non-Hispanic White and Asian borrowers, and pay higher median interest rates and total loan costs. It is clear from that data that our economic recovery from the COVID-19 pandemic won’t be robust if it remains uneven for mortgage borrowers of color.”

New Levels of Discrimination? Or Perpetuation of History?
Exploring the history of mortgage lending in the United States, discrimination based on race has been an undertone throughout. Housing programs under ‘The New Deal’ in 1933 were forms of segregation: people of color were not included in new suburban communities and were instead placed into urban housing projects. The following year, the Federal Housing Administration (FHA) was established and created a policy known as ‘redlining,’ which furthered segregation by refusing to issue mortgages for properties in or near African-American neighborhoods. While this policy was in effect, the FHA also offered subsidies to builders who prioritized suburban development projects, requiring that the builders sell none of these homes to African-Americans (Gross, 2017).

Bias in the Algorithms
Researchers at the UC Berkeley Haas School of Business discovered that Black and Latino borrowers were charged interest rates about 7.9 basis points higher, both online and in person (Public Affairs, 2018). Similarly, The Markup explored this bias in mortgage lending and found the following about national loan rates:

Holding 17 different factors steady in a complex statistical analysis of more than two million conventional mortgage applications for home purchases, we found that lenders were 40 percent more likely to turn down Latino applicants for loans, 50 percent more likely to deny Asian/Pacific Islander applicants, and 70 percent more likely to deny Native American applicants than similar White applicants. Lenders were 80 percent more likely to reject Black applicants than similar White applicants. […] In every case, the prospective borrowers of color looked almost exactly the same on paper as the White applicants, except for their race.

Mortgage lenders approach the digital lending process similarly to traditional banks with regard to risk evaluation criteria. These criteria include income, assets, credit score, current debt, and liabilities, among other factors in line with federal guidelines. The Consumer Financial Protection Bureau issued guidelines after the last recession to reduce the risk of predatory lending to consumers. (source) If a potential home buyer does not meet these criteria, they are classified as a risk. These criteria tend to put people of color at a disadvantage. For example, credit scores are typically calculated from individual spending and payment habits. Rent is typically the most significant payment individuals make routinely, but landlords generally do not report it to credit bureaus. According to an article in the New York Times (Miller, 2020), more than half of Black Americans pay rent. Alanna McCargo, vice president of housing finance policy at the Urban Institute, elaborates within the article:

“We know the wealth gap is incredibly large between white households and households of color,” said Alanna McCargo, the vice president of housing finance policy at the Urban Institute. “If you are looking at income, assets and credit — your three drivers — you are excluding millions of potential Black, Latino and, in some cases, Asian minorities and immigrants from getting access to credit through your system. You are perpetuating the wealth gap.” […] As of 2017, the median household income among Black Americans was just over $38,000, and only 20.6 percent of Black households had a credit score above 700.”

 

Remedies for Bias
Potential solutions to reduce hidden bias in mortgage lending algorithms could include widening the data criteria used for risk evaluation decisions. However, some demographic factors cannot legally be considered: the Fair Housing Act of 1968 states that within mortgage underwriting, lenders cannot consider sex, religion, race, or marital status as part of the evaluation. These may nevertheless enter as factors by proxy, through variables like the timeliness of bill payments, part of the credit score evaluation previously discussed. If data scientists have additional data points beyond the scope of the Consumer Financial Protection Bureau’s recommended guidelines, should these be considered? If so, do any of these extra data points introduce bias, directly or by proxy? These considerations pose quite a dilemma for data scientists, digital mortgage lenders, and companies involved in credit modeling.
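One simple audit a data science team could run over a model’s decisions is an approval-rate disparity check, comparing each group’s approval rate to that of the most-approved group. Below is a minimal pandas sketch with synthetic decisions and invented column names:

```python
import pandas as pd

# Synthetic loan decisions; "group" and "approved" are illustrative columns.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = df.groupby("group")["approved"].mean()

# Adverse impact ratio: each group's approval rate relative to the
# highest-approved group. Values below ~0.8 are a common red flag.
air = rates / rates.max()
print(air)
```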

Another potential solution in the digital mortgage lending process is to include a diverse team of loan officers in the final step of the risk evaluation. Until lenders can place higher confidence in the ability of AI/ML algorithms to reduce hidden bias, loan officers should be involved to ensure fair access for all consumers. Tangentially, data scientists at mortgage lenders with digital offerings should consider alternative credit scoring models that include rental payment history. By doing so, lenders can create a more holistic picture of potential homeowners’ total spending and payment history, allowing all U.S. residents an equal opportunity to pursue the American dream of homeownership at a time when working from home is a new reality.

 


The Battle Between Corporations and Data Privacy
By Anonymous | September 17, 2021

With each user’s growing digital footprint should come an increase in liability and responsibility for companies. Unfortunately, this isn’t always the case. It’s not surprising that data rights aren’t at the top of the to-do list, given that more data usually goes hand in hand with steeply increasing targeted ad revenue, conversion rates, and customer insights revenue. Naturally, the question arises: where and how do we draw the line between justifiable company data usage and a data privacy breach?

Preliminary Legal Measures
Measures like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have set a great precedent for trying to answer this question systematically, but they’re far from widely accepted. The former was enacted in the EU and gives residents control over their data: companies now have to disclose what information is collected and stored, and customers must consent before data can be collected or used for marketing purposes. However, companies have found a loophole by simply creating two streams of data collection (one for the EU and one for countries outside it) instead of changing data policies worldwide. The latter only covers the state of California, and not many other states have followed suit. Since state-by-state policies can greatly complicate compliance and data storage, companies have actually stepped up to influence and speed up these measures.

Big-Tech’s Influence on Legislation
Surprisingly, Big Tech (Facebook, Amazon, Microsoft, Google, Apple, etc.) is actually on the front lines of pushing states to pass privacy laws, although it’s not so much a heroic act of benevolence as a crafty way to control the stringency of the privacy measures put into place. In fact, Virginia’s recently passed privacy law was reportedly co-authored by Amazon and Microsoft, and it’s now under consideration in 14 other states using the same or an even weaker legal framework. These bills are strongly backed by all of Big Tech and are moving quickly through the process due to pressure from countless company lobbyists. Their biggest impact is that consumers cannot sue companies for violations of the law. Another key point is that users are opted into tracking by default unless they comb through the settings to opt out. The industry is counting on the idea that if the country is flooded with these weaker state laws, it will essentially be able to disregard harsher state laws like the CCPA. The figure below shows just how much companies are spending on legislation within one state:


Image: The amount of money spent on lobbying in Connecticut by Big-Tech

Good News on the Horizon
However, this doesn’t mean that data privacy is a lost cause or that legislation is ineffective. Indeed, some corporations are taking privacy into their own hands and creating large-scale impact. The most scalable example is Apple, which released a new data privacy measure that requires every user to knowingly opt in or out of data tracking for every single app they use. While this was met with heavy backlash from companies dependent on ad revenue and user data, such as Facebook, Apple has remained firm in its decision to mandate user opt-in permission for tracking. The decision has resulted in less than 33% of iOS users opting in to tracking, a massive hit to the ad-tech industry.

Furthermore, as iOS users have opted out of tracking, advertisers can’t bid on them, and the shortage of targetable iOS users has driven up advertising demand for Android users. As a result, Android ad prices are now about 30% higher than ad prices for iOS users, and companies are moving their ads to Android-powered devices. For some context, digital-ad agency Tinuiti’s Facebook clients went from year-over-year spending growth of 46% for Android users in May to 64% in June, while their iOS spending growth saw a corresponding slowdown, from 42% in May to 25% in June. Despite these drawbacks, this move alone is forcing companies everywhere to change their data tracking policies, because while they’re escaping state and federal privacy measures, they’re getting blocked by wide-reaching, platform-level privacy rules.


The intersection of public health and data privacy with vaccine passports
By Anonymous | September 17, 2021

Countries, states, and cities are implementing and enforcing vaccine passports. Vaccine passports aim to give individuals greater protection against the spread of COVID-19; however, that safety comes with concerns over data privacy and its protection. On the one hand, vaccine passports supply a universal, standardized way to ensure individuals are vaccinated when entering high-exposure settings, such as travel and large indoor gatherings. With that standardization come data privacy risks and concerns with respect to the Fair Information Practice Principles.

Return to Normalcy
Since the beginning of the pandemic, travel and tourism have declined due to legal restrictions coupled with people’s fear of contracting the virus while traveling. Vaccine passports give individuals the relief of knowing that others around them are vaccinated, while businesses get an opportunity to attract more customers. The chart on the left illustrates the dip in tourism and flight travel during the pandemic, whereas the chart on the right shows the global recognition of multiple vaccines. Together they indicate that several vaccines are recognized around the world for potential vaccine passports and that the travel and tourism industries would benefit from such programs.

Businesses are not the only beneficiaries; unemployed workers would benefit as well, since vaccine passports would trigger an increase in customer activity, which in turn increases businesses’ need to hire. The image below visualizes the hardest-hit sectors by change in employment; the largest negative changes were in industries that rely on large gatherings and crowds. Thus, if vaccine passports can bring us back to normalcy faster, businesses can recover faster and more people can be re-hired.

Transparency
The European Union is rolling out a vaccine passport grounded in the GDPR framework, which addresses transparency – individuals should be given detailed information on how their data will be collected, used, and maintained – through the GDPR’s transparency, purpose limitation, and security principles. Under the GDPR, individuals get a transparent understanding of the purpose for which the data will be used, the assurance that it will be used only for that purpose, and the guarantee that the data will be “processed in a manner that ensures appropriate security of the personal data” (ICO 2018). However, this only applies to individuals with the EU vaccine passport. Other countries, Malaysia and China for example, do not use the GDPR as the basis of data transparency for their vaccine passports, which raises concerns about how the data could be used for other purposes post-pandemic. Just as a vaccine passport gives businesses and governments transparency into vaccination status, the participating individuals should receive the same level of transparency into how their data will be stored and for what direct purposes.

Individual Participation & Purpose Specifications
Individual participation in vaccine passports comes into question when countries and governments require them for indoor dining, large indoor events, and the like, effectively forcing participation. If individuals want to take part in such activities, they must enroll in a vaccine passport system. This forced consent to provide data in order to enjoy these activities raises an ethical dilemma: what other activities could soon require a vaccine passport and put personal data privacy at risk? In addition, the term length of vaccine passports is unknown as the pandemic continues to fluctuate, which causes issues with the purpose specification principle – clearly stated uses of the collected data. Individuals who provide their personal information for entry into a vaccination program may not know how long their data will be kept, as the use case could continue to be extended if never retired.

Accountability and Auditing
With the United States rejecting a federal vaccine passport, states, cities, and private entities have developed and instituted their own vaccine approval programs. The uncoordinated effort, lacking a single standardized U.S. program, draws attention to accountability and auditing problems in ensuring proper training for everyone involved in data collection, processing, and storage. States and cities may have training programs for data collection, but private entities looking to rebound from a tough 2020 economic dip may not have the resources and time to train their employees and contractors in proper data privacy practices. Therefore, in the absence of a nationwide program, individuals who consent to provide their data for vaccine-proof certification risk potential data mishandling by untrained personnel.

Summary
Vaccine passports have great potential in limiting the spread of the virus by giving individuals and organizations visibility and assurance of vaccination status for large groups. However, vaccine certification programs need to give individuals transparency into the clear and specific uses of their information, provide term limits for purpose specifications, and ensure that people who will be collecting, using, and storing the data are properly trained in data privacy practices. If these concerns are addressed, then we could see more adoption of vaccine passports to combat the spread of the virus. If not, then individuals’ mistrust of data privacy will persist and returning back to normalcy may take longer than hoped.


References