Disparities faced by people with sickle cell disease: The case for additional research funding.
By Anonymous | September 22, 2021

September is Sickle Cell Awareness Month! How aware are you?
Sickle cell disease (SCD) is a group of inherited red blood cell disorders that lead to sickle-shaped red blood cells. These cells become hard and sticky, carry less oxygen, and have a short life span in the body. As a result, people with SCD suffer many lifelong negative health outcomes and disability, including stroke (even in children), acute and chronic pain crises, mental health issues, and organ failure.


Sickle cell disease clinical complications. Source: Kato, G., Piel, F., Reid, C. et al. Sickle cell disease. Nat Rev Dis Primers 4, 18010 (2018). https://doi.org/10.1038/nrdp.2018.10

The issues unfortunately do not end there.

In the U.S., 90% of people living with SCD are Black/African-American, and they suffer significant health disparities and stigmatization. SCD patients frequently describe poor interactions with the healthcare system and its personnel. SCD pain crises (which can be more intense than childbirth) require strong pain relievers such as opioids. Unfortunately, few medical facilities have staff trained in managing SCD pain crises or other chronic issues related to SCD. There is a serious lack of respect for persons: people coming in with SCD pain crises are often treated as though they are drug seekers while being denied the care they need. Black people have also expressed substantial mistrust of the U.S. healthcare system, given well-documented abuses in recent history (e.g., the lack of consent given to Black participants in the Tuskegee Syphilis Study).

In fact, a 2016 estimate places the median life expectancy of people with SCD at just 42–47 years. And even though SCD is a rare disease in the U.S. (approximately 90,000-100,000 people), over $811.4 million in healthcare costs in 2016 can be attributed to 95,600 hospital stays and readmissions for sickle cell disease.


Sickle cell disease-related hospital readmission is high in the U.S. Source: AHRQ News Now, September 10, 2019, Issue #679

Why is this happening?
There is not enough research on or education about SCD. Sickle cell advocacy groups and networks exist, but they are underfunded and attract less public interest than other health issues. For example, Farooq et al. (2020) compared the support SCD receives with that for cystic fibrosis (which affects approximately 30,000 people in the U.S.). They found that U.S. government research funding averaged $812 per person affected for SCD (which predominantly affects Black people), compared to $2,807 per person affected for cystic fibrosis (which predominantly affects Caucasians). What about funding from private foundations? On average, $102 per person for SCD and $7,690 per person for cystic fibrosis.
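To make that gap concrete, here is a minimal back-of-envelope sketch in Python using only the per-person and population figures quoted above; the totals it prints are rough arithmetic implications of those figures, not reported numbers:

```python
# Figures as quoted from Farooq et al. (2020); populations are approximate.
scd_pop, cf_pop = 100_000, 30_000            # people affected in the U.S.
per_person = {
    "government": {"SCD": 812, "CF": 2807},  # $ per person affected
    "foundation": {"SCD": 102, "CF": 7690},
}

for source, d in per_person.items():
    print(f"{source}: CF receives {d['CF'] / d['SCD']:.0f}x more per person")
    print(f"  implied totals: SCD ${d['SCD'] * scd_pop:,} vs CF ${d['CF'] * cf_pop:,}")
```

Run it and the per-person ratios alone (roughly 3x for government funding, 75x for foundations) make the disparity hard to dismiss.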

The result?
There is a lack of research and data surrounding SCD, potential new treatments, factors that affect SCD, etc. To improve quality of care, we need to understand the complex intersectionality at play with SCD: how do available treatments vary by SCD genotype? By sexual/gender minority group? By region? Without such information, there is substantial uncertainty that no data model can make up for, and residuality occurs: people with SCD are simply left out of the decisions being made. Without additional funding, a lack of training in SCD will persist, with the hidden biases of health care workers negatively affecting the patient-provider experience. Even among well-educated researchers, there is a notion that SCD has been “cured” and is no longer a priority. Unfortunately, very few people meet the criteria to undergo gene therapy (an exact match, a specific SCD genotype, etc.); it is also extremely expensive and a risky procedure that carries a risk of death.

Will more funding for SCD research and advocacy really help? Consider this: despite over 70% more research publications for SCD than for cystic fibrosis in the same time period, SCD had one FDA-approved drug between 2008 and 2018 (to serve a population of 90,000) while cystic fibrosis had four (to serve a population of 30,000).

How can we do better?
Increased advocacy for SCD as a priority, and overall awareness among lawmakers, researchers, and the general population, is important. Given the huge disparity in private funding between SCD and cystic fibrosis, Farooq et al. suggest that increased philanthropy through private foundations may be a way to improve SCD advocacy and research. Consider donating to established SCD foundations, or explain to your public officials what SCD is and why it’s important. Share materials such as those from World Sickle Cell Day or Sickle Cell Awareness Month (September). Support a summer camp for kids with SCD, where they can have fun and be further empowered while living with SCD. Everything counts. Together, we can reduce disparities and improve health outcomes for people with SCD.

Swiping Right but for Who?: When dating meets data
By Elaine Chang | September 22, 2021

Online dating is a mainstay of modern culture. A study from Stanford found that around 40% of American couples first met online, making it the most common medium of matchmaking. The act of swiping has even been elevated into its own set of idioms in English: “swipe left” signals declining a potential match, while “swipe right” indicates accepting one. And there are countless online dating apps to choose from, depending on who you ask and what experience you are looking for.


There is a striking shift in the matchmaking medium as American couples increasingly meet via online dating apps and websites. We should note 1) many changes between 1995 and 2017 enabled online dating that was not previously possible, and 2) this study is a slice of what dating looks like (American, heterosexual, technologically able).

The pandemic has only increased the use of these handheld matchmaking services: Tinder recorded its highest number of swipes on a single day, totaling 3 billion, in March 2020, while OkCupid saw a 700% increase in dates from March to May 2020. Individuals typically assume that these interactions, whether direct or indirect, are private: a digital space for individuals to find potential matches and get to know each other beyond the standard profile fields.


Online dating apps often ask personal questions as part of a user’s profile and offer features such as in-app chat.

And while this advent of options and opportunity has undoubtedly stirred love in the digital airwaves, recent reports highlight a growing concern over what else is being aired out in the name of love. A New York Times investigation found disturbing details: when researchers tested the Grindr Android app, it shared specific latitude and longitude coordinates with five companies. The researchers also found that the popular app OkCupid sent a user’s ethnicity and answers to profile questions, such as whether they had used psychedelic drugs, to a firm that helps companies tailor marketing messages to users. A Nightly News segment highlighted similarly disturbing practices on the dating apps Hinge and Bumble: Hinge sent information on all of its users to Facebook, whether they logged in via Facebook or not, and Bumble sent encrypted information to undisclosed outside companies. All of the companies cited their privacy policies when reporters followed up.


Apps, such as dating apps, are sending information to different companies. Some locations are disclosed while others are not.

These revelations show a few worrying trends in how users and their data are treated. First, there is profiling of users on their own devices without explicit, informed consent. Businesses are quietly creating profiles of individuals from their activity, targeting them with ads and attempting to influence their behavior, all on users’ personal devices and all largely unbeknownst to the users themselves. Second, user data is handled with a lack of privacy and a lack of respect. A recent study by the Norwegian Consumer Council suggested that some companies treat personal and intimate information, such as gender preference, much like innocuous data points such as favorite food. Finally, and more broadly, as these dating apps increasingly influence love and relationships as we know them, their profile options and app interactions come to define culture, and with that comes a responsibility to do so thoughtfully and responsibly (e.g., gender pronouns, features in the time of corona).

Ultimately, consumers trust their devices and, by extension, their apps (not only but including dating apps). Companies may need to be compelled to change through updated rules and regulations. Until then, it may behoove us all to swipe left more often on the next potential app download.

References
https://qz.com/1546677/around-40-of-us-couples-now-first-meet-online/
https://fortune.com/2021/02/12/covid-pandemic-online-dating-apps-usage-tinder-okcupid-bumble-meet-group/

The Unequal American Dream: Hidden Bias in Mortgage Lending AI/ML Algorithms
By Autumn Rains | September 17, 2021

Owning a home in the United States is a cornerstone of the American Dream. Despite the economic downturn from the Covid-19 pandemic, the U.S. housing market saw double-digit growth in home prices and equity appreciation in 2021. According to the Federal Housing Finance Agency, U.S. house prices grew 17.4 percent in the second quarter of 2021 versus a year earlier, and increased 4.9 percent from the first quarter of 2021 (U.S. House Price Index Report, 2021). Given these figures, obtaining a mortgage loan has become all the more vital to the home buying process for potential homeowners. With advancements in machine learning within financial markets, mortgage lenders have introduced digital products to speed up the mortgage lending process and serve a broader, growing customer base.

Unfortunately, the ability to obtain a mortgage from lenders is not equal for all potential homeowners due to bias within the algorithms of these digital products. According to the Consumer Financial Protection Bureau (Mortgage refinance loans, 2021):

“Initial observations about the nation’s mortgage market in 2020 are welcome news, with improvements in the overall volume of home-purchase and refinance loans compared to 2019,” said CFPB Acting Director Dave Uejio. “Unfortunately, Black and Hispanic borrowers continued to have fewer loans, be more likely to be denied than non-Hispanic White and Asian borrowers, and pay higher median interest rates and total loan costs. It is clear from that data that our economic recovery from the COVID-19 pandemic won’t be robust if it remains uneven for mortgage borrowers of color.”

New Levels of Discrimination? Or Perpetuation of History?
Exploring the history of mortgage lending in the United States, discrimination based on race has been an undertone in our history. Housing programs under ‘The New Deal’ in 1933 were forms of segregation. People of color were not included in new suburban communities and instead placed into urban housing projects. The following year, the Federal Housing Administration (FHA) was established and created a policy known as ‘redlining.’ This policy furthered segregation for people of color by refusing to issue mortgages for properties in or near African-American neighborhoods. While this policy was in effect, the FHA also offered subsidies for builders who prioritized suburban development project builds, requiring that builders sold none of these homes to African-Americans (Gross, 2017).

Bias in the Algorithms
Researchers at UC Berkeley’s Haas School of Business discovered that Black and Latino borrowers were charged interest rates 7.9 basis points higher than comparable borrowers, both online and in person (Public Affairs, 2018). Similarly, The Markup explored this bias in mortgage lending and found the following about national loan rates:

Holding 17 different factors steady in a complex statistical analysis of more than two million conventional mortgage applications for home purchases, we found that lenders were 40 percent more likely to turn down Latino applicants for loans, 50 percent more likely to deny Asian/Pacific Islander applicants, and 70 percent more likely to deny Native American applicants than similar White applicants. Lenders were 80 percent more likely to reject Black applicants than similar White applicants. […] In every case, the prospective borrowers of color looked almost exactly the same on paper as the White applicants, except for their race.
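For readers curious what “holding 17 different factors steady” looks like in practice, here is a minimal, hypothetical sketch of that style of analysis. This is not The Markup’s actual code or data; the file and column names are invented stand-ins for HMDA-style fields, and a real analysis would control for many more variables:

```python
# A hedged sketch of an adjusted denial-odds analysis, not The Markup's code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mortgage_applications.csv")  # hypothetical input file

# Logistic regression: denial as a function of race, holding other
# applicant and loan characteristics constant.
model = smf.logit(
    "denied ~ C(race, Treatment(reference='White'))"
    " + income + loan_amount + debt_to_income + C(property_type)",
    data=df,
).fit()

# exp(coefficient) is the odds ratio relative to similar White applicants;
# e.g., a value of 1.8 for Black applicants would mean 80% higher odds of denial.
print(np.exp(model.params).round(2))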

Mortgage lenders approach the digital lending process much as traditional banks do with respect to risk evaluation criteria. These criteria include income, assets, credit score, current debt, and liabilities, among other factors in line with federal guidelines; the Consumer Financial Protection Bureau issued guidelines after the last recession to reduce the risk of predatory lending to consumers. (source) If a potential home buyer does not meet these criteria, they are classified as a risk, and the criteria themselves tend to put people of color at a disadvantage. For example, credit scores are typically calculated from individual spending and payment habits. Rent is typically the largest payment individuals make routinely, but landlords generally do not report rental payments to credit bureaus. According to an article in the New York Times (Miller, 2020), more than half of Black Americans pay rent. Alanna McCargo, vice president of housing finance policy at the Urban Institute, elaborates in the article:

“We know the wealth gap is incredibly large between white households and households of color,” said Alanna McCargo, the vice president of housing finance policy at the Urban Institute. “If you are looking at income, assets and credit — your three drivers — you are excluding millions of potential Black, Latino and, in some cases, Asian minorities and immigrants from getting access to credit through your system. You are perpetuating the wealth gap.” […] As of 2017, the median household income among Black Americans was just over $38,000, and only 20.6 percent of Black households had a credit score above 700.”

 

Remedies for Bias
Potential solutions to reduce hidden bias in mortgage lending algorithms could include widening the data criteria used for risk evaluation decisions. However, some demographic factors about an individual cannot be considered under the law: the Fair Housing Act of 1968 states that, within mortgage underwriting, lenders cannot consider sex, religion, race, or marital status as part of the evaluation. These attributes may nonetheless enter by proxy through variables like the timeliness of bill payments, a part of the credit score evaluation previously discussed. If data scientists have additional data points beyond the scope of the Consumer Financial Protection Bureau’s recommended guidelines, should these be considered? If so, do any of these extra data points carry bias, directly or by proxy? These considerations pose quite a dilemma for data scientists, digital mortgage lenders, and companies involved in credit modeling.
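One simple way to probe the proxy question is to test how well a candidate feature predicts a protected attribute in an audit dataset. Below is a minimal illustrative sketch of that idea; the file and column names are hypothetical, and a real fair-lending audit would be far more involved:

```python
# A minimal proxy check: if a permitted feature predicts a protected
# attribute well, it may be acting as a proxy for it. Names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("audit_sample.csv")            # hypothetical audit data
feature = df[["bill_payment_timeliness"]]       # candidate proxy variable
protected = df["is_black"]                      # 0/1 protected attribute

# Cross-validated AUC: ~0.5 means little proxy power; well above 0.5
# means the feature encodes information about the protected attribute.
auc = cross_val_score(
    LogisticRegression(max_iter=1000), feature, protected,
    scoring="roc_auc", cv=5,
).mean()
print(f"Proxy AUC: {auc:.2f}")
```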

Another potential solution in the digital mortgage lending process could be to include a diverse team of loan officers in the final step of the risk evaluation. Until lenders can place higher confidence in the ability of AI/ML algorithms to reduce hidden bias, loan officers should be involved to ensure fair access for all consumers. Relatedly, data scientists at mortgage lenders with digital offerings should consider alternative credit scoring models that include rental payment history. By doing so, lenders can build a more holistic picture of potential homeowners’ total spending and payment history, allowing all U.S. residents an equal opportunity to pursue the American dream of homeownership at a time when working from home is a new reality.

 

Works Cited

  • Gross, T. (2017, May 3). A ‘forgotten history’ of how the U.S. government segregated America. NPR. Retrieved September 17, 2021, from https://www.npr.org/2017/05/03/526655831/a-forgotten-history-of-how-the-u-s-government-segregated-america.
  • Miller, J. (2020, September 18). Is an algorithm less racist than a loan officer? The New York Times. Retrieved September 17, 2021, from https://www.nytimes.com/2020/09/18/business/digital-mortgages.html.
  • Mortgage refinance loans drove an increase in closed-end originations in 2020, new CFPB report finds. Consumer Financial Protection Bureau. (2021, August 19). Retrieved September 17, 2021, from https://www.consumerfinance.gov/about-us/newsroom/mortgage-refinance-loans-drove-an-increase-in-closed-end-originations-in-2020-new-cfpb-report-finds/.
  • Public Affairs, UC Berkeley. (2018, November 13). Mortgage algorithms perpetuate racial bias in lending, study finds. Berkeley News. Retrieved September 17, 2021, from https://news.berkeley.edu/story_jump/mortgage-algorithms-perpetuate-racial-bias-in-lending-study-finds/.
  • U.S. House Price Index Report 2021 Q2. U.S. House Price Index Report 2021 Q2 | Federal Housing Finance Agency. (2021, August 31). Retrieved September 17, 2021, from https://www.fhfa.gov/AboutUs/Reports/Pages/US-House-Price-Index-Report-2021-Q2.aspx.

Image Sources

  • Picture 1: https://www.dcpolicycenter.org/wp-content/uploads/2018/10/Location_map_of_properties_and_projects-778×1024.jpg
  • Picture 2: https://static01.nyt.com/images/2019/12/08/business/06View-illo/06View-illo-superJumbo.jpg

The Battle Between Corporations and Data Privacy
By Anonymous | September 17, 2021

With each user’s growing digital footprint should come an increase in liability and responsibility for companies. Unfortunately, this isn’t always the case. It’s not surprising that data rights aren’t at the top of the to-do list, given that more data usually goes hand in hand with steeply increasing targeted-ad revenue, conversion rates, and customer-insight revenue. Naturally, the question arises: where and how do we draw the line between justifiable company data usage and a data privacy breach?

Preliminary Legal Measures
Measures like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have set a great precedent for trying to answer this question systematically, but they’re far from widely adopted. The former was enacted in the EU and gives residents control over their data: companies now have to disclose what information is collected and stored, and customers must consent before data is collected or used for marketing purposes. However, companies have found a loophole by simply creating two streams of data collection (one for the EU and one for countries outside it) instead of changing data policies worldwide. The latter only applies to the state of California, and not many other states have followed suit. Since state-by-state policies can greatly complicate compliance and data storage, companies have actually stepped up to influence and speed up these measures.

Big-Tech’s Influence on Legislation
Surprisingly, Big Tech (Facebook, Amazon, Microsoft, Google, Apple, etc.) is on the front lines of pushing states to pass privacy laws, although this is less a heroic act of benevolence than a crafty way to control the stringency of the privacy measures put into place. In fact, Virginia’s recently passed privacy law was reportedly co-authored by Amazon and Microsoft, and it is now under consideration in 14 other states using the same or an even weaker legal framework. These bills are strongly backed by all of Big Tech and are moving quickly through the process due to pressure from countless company lobbyists. Their biggest impact is that consumers cannot sue companies for violations of the law. Another key point is that users are opted into tracking by default unless they comb through the settings to opt out. The industry is counting on the idea that if the country is flooded with these weaker state laws, it will essentially be able to disregard harsher state laws like the CCPA. In the figure below, you can see just how much companies are spending on legislation within one state:


Image: The amount of money spent on lobbying in Connecticut by Big-Tech

Good News on the Horizon
However, this doesn’t mean that data privacy is a lost cause or that legislation is ineffective. Indeed, some corporations are taking privacy into their own hands and creating large-scale impacts. The most scalable example is Apple, which released a new data privacy measure that requires every user to knowingly opt in or out of data tracking for every single app they use. While this was met with heavy backlash from companies dependent on ad revenue and user data, such as Facebook, Apple has remained firm in its decision to mandate user opt-in permission for data tracking. The result: fewer than 33% of iOS users have opted in to tracking, a massive hit to the ad-tech industry.

Furthermore, as iOS users have opted out of tracking, advertisers can’t bid on them, which has driven up advertising demand for Android users. As a result, Android ad prices are now about 30% higher than those for iOS users, and companies are moving their ads to Android-powered devices. For some context, digital-ad agency Tinuiti’s Facebook clients went from 46% year-over-year spend growth for Android users in May to 64% in June, while their iOS spending saw a corresponding slowdown, from 42% in May to 25% in June. This move alone is forcing companies everywhere to change their data tracking policies: while they may escape state and federal privacy measures, they are getting blocked by wide-reaching, platform-based privacy rules.

References

  • https://themarkup.org/privacy/2021/04/15/big-tech-is-pushing-states-to-pass-privacy-laws-and-yes-you-should-be-suspicious
  • https://www.coreview.com/blog/alpin-gdpr-fines-list/
  • https://www.bloomberg.com/news/articles/2021-07-14/facebook-fb-advertisers-impacted-by-apple-aapl-privacy-ios-14-changes
  • https://www.facebook.com/business/help/331612538028890?id=428636648170202
  • https://www.theverge.com/2020/3/3/21153117/congress-tech-regulation-privacy-bill-coppa-ads-laws-legislators
  • https://www.macworld.com/article/344420/app-tracking-transparency-privacy-ad-tracking-iphone-ipad-how-to-change-settings.html

The intersection of public health and data privacy with vaccine passports
By Anonymous | September 17, 2021

Countries, states, and cities are implementing and enforcing vaccine passports. Vaccine passports are meant to give individuals greater protection against the spread of COVID-19; however, that safety comes with concerns over data privacy and its safe protection. On the one hand, vaccine passports supply a universal, standardized way to ensure individuals are vaccinated when entering high-exposure settings, such as travel and large indoor gatherings. On the other, with that standardization come data privacy risks and concerns with respect to the Fair Information Practice Principles.

Return to Normalcy
Since the beginning of the pandemic, travel and tourism have declined due to legal restrictions coupled with people’s fear of contracting the virus while traveling. Vaccine passports give individuals the relief of knowing that others around them are vaccinated, too, while businesses gain an opportunity to attract more customers. The chart on the left illustrates the dip in tourism and flight travel during the pandemic, whereas the chart on the right shows the global recognition of multiple vaccines. Together they indicate that several vaccines are recognized around the world for potential vaccine passports and that the travel and tourism industries would benefit from such programs.

Businesses are not the only ones who would benefit from returning customers; unemployed workers would as well. Vaccine passports would trigger an increase in customer activity, which, in turn, increases businesses’ need to hire. The image below visualizes the hardest-hit sectors by change in employment; the largest declines were in industries that rely on large gatherings and crowds. Thus, if vaccine passports can bring us back to normalcy faster, businesses can recover faster and more people can be re-hired.

Transparency
The European Union is rolling out a vaccine passport with its data privacy grounded in the GDPR framework, which addresses transparency (individuals should be given detailed information on how their data will be collected, used, and maintained) through its transparency, purpose limitation, and security principles. Under the GDPR, individuals will have a transparent understanding of the purpose for which their data will be used, a guarantee that it will be used only for that purpose, and assurance that the data will be “processed in a manner that ensures appropriate security of the personal data” (ICO 2018). However, this only applies to individuals with the EU vaccine passport. Other countries, Malaysia and China for example, do not use the GDPR as the basis of data transparency for their vaccine passports, which raises concerns about how the data could be used for other purposes post-pandemic. A vaccine passport gives businesses and governments transparency into vaccination status; the individuals participating should receive the same level of transparency into how their data will be stored and for what direct purposes.

Individual Participation & Purpose Specifications
Individual participation comes into question as countries and governments that require vaccine passports for indoor dining, large indoor events, and the like effectively force participation: if individuals want to take part in such activities, they must enroll in a vaccine passport system. This forced consent to provide data in order to enjoy these activities raises an ethical dilemma about what other activities could soon require a vaccine passport and put personal data privacy at risk. In addition, the term length of vaccine passports is unknown as the pandemic continues to fluctuate, which causes issues with the purpose specification principle (clearly stated uses of the collected data): individuals who provide their personal information for entry into a vaccination program may not know how long their data will be kept, as its use case could be extended indefinitely if never retired.

Accountability and Auditing
With the United States rejecting a federal vaccine passport, states, cities, and private entities have developed and instituted their own vaccine approval programs. The lack of a single, standardized program within the U.S. raises accountability and auditing problems in ensuring proper training for all the people involved in data collection, processing, and storage. States and cities may have training programs for data collection, but private entities looking to rebound from a tough 2020 economic dip may not have the resources and time to train their employees and contractors in proper data privacy practices. Therefore, absent a nationwide program from the federal government, individuals who consent to provide their data for vaccine-proof certification risk data-handling problems stemming from a lack of training among the people collecting and using the data.

Summary
Vaccine passports have great potential in limiting the spread of the virus by giving individuals and organizations visibility and assurance of vaccination status for large groups. However, vaccine certification programs need to give individuals transparency into the clear and specific uses of their information, provide term limits for purpose specifications, and ensure that people who will be collecting, using, and storing the data are properly trained in data privacy practices. If these concerns are addressed, then we could see more adoption of vaccine passports to combat the spread of the virus. If not, then individuals’ mistrust of data privacy will persist and returning back to normalcy may take longer than hoped.


References

  • Baquet, D. (2021, April 22). The controversy over vaccination passports. The New York Times. Retrieved September 13, 2021, from https://www.nytimes.com/2021/04/22/opinion/letters/covid-vaccination-passports.html.
  • BBC. (2021, July 26). Covid passports: How do they work around the world? BBC News. Retrieved September 14, 2021, from https://www.bbc.com/news/world-europe-56522408.
  • Martin, G. (2021, April 28). Vaccine passports: Are they legal-or even a good idea? UC Berkeley Public Health. Retrieved September 14, 2021, from https://publichealth.berkeley.edu/covid-19/vaccine-passports-are-they-legal-or-even-a-good-idea/.
  • Schumaker, E. (2021, April 10). What to know about COVID-19 vaccine ‘passports’ and why they’re controversial. ABC News. Retrieved September 14, 2021, from https://abcnews.go.com/Health/covid-19-vaccine-passports-controversial/story?id=76928275.
  • The principles. ICO. (2018, May 25). Retrieved September 14, 2021, from https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/.
  • Turner-Lee, N., Lai, S., & Skahill, E. (2021, July 28). Vaccine passports underscore the necessity of U.S. privacy legislation. Brookings. Retrieved September 14, 2021, from https://www.brookings.edu/blog/techtank/2021/06/28/vaccine-passports-underscore-the-necessity-of-u-s-privacy-legislation/.

Apple Vs Facebook: Who’s Right is Your Data?
by Dan Ortiz | March 12, 2021


Photo by Markus Spiske from Pexels

Apple and Facebook are squaring off in the public domain over user privacy. In iOS 14.5, across all devices, third-party app tracking will transition from opt-out to opt-in, and developers will be required to provide a justification for the tracking request (App Tracking Transparency, User Privacy, App Privacy). As much as we worry that an app may spy on us through our camera or sell our location data, this permission is meant to mitigate the concern that an app is following us throughout our digital experience and logging interactions we have with other apps. Apple’s goal is to better inform users about the information each app collects and to give users more control over their data. The goal is not to end user tracking or personalized advertisements, but to increase transparency and get users’ consent before tracking occurs. People who prefer highly targeted ads can accept the tracking request; those who find it creepy and swear Facebook is listening in on their conversations can deny it. Everyone gets what they want.

In response to the upcoming iOS updates, Facebook launched a very loud, very public campaign against the new policies, claiming they will financially damage small businesses by limiting the effectiveness of personalized advertisements. At the core of this disagreement is who owns the data. Facebook phrases it like this: “Apple’s policy could limit your ability to use your own data to show personalized ads to people who are likely to be interested in your business.” Clearly, Apple views control of personal data as the right of the individual user, while Facebook believes it controls that right.

Facebook argues that giving users the ability to say no to cross-application tracking will hurt small businesses’ ability to serve personalized ads to potential customers, thus increasing their marketing costs. Facebook has taken out full-page ads and launched a campaign (Speak Up For Small Business). Even though iOS accounts for only about 17% of the global market, it covers roughly 33% of the US population, and the average income of an iOS user tends to be about 40% higher than that of an Android user. iOS users are a significant market in the USA and control a significant amount of its disposable income.


Photo by Anton from Pexels

However, Facebook’s argument, setting aside concerns about how it calculated the impact on small businesses, is disingenuous. Its campaign portrays this iOS update as the death of personalized ads and of the small business. In reality, small businesses can still target advertisements using all the data that has been uploaded to Facebook directly (first-party data): information about us, our home town, our interests, and our groups, all associated with our Facebook profiles. What is changing is Facebook’s ability to track iOS users across multiple applications and browsers on the device itself. It is disingenuous to claim that user-generated data on applications not owned by Facebook is the property of another, unrelated small business.

The landscape of privacy in the digital age is shifting. Apple’s policy of championing individual choice when it comes to sharing personal data, although still notice-and-consent, is a step in the right direction. It informs users and asks for consent directly, rather than burying it in a long user agreement, which aligns with the GDPR requirements for lawful consent requests (source). The collection and misuse of user data is a growing concern and a topic of increasing debate, and landmark legislation like CalOPPA and the GDPR is redefining the privacy rights of the individual. Instead of embracing this changing landscape, Facebook chose to stand in opposition to Apple’s app-tracking feature rather than convince us, the users, why we should allow Facebook to track us across the internet.

This conflict has exposed the real question consumers will face when iOS is updated. When the request to track pops up as the Facebook app launches, what will they do? Will they allow tracking and vindicate Facebook’s position, or will they deny the request, challenging Facebook’s current business model of tracking people all across the web?

Better yet, what decision will you make?

Google, the Insidious Steward of Memories
by Laura Treider | March 12, 2021

All those cookies are bad for you
If you are a regular reader of this blog, or if you take an interest in the use of data in our modern economy, you are aware that most companies you interact with online or via apps on your phone track as much of your activity as possible. These companies frequently sell your data to brokers or, while not technically selling it, work with advertisers to monetize your data by tailoring ads to serve you. Proponents of this scheme argue it’s good for consumers and that everyone is happier seeing ads tailored to their interests. Privacy advocates are more skeptical and argue you should delete your data, both to avoid staying in a filter bubble and to prevent other uses of your data that might not be to your benefit. Choosing to err on the side of privacy, I decided to see what data Google has about me and how easy it is to delete.

How to view and download your data
If you’re logged in to a Chrome browser, seeing what data Google has about you is fairly simple; the details of how to accomplish it are here. Google has a data and personalization landing page that lets you view your tracked activity and control “personalization settings,” Google’s euphemism for how much it tracks your activity. From this landing page, I was able to click through to Google Takeout, the cheekily named platform for downloading all the information linked to your account. I chose to download everything possible. It took 10 hours before my download was ready and I received an email with 23 links: 22 of them 2GB zipped files, plus a 6GB .mbox mail file containing emails dating back to 2006. (I’m not an email deleter.)

What data does Google have about me?
While I was going through the downloads, I became overwhelmed by the sheer number of folders relating to different products, 40 in all. After I unzipped the 22 files and put them together on my hard drive, I found that Google includes an archive_browser.html file that helps you navigate the file structure. Google isn’t just holding data about my web browsing activity and search history. I was surprised to learn that I had more than 25,000 photos uploaded to Google’s servers. These weren’t from my Android phone, either; they were from my camera. At some point in my data life, I must have chosen to upload 25,000 photos to the cloud, but I don’t remember having done so. There were also 48 videos: the entirety of the YouTube channel I had set up 15 years ago while living in England and sharing memories of my newborn son with family in the US.

Interestingly, not included in my Takeout was my Ad Settings. I had to navigate to those via the online “Data & personalization” hub. I was able to see all the things that Google thinks I’m interested in. Some of them made sense: “Dogs,” “TV Comedies,” “Computers and Electronics,” and “Cooking and Recipes.” Others were a little more perplexing: “Nagoya,” “Legal Education,” and “Parental Status: Not A Parent.” (Sorry, son, I must not Google how to parent you enough.) As an aside, a great ice breaker for virtual dates during this pandemic would be to share each other’s ad personalization data with each other.

In addition to the files I had given Google and the ad settings Google was predicting for me, Google also holds 96 MB of location data on me, 12 MB of search history spanning the last 18 months, and another 11 MB of browsing history spanning the past 5 months. Here’s where I got distracted: Google gives you your location data as a JSON file. If you want to turn that JSON file into something viewable in map form and you’re programmatically inclined, I recommend this GitHub repo.
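For a sense of what that repo does, here is a minimal sketch that plots the raw points, assuming the older Takeout layout with a top-level “locations” list whose entries carry latitudeE7/longitudeE7 fields; Takeout’s exact format varies by export date, so treat this as illustrative:

```python
# Minimal sketch: plot Google Takeout location history as a scatter map.
# Assumes the older "Location History.json" layout; formats vary by export.
import json
import matplotlib.pyplot as plt

with open("Location History.json") as f:
    records = json.load(f)["locations"]

# Coordinates are stored as integers scaled by 1e7.
lats = [r["latitudeE7"] / 1e7 for r in records if "latitudeE7" in r]
lons = [r["longitudeE7"] / 1e7 for r in records if "longitudeE7" in r]

plt.scatter(lons, lats, s=1, alpha=0.3)
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Everywhere Google says I have been")
plt.show()
```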

The Google location data rabbit hole
But Google’s online viewer for showing you your location history is engrossing. You land on a map with a bunch of markers representing places you have visited and lived at with some fun badges to encourage you to explore.

I looked at my data from Boise, the previous city I lived in, and randomly clicked dots. Each location came with a flood of memories about when I had gone there and why.

I went back in time to when I knew I had gone on a vacation to Germany. Clicking through the days was like looking at a scrapbook Google assembled for me. One of the days is shown below. I had traveled from our hotel in Munich to the Neuschwanstein and Hohenschwangau castles. My travel path was there, complete with the stop I had made in Trauchgau to air up a leaky tire. Not only was my route there, I could dig through the photos I took on my mobile phone at each stop. What a gift!

The verdict:
According to a recent US Census survey, only 89% of households have a computer. That leaves potentially more than 30 million people who may have smartphones but no computer, and who therefore lack the option of moving their data via Takeout from cloud storage to home storage. These people will be less likely to delete their data because there is nowhere else for it to live. By positioning itself as a simple and generous cloud storage provider for the average citizen, Google has trained us to let it be our data caretaker.
Armed with the knowledge of the extent of the data Google has about me, I was ready to decide whether to wipe the slate clean and remove my digital traces from their servers. Reader, I was too weak. I couldn’t do it. Whoever designed the interface for viewing your Google data online knows exactly what strings to pull to make my nostalgia kick in and make me want to save this treasure trove of memories. Google doesn’t just know everything about my digital life; it knows where I go and when. I am trussed up like a ham and served to advertising partners, and I go to the feast willingly.

More reading:
https://www.fastcompany.com/90310803/here-are-the-data-brokers-quietly-buying-and-selling-your-personal-information
https://www.eff.org/deeplinks/2020/03/Google-says-it-doesnt-sell-your-data-heres-how-company-shares-monetizes-and
How to see your Google data: https://support.google.com/accounts/answer/162744?hl=en

Apple’s Privacy War: Is it Good or Bad? You decide.
by Darian Worley | March 5, 2021

In an industry racing to provide more tracking and tools to identify consumers’ buying habits, Apple has decided to take a different approach and limit the ability of companies to track your data without your permission. In this multi-billion-dollar-a-year industry, Apple has indicated that it will release what it calls App Tracking Transparency (ATT) across iOS, iPadOS, and tvOS. The feature is expected to launch in early spring 2021 to combat the digital surveillance economy.

How do advertisers track me anyway?

People go about their lives every day without realizing just how much data internet giants have collected. When an iPhone user opens an app to check the weather, a Facebook post, or anything else, advertisers use an identifier called the Identifier for Advertisers (IDFA) to track the user’s online behavior across multiple apps. The IDFA is a random device identifier assigned by Apple to each device. Companies use it to determine your location, what websites you visit, and other pertinent information without accessing your personal information directly, and they sell marketing ads to the individuals they are able to track, monetizing data collected from your own habits. Interestingly, Apple created the IDFA after being sued for sharing user information without limitation via the UDID (Unique Device Identifier).

Why do I need ATT if Apple already has App Privacy Labels in the App Store?
Currently, Apple has what it calls Privacy Nutrition Labels in the Apple App Store. These labels give iPhone users a snippet of what data apps collect and how they use it. However, the labels are based on self-reporting by app developers: there is no verification by Apple or any other source to determine whether an app is misrepresenting its data use. Users should therefore treat these labels with caution. Many apps in the App Store say they do not share your data, but they could be doing so.

Aren’t There Privacy Frameworks and Privacy Laws to Protect Me?
Many users are concerned about their privacy. Privacy frameworks and laws such as the Belmont principles, CalOPPA, and the CCPA, along with FTC enforcement, exist to protect an individual’s rights. While these frameworks and laws address many privacy areas, two core tenets are consumer choice and greater transparency. With the explosion of big data and online apps, many app developers and internet companies have skirted these laws and frameworks. In a limited, unscientific study where users were asked to read a privacy policy for a specific company, users indicated that the privacy policy was “too long” and said they “assume that the privacy policy has good intentions.”

Potential Negative Implications of Apple’s ATT

In the tech giant war, Facebook took out a full-page ad claiming that Apple’s ATT would harm businesses. In the article “New Apple Privacy Features Will Be Hard on Small Businesses,” which argues that curtailing the collection of user data may mean big spending for small developers, the author does not share any data on how small companies will be impacted, other than stating that small businesses, unlike Facebook and Google, have smaller budgets and need to gather information to target their users. Further research did not yield any additional insight into how small firms would be hurt. The bigger story here is that Facebook has taken a stand against this privacy policy because Facebook stands to lose millions from its billion-dollar digital advertising revenue stream.

In summary
We’ve been told that to get better services from internet companies, we need to give up more of our data. While this may be true, consumers should have the right to choose. And while one can’t be sure of Apple’s motives in limiting user tracking on the iPhone, the move is already yielding tangible results: LinkedIn and Google have indicated that they will stop collecting IDFA data on iOS. This seems a welcome approach to the wild, wild west of collecting, using, and monetizing one’s data without permission, and Apple’s policy seems to strike the right balance by giving users the choice to determine how their data is used by individual apps. Ultimately, as an iPhone user, you get to decide.

Can a Privacy Badge drive consumer confidence on privacy policy?
by Dinesh Achuthan | March 5, 2021

As a user and consumer, I have always wondered what is in the privacy policy, or even the terms and conditions, that I blindly scroll through and accept. I talked with a few colleagues and friends, and I was not surprised to hear that they do the same. When the privacy policy or terms and conditions appear automatically, most of us scroll past and accept, knowing there is no option other than accepting if we want to use the application. In a similar vein, the visual display of sites’ and apps’ security was enhanced a couple of decades ago: we started to see trust badges and verified-by-third-party badges that give a quick impression of a site’s or app’s security. A company called TRUSTe started the idea of badges based on privacy policy two decades ago, but it has since been acquired, and the idea of the badge has shifted toward driving e-commerce business rather than establishing the intended privacy-policy trust with end consumers.

My idea of a privacy badge originated from these security badges, payment partner badges, and other badges meant to instill confidence and trust in end consumers. Why not provide a privacy badge, or even a terms-and-conditions badge, either through a third-party service or via a self-assessment framework, for any site or mobile app? Would this in any shape or form help the end consumer? Could a company or industry use such a framework to assess and improve its privacy policy? As a user, would it give me some sense of security to see a badge instead of scrolling through pages and pages of privacy documentation? After thinking it through and talking with a few colleagues, I set out to create a privacy self-assessment framework through a methodical process, with a scoring template to self-determine a privacy badge for any privacy policy. If we had such a thing, what would it look like?

I would like to share my approach, with a limited scope, and validate whether it works before embarking on a larger scale. So I constrained myself to the U.S. and left out the EU’s GDPR, Germany’s BDSG, and the Asian privacy laws. First, I needed to design a privacy assessment framework. What should be in it?

1. I want to capture what an end consumer considers important for their privacy. How can I get this? I looked at privacy-related lawsuits from the past decade.
2. I want to capture how a company thinks about user privacy in relation to its business model. I can get this for any company from its privacy policy.
3. Finally, I want something that maps consumer concerns to corporate thinking via what is legally binding: U.S. privacy laws.

To stitch these three together, I decided to use the three leading academic privacy frameworks (Solove’s Taxonomy, Mulligan et al.’s Analytic, and Nissenbaum’s Contextual Integrity); below is the approach I used.

Assessment Framework Design and validation approach
1. Design privacy categories based on the three leading academic privacy frameworks (the privacy assessment framework).
2. List the U.S. legal frameworks under consideration.
3. Analyze the top 5-10 privacy lawsuits and map each to the privacy categories it fits.
4. Design a Qualtrics privacy-lawsuit questionnaire to get user perspectives on the lawsuit categories.
5. Design a Qualtrics privacy baseline questionnaire to get user perspectives on the top 5-10 good privacy policies and the bottom 5-10 bad privacy policies.
6. Compute weights for each privacy category from the Qualtrics survey inputs, and establish a privacy score-to-badge matrix (see the sketches below).
7. Compute privacy scores with the assessment framework by evaluating 3-5 random privacy policies from the industry. The higher the score, the better the privacy policy and the higher the badge.
8. Validate whether the badges fit with leading privacy experts.
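As a concrete illustration of steps 6 and 7, here is a minimal sketch of the scoring arithmetic; the categories, weights, and ratings below are invented placeholders, not survey results:

```python
# Illustrative only: survey-derived category weights times per-category
# ratings of a policy, combined into a single 0-100 privacy score.
weights = {"collection": 0.30, "secondary_use": 0.25,
           "disclosure": 0.25, "retention": 0.20}   # sum to 1.0

ratings = {"collection": 70, "secondary_use": 40,   # 0-100 per category,
           "disclosure": 55, "retention": 80}       # from a policy review

score = sum(weights[c] * ratings[c] for c in weights)
print(f"Privacy score: {score:.0f} / 100")          # feeds the badge table
```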

Sample view of privacy score to badge mapping. There are further templates and charts which I omitted to include in this blog to keep it simple.

Assessment Score, Privacy Badge
0-25, Copper
26-40, Bronze
41-60, Silver
61-80, Gold
81-100, Platinum
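Expressed as code, the mapping might look like this small lookup function (a sketch of the table above):

```python
# The score-to-badge table above, as a lookup function.
BADGE_BANDS = [(25, "Copper"), (40, "Bronze"), (60, "Silver"),
               (80, "Gold"), (100, "Platinum")]

def privacy_badge(score: float) -> str:
    """Map a 0-100 privacy assessment score to a badge tier."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    for upper, badge in BADGE_BANDS:
        if score <= upper:
            return badge

print(privacy_badge(61))  # -> Gold, matching the 61-80 band
```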

Sample view of privacy assessment scoring template

Conclusion
I believe this framework can help both consumers and companies. Companies can use it to self-evaluate their privacy policies and at least get a basic understanding of their score, while consumers get an approximate handle on a privacy policy from the score or the badge.

 

REFERENCES

https://www.varonis.com/blog/us-privacy-laws/
https://www.trustsignals.com/blog/77-trust-signals-to-increase-your-online-conversion-rate
https://www.trustsignals.com/blog/what-is-a-trust-badge

US Privacy Lawsuits:
● New York Attorney General Letitia James announced her office reached a settlement with Dunkin’ Donuts over the handling of its 2015 data breach of approximately 20,000 customers. The settlement includes $650,000 in penalties, along with new requirements for data security.
● U.S. District Judge Charles Kocoras in Chicago threw out IBM’s motion to dismiss a case over Illinois’ Biometric Information Privacy Act violations regarding the use of facial images from Flickr, Reuters reports.
● Related to IBM, MediaPost reports Amazon and Microsoft are seeking dismissal of Illinois’ BIPA cases of their own regarding their use of the same images held by IBM.
● Facebook reaches a $650 Million settlement for facial recognition technology used to tag photos by storing biometric data (digital scans of users’ faces) without notice or consent violating Illinois’s BIPA.
● FTC and New York Attorney General fine Google and YouTube $170 Million for collecting personal information of children (persistent identifiers), violating COPPA.
● https://github.com/FortAwesome/Font-Awesome/issues/13833 (badge image)
● As claimed at https://www.trustsignals.com/blog/the-history-of-the-truste-seal-and-why-it-still-has-value: companies who display the TRUSTe Certified Privacy seal have demonstrated that their privacy policies and practices meet the TRUSTe Enterprise Privacy & Data Governance Practices Assessment Criteria. It’s fair to say that TRUSTe is no longer the preeminent trustmark to website visitors; many have never heard of the organization or its history, and many other entities and regulations have stepped forward in the privacy and security space.

DNA Databases: The Line between Personal Privacy and Public Safety
by Brittney Van Hese | March 5, 2021

Recently, customers of popular ancestry companies such as GEDmatch learned that the DNA data they had submitted to learn about their families was secretly being searched by police to solve crimes. While law enforcement has touted the contribution to putting away some of the vilest criminals, like the Golden State Killer, as a win for society, the revelation that police were searching genealogy profiles without user knowledge has raised questions about the line between consumer privacy and public safety.


Image Source

Genetic genealogy uses the DNA associated with ancestral lineage to establish a family connection between a perpetrator’s sample and the uploader. Then, manually, an analyst builds out a family tree from that connection using public records such as birth certificates, death records, and marriage licenses. The family tree is used to generate a focused suspect pool, at which point investigative police work takes over to build a case sufficient for arrest.


Image Source

For the Golden State Killer, police obtained the family tree data by acting as a normal user uploading DNA to find a relative, without identifying themselves as police. These approaches have shone a light on the previously unconsidered legal and ethical concerns around police access to consumer data for the public good. Up until the Golden State Killer case, GEDmatch was not even aware police were using its services, users were unaware they were cooperating with police, and no regulation existed on the subject.

Now that the discussions have started, two sides of the argument have naturally emerged. Those in law enforcement who believe in the beneficence of their work see little harm in the practice: it solves terrible crimes that would otherwise be left to turn cold. Additionally, legal non-profits like the DNA Doe Project access genealogy resources to identify John and Jane Doe victims, bringing closure to families.

On the other side of the argument, users voice concerns about consent, constitutionality, and police misconduct. First, users uploading their profiles were not volunteering to be included in a suspect database, and their consent was never given for their data to be searched. Additionally, these searches were conducted without warrants, in conflict with recent Supreme Court precedent on obtaining public database information. Lastly, there are members, like Michael Usry, who were targeted as suspects because their profiles were closely related to the culprit’s family tree, opening the door to police misconduct such as biased efforts to confirm the genealogy results.

In response to the debate, DNA and genealogy companies have altered their privacy policies to try to please both sides by creating an opt-in policy: by opting in, users agree to add their profiles to a database available to police. The glaring concern with this approach is that the opt-in does not really affect the individual sharing the data, who most likely shares it knowing they have committed no crimes. The problem is that a person exercising their personal freedom to opt in and share their data with police is doing so on behalf of distant relatives who may have committed crimes. This presents not only a moral dilemma in implicating others’ privacy but also applies ethical pressure in the name of public safety, making it a particularly difficult situation.

Luckily, there is a path forward through legislation. First and foremost, the process still relies on the due process of the criminal justice system: a judge must grant a warrant to search the databases unless users have consented. Warrants can be requested for this purpose only if the case is a violent personal crime, such as homicide or rape, and all other investigative resources have been exhausted. Most importantly, the scope of genealogy data is still limited by current technology to pointing investigators in a general direction, from which they must still rely on evidence-based crime solving to make an arrest. For now, federal regulation of genealogy data usage in crime fighting strikes a sufficient balance between privacy and policing, but this legislation will need to be closely monitored as genealogy technology advances.

References

Akpan, Nsikan. “Genetic Genealogy Can Help Solve Cold Cases. It Can Also Accuse the Wrong Person.” PBS, Public Broadcasting Service, 7 Nov. 2019, www.pbs.org/newshour/science/genetic-genealogy-can-help-solve-cold-cases-it-can-also-accuse-the-wrong-person.

“DNA Databases Are Boon to Police But Menace to Privacy, Critics Say.” DNA Databases Are Boon to Police But Menace to Privacy Critics Say | The Pew Charitable Trusts, www.pewtrusts.org/en/research-and-analysis/blogs/stateline/2020/02/20/dna-databases-are-boon-to-police-but-menace-to-privacy-critics-say.

“DNA Doe Project.” DNA Doe Project Cases, 5 Mar. 2021, dnadoeproject.org/.

Payne, Kate. “Genealogy Websites Help To Solve Crimes, Raise Questions About Ethics.” NPR, NPR, 6 Mar. 2020, www.npr.org/2020/03/06/812789112/genealogy-websites-help-to-solve-crimes-raise-questions-about-ethics.

Schuppe, Jon. “Police Were Cracking Cold Cases with a DNA Website. Then the Fine Print Changed.” NBCNews.com, NBCUniversal News Group, 29 Oct. 2019, www.nbcnews.com/news/us-news/police-were-cracking-cold-cases-dna-website-then-fine-print-n1070901.

Zhang, Sarah. “The Messy Consequences of the Golden State Killer Case.” The Atlantic, Atlantic Media Company, 2 Oct. 2019, www.theatlantic.com/science/archive/2019/10/genetic-genealogy-dna-database-criminal-investigations/599005/.