An Ethical Framework for Posthumous Medical Data
By Anonymous | May 28, 2021

Alongside drastic improvements in technology and personal data collection over the past two decades, individuals now have access to an ever-growing accumulation of digital health data. From electronic health records continuously updated by your primary care provider, to direct-to-consumer products such as genetic sequencing tests, to personal fitness trackers that follow you in real time through your daily life, all of this information becomes part of your health data footprint. What happens to all this information after someone passes away? Healthcare research is increasingly interested in applying big data analysis methods to this abundance of well-documented medical data, and that interest brings to light the ethical concerns involved in using posthumous patient information.

There has been much consideration of the ethical use of personal data collected while an individual is still alive. Much of this conversation revolves around maintaining consent and transparency between the parties involved; however, it becomes a different conversation when the subjects of interest are deceased. How do we proceed when we are unable to verify the consent of the individual? The good news is that the medical community has experience with this issue: the physical donation of human bodies to medicine and science. Legal and ethical frameworks already outline the process for donating bodies and organs to medicine and research. Part of the problem with digital health data and electronic health records (EHRs), however, is the lack of comparable rules and regulations, which makes it difficult for researchers to obtain large quantities of private health information for use in observational clinical research.

In the United States, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) requires covered entities (e.g., health care providers) and their business associates to maintain the barriers and safeguards necessary to keep an individual’s protected health information (PHI) confidential. The HIPAA Privacy Rule protects PHI while an individual is alive and for 50 years after they pass away. It also describes the ways covered entities may use or disclose PHI, including for research purposes. PHI can be used for research if the informed consent of the subject is obtained, which is difficult to secure for posthumous data. More commonly, existing legal policies require an institutional review board (IRB) to provide ethical approval for every research analysis intended for publication. This framework requires separate IRB approval for each individual analysis and effectively precludes broad research programs, such as the analysis methods associated with “big data”.

Research institutions can avoid the IRB approval process in two ways. The first approach removes PHI and identifying information from the records before research begins. This alters the electronic health records to the point that the de-identified data is no longer considered “human subjects” research, so IRB approval is not required. It can also be risky, because the de-identification process distorts the data and may require altering important clinical fields such as demographics and diagnoses. The second approach is essentially to wait until the 50-year protected period expires and use the electronic health records of patients who died more than 50 years ago. In the first approach, we compromise the integrity and accuracy of the research by classifying it as non-human-subjects research; in the second, while legal, we compromise on ethical research principles and practices. We can hope to bridge the gap between these two approaches by building the data infrastructure necessary to conduct “big data” analysis on electronic health records and by establishing an ethical code of conduct for the use of posthumous patient data.
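
To make the first approach concrete, here is a minimal, hypothetical sketch of record de-identification in Python. The field names and the record structure are assumptions for illustration only; real HIPAA Safe Harbor de-identification covers 18 categories of identifiers and typically generalizes dates, ages, and geography far more carefully than this.

```python
from copy import deepcopy

# Hypothetical identifier fields to strip (a small subset of the Safe Harbor list).
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn", "mrn"}

def deidentify(record: dict) -> dict:
    """Return a copy of an EHR-style record with direct identifiers removed
    and quasi-identifiers coarsened (illustrative only)."""
    clean = deepcopy(record)
    for field in DIRECT_IDENTIFIERS:
        clean.pop(field, None)
    # Coarsen quasi-identifiers: keep only the year of birth, cap reported ages,
    # and truncate ZIP codes to three digits, in the spirit of Safe Harbor.
    if "birth_date" in clean:
        clean["birth_year"] = clean.pop("birth_date")[:4]
    if clean.get("age", 0) > 90:
        clean["age"] = 90
    if "zip" in clean:
        clean["zip"] = clean["zip"][:3] + "**"
    return clean

if __name__ == "__main__":
    raw = {"name": "Jane Doe", "mrn": "12345", "birth_date": "1948-07-02",
           "age": 72, "zip": "94720", "diagnosis": "type 2 diabetes"}
    print(deidentify(raw))
```

Note how even this toy version alters demographic fields (ages, ZIP codes), which is exactly the distortion-versus-privacy trade-off described above.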

In 2019, a group of ethicists and lawyers at the Oxford Internet Institute came together to establish the first ethical code for posthumous medical data donation. This ethical code is based on five foundational principles that aim to balance the key risks of using personal health data against the promotion of the common good:

  1. Human dignity and respect for persons
  2. Promotion of the common good
  3. The right to Citizen Science
  4. Quality and good data governance
  5. Transparency, accountability, and integrity

Furthermore, they outline ethical conditions for the collection of posthumous medical data donations as well as ethical practices for the use of the data in research settings. They hope their work encourages the availability of personal medical data for scientific research in a safe and ethical setting.

In addition to establishing an ethical code of conduct for post-mortem EHR use, researchers at the US National Institutes of Health have proposed a formal informatics infrastructure for EHR data. Operating within current legal regulations and an ethical code of conduct, they describe a deceased-subject integrated data repository as an effective tool for observational clinical research. Looking forward, there is increasing interest and urgency in building the necessary tools and maintaining these guiding principles for the use of posthumous medical data. With the advance of both “big data” methods and digital health reporting tools, it is imperative that we lay the foundation for this area of medical research so that we can improve the quality of the research conducted while protecting the privacy of, and respect for, members of our society.

References

  1. Krutzinna J., Taddeo M., Floridi L. (2019) An Ethical Code for Posthumous Medical Data Donation. In: Krutzinna J., Floridi L. (eds) The Ethics of Medical Data Donation. Philosophical Studies Series, vol 137. Springer, Cham. https://doi.org/10.1007/978-3-030-04363-6_12
  2. Huser V, Cimino JJ. Don’t take your EHR to heaven, donate it to science: legal and research policies for EHR post mortem. J Am Med Inform Assoc. 2014;21(1):8-12. doi:10.1136/amiajnl-2013-002061
  3. Health Information of Deceased Individuals, https://www.hhs.gov/hipaa/for-professionals/privacy/guidance/health-information-of-deceased-individuals/index.html
  4. Use of Electronic Patient Data in Research, https://journalofethics.ama-assn.org/article/use-electronic-patient-data-research/2011-03
  5. Data donation after death, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4718407/


Is WeChat technologically ethical?
By Anonymous | May 28, 2021

When it comes to popular social media platforms in the United States, people often point to Facebook. In China, that application is WeChat, the so-called Chinese version of Facebook, which not only dominates the Chinese market but has also grown to have an important impact on the global market. In fact, as of 2019, WeChat’s monthly active users were estimated to exceed one billion (Daily Active Users).

However, while WeChat and Facebook are similar in their social functionality, WeChat is intrinsically quite different. Calling WeChat simply a communication application like Facebook is an understatement. According to its China business registration, WeChat is an official internet-based bank and serves as a major intermediary for money. Visitors traveling in China are often surprised to find everyone paying for everything simply by scanning a WeChat QR code. The app is deeply embedded into all aspects of life in China.

This brings me to the question: is WeChat technologically ethical? Although WeChat seems to make life for Chinese people efficient and convenient, the enormous amount of data it holds, including financial transactions, health records, and communication footprints, makes ethical consideration of this social application an essential topic.

Oftentimes, users of WeChat find the application so convenient that the amount of personal data it records is easily overlooked. From the official identification required to register an account, to message conversations with friends, linked debit and credit cards, and important documents such as contracts, it is easy to imagine how unprecedentedly powerful WeChat’s database is, and that does not even include the third-party data WeChat collects. If such a powerful trove of personal data is not well protected, it can easily be turned against the users who put their trust in the application.

Whether the Chinese government should take over Tencent’s power in order to protect this data has been a hotly debated topic. On one hand, some highlight the benefits of transferring WeChat’s power to the Chinese government, arguing that it would help regulate WeChat as China’s social platform monopoly and protect national security. Indeed, it is a warning sign when a particular profit-oriented company grows too powerful: WeChat holds the personal data of a huge group of citizens in one of the most populous countries in the world, and national security could become an issue. Facebook’s data breach in 2019 is a vivid warning to the Chinese government. On the other hand, some people fear the government’s increasing power over its citizens if it does take over WeChat. Indeed, WeChat content such as messages is already censored in one way or another to ensure security, and the Tencent WeChat team is backed by local police stations in case of emergency. Notably, however, regarding government requests to access encrypted messages, Tencent is the only major company in the big data world to have publicly denied that possibility.

In conclusion, while WeChat is smart and convenient, people do not know how complicated its black box is, and the pressing question is how “private” their data really are. The volume of WeChat user data involved is so large that data security and privacy will always be concerns. For WeChat in an international setting, regulations that comply with the legal systems of both China and other countries are needed to account for their political, economic, and social differences. Such solutions could potentially be applied to any other massive, data-heavy technology company in an effort to address data security and privacy. None of us wants a society like the one described in George Orwell’s renowned 1984: always eyes watching you and the voice enveloping you, where “nothing was your own except the few cubic centimetres inside your skull” (Orwell).

References

Targeted Advertisements
By Anonymous | May 28, 2021

Over the years, as smartphones have become smarter and technology more prevalent, the fear of being listened to has risen significantly. Whether people believe they are being listened to through their phones or through the smart technology in their homes, there is a widespread notion of some third party listening in. One of the most common arguments for this belief is targeted advertisements: ads uniquely directed toward individual users. It is no secret that targeted advertisements have become very popular, scaring many people about their privacy. Yet most still don’t truly understand how these ads come about, so I am here to tell you that it is very possible that your phone is listening to you or collecting your information. The intention of this post is to explain why people “randomly” get targeted advertisements and how they can take some action to maintain their privacy.

Why do I get targeted advertisements?
When logging into any social media platform, it is common to see eerily relevant advertisements, which makes phone users question how such a thing could be possible. Most smartphone owners have a wide variety of applications, some of them social media and some random third-party apps. Many of these applications, and even websites visited in a browser, ask the user a few questions, such as “Do you accept cookies?” or “Do you grant this application microphone access?” Without a second thought, we are often quick to accept such terms, unaware that these permissions are a key reason for the targeted advertisements we see on social media. When we accept cookies from a website, we essentially hand some data, such as our location and browsing behavior, to the companies behind the site. When we grant microphone access to certain smartphone applications, some investigations have suggested that certain companies harvest data by eavesdropping through the microphone and serving ads accordingly. Sometimes we voluntarily provide personal information about ourselves through online services. And it isn’t too difficult for companies to purchase data from third-party data and research firms. Scary, huh?

You might be wondering what the point of targeted advertisements is and what companies would do with your data. It’s like any other company, website, or application: by inhaling user data, these companies increase the chance that a targeted ad succeeds and converts you into a potential lead. Overall, there are several ways companies collect your data: cookies, social media tracking, purchases from third-party data companies, and simply asking you. There are all sorts of opportunities for your data to get out there.

How do I maintain my privacy?
There are several precautions you can take to protect your data and privacy. It is a good idea for everyone to go through their smartphone’s settings and spend a few moments reviewing the settings for each downloaded application. Make sure the permissions are exactly how you want them and that you aren’t granting any unnecessary access; for instance, a mobile game probably does not need your microphone, yet it may still ask for it. On the web, be mindful of the websites you visit and the cookies you allow them to store, and clear your device’s cookies and cache regularly. These days a lot of our data is out there, and there are many opportunities for companies to obtain it through loopholes. It is important to take the necessary measures to protect our data and maintain our privacy.

References

  • Forbes – Of Course Your Phone Is Listening To You
  • Spiralytics – Mobile Ads: Can Your Phone Hear Your Conversations?
  • Medium – Why Do We See Ads For Things We Just Talked About?

A Chinese Indonesian American’s Call for Data Disaggregation of Asian Americans and Pacific Islanders
By Joshua Lin | May 28, 2021

In light of Asian Pacific American Heritage Month and the passage of the COVID-19 Hate Crimes Act, I felt proud but also wary. What better time to pass a bill signaling the government’s effort to stand by the Asian American and Pacific Islander (AAPI) community than a month celebrating that community? And yet, even though the legislation was a step in the direction of progress and reconciliation, I still didn’t feel wholly satisfied, and truthfully speaking, I had my doubts. Having grown up as a Chinese Indonesian American, I knew deep down through my lived experience of microaggressions and blatant acts of racism that a single act passed in a month of AAPI celebration was only the beginning of a process of addressing the structurally embedded racism toward the AAPI community.

Before I continue, I want to note that I will use the terms “Asian American and Pacific Islander,” “Asian Pacific Islander American,” and “Asian Pacific American” interchangeably. I also want to acknowledge that my experience cannot speak for all the lived experiences of my fellow Asian Pacific Americans, and to recognize the privilege I have in being able to share my thoughts here: given the current political climate, some members of the community have experienced acts of severe violence or have even lost their lives in recent events due to racism against the AAPI community.

The history of Asian Americans, like that of other racial and ethnic minorities in the US, is marked by racism. There are countless examples of events, institutions, and laws that reflect this turbulent history, but the more worthwhile exercise is to ask how racism toward the AAPI community became so embedded in the institutions and structures holding up our country to this day. One of the most impactful means of institutionalizing AAPI racism has arguably been classification systems.

Throughout the history of the US census, the AAPI community has seen itself continuously made visible and then invisible. As Kertzer and Arel (2002) recount, there was constant reshuffling of racial and ethnic categories for the AAPI community; Koreans, for instance, appeared and disappeared from the census in the first half of the 20th century. While the current census attempts to list as many categories as possible and gives respondents the option to declare an ethnicity not listed, research that comes out of the census and similar population and community surveys tends to neglect the diversity of the AAPI community in favor of efficiency and comparability.

Current research often aggregates AAPI subgroups with one another or with other groups. Government economic reports might list Asians/Asian Americans separately from Pacific Islanders, but the very act of reporting metrics only for the Asian American community as a whole obscures the large heterogeneity across ethnic groups. In fact, Li Zhou of Vox reports that Asian Americans have the largest income gap of any single racial group. Meanwhile, in the USDA’s annual food security report, the AAPI community has no representation at all: it is folded into the “Other” category along with American Indians and Alaska Natives, which has harmful implications for all of these communities.

Growing up in an Indonesian community, I was surrounded by families barely making ends meet, and yet these economic reports claimed Asians fared even better than the rest of Americans. Their invisibility indirectly reduced their knowledge of food benefit programs, because the government and nonprofits were not trying to reach these supposedly successful Asians. From this example, we can see that while aggregation facilitates comparison across racial groups, researchers’ decision to lump these groups together effectively removes them from the radar and reinforces their marginalization from American society. The small, hypothetical example below illustrates the mechanism.
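
To show how a single aggregate figure can hide subgroup disparities, here is a brief sketch in Python using pandas. The income and household numbers below are invented purely for illustration and do not come from any survey; the point is only that the aggregate can look healthy while individual subgroups fall far below it.

```python
import pandas as pd

# Hypothetical household income data (illustrative numbers only, not real survey results).
df = pd.DataFrame({
    "subgroup": ["Indian", "Chinese", "Filipino", "Indonesian", "Burmese", "Hmong"],
    "median_income": [119_000, 85_000, 90_000, 58_000, 44_000, 48_000],
    "households": [1_900_000, 1_600_000, 1_100_000, 30_000, 60_000, 80_000],
})

# Aggregated view: one number for all "Asian American" households.
weighted_mean = (df["median_income"] * df["households"]).sum() / df["households"].sum()
print(f"Aggregate 'Asian American' income: ${weighted_mean:,.0f}")

# Disaggregated view: the same data split by subgroup shows who the aggregate hides.
print(df.sort_values("median_income", ascending=False).to_string(index=False))
```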

In this way, official government methods of classification have shaped reality to fit these categories, creating seemingly homogeneous racial and ethnic groupings that never existed until the rise of the modern nation-state and the decennial census. As these large-scale classification efforts shape reality, we can see how institutions continue to cement the categories. Ellen Wu writes that the government promoted the model minority myth to downplay the Civil Rights Movement in the 1960s, citing the supposedly successful assimilation of the AAPI community into American society and erasing the history of the Chinese Exclusion Act, the Gentlemen’s Agreement with Japan, and Japanese internment (and in all these events, the census also played a role in controlling and oppressing the AAPI population).

But who exactly had assimilated? All 50-plus ethnic groups of the AAPI community? With the creation of the model minority came the creation of the Asian stereotype: a fair-skinned, small-eyed, black-haired individual with a small frame, a meek demeanor, and high intelligence. This imagery had roots in racist media productions that fetishized and sexualized AAPI women and in colonial-era propaganda portraying AAPI men as emasculated (or even wicked) to facilitate imperialist agendas. Arguably a majority of the AAPI community does not fit this stereotype, yet all of its members have been expected to. As a result, data aggregation through these classification systems serves as a modern method of controlling the AAPI community, continuing the history of racism we have experienced.

And so here I am, as a Chinese Indonesian American, calling for the continued effort to make all of us Asian Pacific Islander Americans seen in our uniqueness. While that push has begun in different parts of society, we are still far from complete visibility.

References

Sexual Assault Data & Rideshare Apps
By Hannah Choi | May 28, 2021

“Safety should never be proprietary. You should be safe no matter what ridesharing platform you choose,” – Tony West, Uber’s chief legal officer

With the convenience of ridesharing apps, travel has gotten significantly easier for many. Along with those benefits, however, comes the obvious risk of getting into a stranger’s vehicle. The safety of ridesharing apps has come under scrutiny over the past few years, with Uber having its London license revoked twice since 2017 for putting passengers at risk. Under public pressure to do better, some of these companies have started taking steps toward improving safety using data.

Data Privacy for Sexual Assault Survivors & Uber
In an effort to set a standard for the company’s safety protocols and to contextualize the statistics in its findings, Uber released its first safety report, including data on car crashes, deaths that occurred during rides, and sexual assault cases. The report revealed that 3,045 cases of sexual assault connected to its US rideshare services had been reported in 2018. Shortly after the release of this information, the CPUC (California Public Utilities Commission) requested further data from Uber regarding the victims of and witnesses to the alleged assaults, the names and contact information of the authors of the safety report, and the individuals at Uber to whom the sexual assault incidents were reported. Uber refused to comply, on the basis that releasing data on victims of sexual assault without their consent was a violation of privacy and that the data would be handled by “untrained individuals”. Despite the CPUC addressing privacy concerns by confirming that the data would remain under seal and inaccessible to the public, Uber’s continued refusal to provide the data led to a $59 million fine.

Implications for Sexual Assault Survivors
The nonprofit RAINN (Rape, Abuse & Incest National Network) and other victims’ rights groups supported Uber’s decision to withhold the sensitive information, as sharing identifying data and details of the incidents, especially without consent, could retraumatize victims. Many sexual assault victims choose not to report to the police, and sharing this data with the CPUC would directly go against those survivors’ wishes about what sensitive information is shared with the state.

Uber and Lyft Join Together to Share Data on Banned Drivers
In March 2021, Uber and Lyft announced that, through their Industry Sharing Safety Program, they would share data with each other on drivers who had been banned from their platforms for sexual assault and other severe incidents. To protect individual privacy, both companies will process incident reports via a third-party background-screening company. By sharing data on drivers flagged for sexual assault, Uber and Lyft hope to improve safety and protect customers by preventing banned drivers from evading a ban on one app by hopping onto the other. The database will also not include personally identifiable information for the passengers involved in reported incidents.
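
The companies have not published the technical details of the matching process, so the following is only a speculative sketch of how a shared ban list could avoid exchanging raw personal data: each company submits a keyed hash of a stable driver identifier (here, a hypothetical license number) to the clearinghouse, which can flag matches without ever seeing the identifier itself. All names, fields, and the shared key are assumptions for illustration.

```python
import hashlib
import hmac

# Hypothetical secret held by the third-party clearinghouse, not by either company.
CLEARINGHOUSE_KEY = b"rotating-secret-issued-by-screening-vendor"

def pseudonymize(license_number: str) -> str:
    """Return a keyed hash of a driver identifier so companies never exchange raw PII."""
    return hmac.new(CLEARINGHOUSE_KEY, license_number.encode(), hashlib.sha256).hexdigest()

# Company A reports a banned driver; only the pseudonym and incident category are shared.
banned_registry = {pseudonymize("D1234567"): "sexual assault"}

# Company B checks an applicant against the registry before activating them.
applicant = pseudonymize("D1234567")
if applicant in banned_registry:
    print("Flagged by Industry Sharing Safety Program:", banned_registry[applicant])
else:
    print("No shared-ban record found")
```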

Future Steps
While Uber and Lyft have kickstarted the Industry Sharing Safety Program, they have stated that other ridesharing companies will be allowed to share the database in the future. This step toward sharing data transparently and securely, holding drivers accountable, and enhancing safety measures for customers will hopefully encourage other companies to follow suit.

References

The Ethical Considerations of Instagram for Kids
By Anonymous | May 28, 2021

“Current Instagram policy forbids children under the age of 13 from using the service” (Mac, Silverman 2021). That is all about to change, as Instagram’s parent company, Facebook, has added the creation of an Instagram for kids to its priority objectives.

This piece will describe and analyze the impact of the new social media platform, as well as provide possible ethical guidelines and expose shortcomings from the perspective of the Belmont Report.

Currently, extensive details about Facebook’s new project aren’t available, as little has been officially declared by Facebook. However, in a short interview with Buzzfeed, Adam Mosseri, the head of Instagram, said “part of the solution is to create a version of Instagram for young people or kids where parents have transparency or control.”

In an interview with Bloomberg Technology, Naomi Nix from Bloomberg News drew inferences from Facebook Messenger Kids to make predictions about possible features of Instagram for kids. She expects that parents will have greater parental control and access to information such as who their children are messaging with, what videos and images they are sending and receiving, and even be able to control what time of day their child can access the application. However, she also warns there has been considerable pushback from parents due to privacy concerns and whether it is healthy to allow such young users access to social media.

Many parents and adults hold reasonable skepticism given Facebook’s worrying track record, including the Cambridge Analytica scandal in 2018 (Criddle 2020). The fact that Facebook tracks users and capitalizes on the petabytes of data they produce is no longer a secret. Therefore, to lessen the ethical impact of Instagram Kids, I will present in this post what I think are sensible steps Facebook could take, through the lens of the Belmont Report, a staple document that lays out guidelines for ethical human subjects research.

Respect for Persons:

The collection of data about people needs to be respectful, and one needs to be even more careful when dealing with children’s data. Due to their lack of experience, children are considered to have diminished autonomy and are thus entitled to greater protection. Simply getting a young child to accept Instagram’s terms and conditions, which they may only partially understand, is out of the question. It would also be unreasonable to expect a child who willingly accepts a company’s terms today to still accept them when they are older.

A possible workaround to these issues could be to require a parent’s confirmation for their child to use Instagram for kids. As children grow up, they should also have the freedom to revoke or opt out of their contracts with Facebook, and their existing data should then be deleted in its entirety: a child’s past behaviors or actions should not dictate their fate in the rest of their academic, professional, and social lives.

Beneficence:

The purpose of platforms like Instagram Kids is supposedly to protect children from the vast content available online. However, these platforms must also be considerate in protecting the digital privacy of their users. As Facebook will almost certainly benefit from the data generated by its expanded audience, it must also do its part in mitigating the harms caused to those same users.

The digital footprint left behind should be ephemeral rather than permanent, to protect users from being subject to profiling and tracking from such an early age. Children should not be the target of advertisements and should be protected from those looking to profit from the innocence of young users.

Justice:

Facebook is not simply aiming its new product at children; it also risks marginalizing and targeting subgroups within that population of users. Facebook will likely receive more data from children who are more vulnerable and more susceptible to being hooked and affected by the addictive nature of Instagram. This could cause those students to spend less time focusing on their schoolwork and more time endlessly scrolling through their feeds.

Instagram for kids will also likely intensify popularity contests among students and make gossip spread more easily within schools. These issues often target those who are more self-conscious and vulnerable to the judgement of their peers and friends, and they could also be correlated with increases in bullying and cyberbullying, all of which seriously harm students’ self-esteem and mental health.

Unfortunately, from a justice standpoint, Facebook’s new product has the potential to wreak havoc on students’ lives. The line between school, private, and digital lives will become less clear, which can cause several problems for children, especially those among them who are more psychologically vulnerable.

References

“Facebook Building a Version of Instagram for Kids Under 13.” YouTube, 19 Mar. 2021, https://www.youtube.com/watch?v=G2w_V4HXl4k

“Facebook Sued over Cambridge Analytica Data Scandal.” BBC News, 28 Oct. 2020, https://www.bbc.com/news/technology-54722362

Heilweil, Rebecca. “Seems like Everyone Hates Instagram for Kids.” Vox, 15 Apr. 2021, https://www.vox.com/recode/22385570/instagram-for-kids-youtube-facebook-messenger

Cryptocurrencies: A forgotten ethical question
By Daniel Lampert | May 28, 2021

On May 22, 2010, the first real-world cryptocurrency purchase took place: Laszlo Hanyecz bought two pizzas for 10,000 Bitcoin, today valued at over $600 million. Bitcoin, the first decentralized cryptocurrency, appeared only twelve years ago and was at first considered a potential substitute for traditional, centralized currencies (Investopedia, n.d.). Over the years, the use of Bitcoin and subsequent cryptocurrencies has shifted dramatically, from a currency alternative to a speculative investment that fluctuates wildly in value from day to day. The use of and investment in cryptocurrencies presents substantial ethical and legal challenges.

Due to their speculative nature, cryptocurrencies expose investors to substantially higher risk than traditional investments do. Like fiat currencies, cryptocurrencies have no intrinsic backing; their price is determined purely by market demand. This is true of many currencies, including the US dollar, but in the case of cryptocurrencies the price is driven almost entirely by people changing their minds about what a coin is worth (Turchi, n.d.). For centralized currencies, prices vary in much more complicated ways than beliefs about worth alone. For this reason, cryptocurrencies are subject to much more instability than both other currencies and more traditional investments.

Another major challenge with cryptocurrencies is accessibility. Many people simply do not have the resources needed to open a crypto wallet, which means only a certain class of people stands to benefit from potential profits. The accessibility issue is exacerbated by the fact that cryptocurrencies are not traded on traditional exchanges, so interested people need specialized accounts to buy them.

The technology that cryptocurrencies rely on is also having a major impact on global warming, with many sources indicating that cryptocurrency networks use more electricity than the entire country of Argentina. In fact, cryptocurrencies use more energy per transaction than any other currency: research suggests that one cryptocurrency transaction is roughly equivalent to 735,000 Visa transactions or 55,280 hours of YouTube (Sorkin, 2021). Cryptocurrency is so energy intensive because the technology that powers it depends on a computation-heavy process known as mining, in which computers race to solve difficult cryptographic puzzles, consuming large quantities of electricity. Considering the climate crisis we currently face, moving toward a currency system that is extremely energy intensive is highly concerning. Many people who work with cryptocurrencies have stated that the carbon footprint of their products depends on how clean the energy grid is where the mining takes place (Sorkin, 2021). Even if this is accurate, the impact of cryptocurrencies would likely still be high, since many countries and regions still rely primarily on fossil fuels for energy generation.
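
To give a sense of why mining is so computationally expensive, here is a minimal proof-of-work sketch in Python. It is a toy illustration of the general idea behind Bitcoin-style mining, not the actual Bitcoin protocol: miners repeatedly hash a block of data with different nonces until the hash meets a difficulty target, and the only way to find such a nonce is brute force.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Find a nonce whose SHA-256 hash of (block_data + nonce) starts with
    `difficulty` zero hex digits. Expected work grows roughly 16x per extra digit."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

if __name__ == "__main__":
    # Even at this toy difficulty, tens of thousands of hashes are typically needed;
    # real Bitcoin difficulty requires on the order of 10^22 hashes per block.
    nonce, digest = mine("example block: Alice pays Bob 1 BTC", difficulty=4)
    print(f"nonce={nonce}, hash={digest}")
```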

Under a legal framework, cryptocurrencies do not meet the minimum requirements to be considered a valid currency: legal currencies must have legal tender status, central management, and a physical form, and cryptocurrencies meet none of these criteria (Bal, 2015). One major issue that arises from this is taxation. Income from cryptocurrencies is generally taxable, but the rules are often quite complicated, and most cryptocurrency investors are unaware of the laws governing taxation of cryptocurrency revenue (Bal, 2015). Moreover, many people invest in cryptocurrencies explicitly to avoid taxes, using them as a pseudo tax haven (Bal, 2015). Another major legal issue is that cryptocurrencies facilitate black market purchases, particularly on the dark web (Popper, 2020). They allow people who deal in prohibited products or services to transact far more anonymously, which in many ways facilitates illegal activity.

Perhaps most of cryptocurrencies’ problems could be addressed by making the currency less elusive, more legitimate, and more regulated. Currently the process of owning crypto is quite complicated: one can buy it on popular platforms such as PayPal or Robinhood, but in those cases the individual does not actually own the currency. Improving access to crypto wallets would give more people access to these markets and would likely have the secondary effect of stabilizing price fluctuations. Another move to improve legitimacy would be to increase the number of markets where cryptocurrencies are an accepted way to pay. Over time, these changes would likely reduce the negative impact.

References

Bal, A. (2015). Chapter 14—How to Tax Bitcoin? In D. Lee Kuo Chuen (Ed.), Handbook of Digital Currency (pp. 267–282). Academic Press. https://doi.org/10.1016/B978-0-12-802117-0.00014-X

Popper, N. (2020, January 28). Bitcoin Has Lost Steam. But Criminals Still Love It. The New York Times. https://www.nytimes.com/2020/01/28/technology/bitcoin-black-market.html

Sorkin, A. R. (2021, March 9). Bitcoin’s Climate Problem. The New York Times. https://www.nytimes.com/2021/03/09/business/dealbook/bitcoin-climate-change.html

Bitcoin. (n.d.). Investopedia. Retrieved May 24, 2021, from https://www.investopedia.com/terms/b/bitcoin.asp

Turchi, J. (n.d.). Rutgers professor raises doubts on ethics of bitcoin, cryptocurrency. The Daily Targum. Retrieved May 25, 2021, from https://dailytargum.com//article/2019/02/rutgers-professor-raises-doubts-on-ethics-of-bitcoin-cryptocurrency

The Road to Biological Immortality Opens Pandora’s Box
By Wesley Kwong | May 28, 2021

The human quest for immortality is deeply rooted in history and culture, and now scientists have joined it. A paradigm shift has occurred in science: the diseases one associates with old age, like heart disease and cancer, are increasingly seen as merely symptoms of the true chronic disease known as aging. This new framework also implies that aging is not natural and thus can be treated [1]. A recent discovery pushes that vision one step closer to reality, but it also opens some possible ethical concerns.

The Horvath clock, developed in 2013 by Professor Steve Horvath of UCLA, is an algorithm that accurately predicts a person’s biological age, with a median error estimate of 3.6 years, using 353 epigenetic markers on DNA [2]. Scientists now have a quantitative way to measure the effectiveness of potential anti-aging medications and treatments [3]. The biotechnology industry quickly jumped on this, and companies such as MyDNAge and Chronomics began selling kits to consumers who want to know how their biological age compares to their actual age. While the ethical issues surrounding traditional DNA testing (patient privacy, informed consent, and appropriate use of data) still apply here, there are additional ethical and legal issues that governments and society must grapple with [4].
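
For readers curious what such a clock looks like computationally, here is a minimal sketch of fitting an epigenetic-clock-style predictor, assuming you already have a matrix of DNA methylation beta values and chronological ages. It illustrates the general approach (a penalized regression that selects a sparse set of CpG sites), not a reimplementation of Horvath’s published model, and the data below are randomly generated.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated data: 500 samples x 2,000 CpG sites of methylation beta values in [0, 1].
# A small subset of sites drifts with age, mimicking age-associated methylation.
n_samples, n_sites, n_informative = 500, 2000, 50
age = rng.uniform(0, 90, n_samples)
X = rng.uniform(0, 1, (n_samples, n_sites))
X[:, :n_informative] += 0.004 * age[:, None]   # age-related drift on informative sites
X = np.clip(X, 0, 1)

X_train, X_test, y_train, y_test = train_test_split(X, age, random_state=0)

# Penalized regression keeps only a sparse set of informative CpG sites,
# analogous in spirit to the 353 markers of the Horvath clock.
clock = ElasticNetCV(l1_ratio=0.5, cv=5, random_state=0)
clock.fit(X_train, y_train)

pred = clock.predict(X_test)
print("sites with nonzero weight:", np.sum(clock.coef_ != 0))
print("median absolute error (years):", np.median(np.abs(pred - y_test)))
```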

Objectivity as a Biological Metric

While the Horvath clock builds on previous aging-prediction research and provides a very accurate measurement of biological age regardless of the DNA source, more research must be done to determine what factors influence it, in order to ensure that the algorithm is an unbiased, objective metric for all population subgroups. Studies are currently being conducted on its validity with respect to human developmental stages, biological sex, race, pregnancy, diseases, genetic factors, and lifestyle factors. Further research is also under way on the optimal combination of DNA methylation factors to reduce the error estimate and improve the robustness of the prediction [5].

Legal Age

The addition of biological age brings up all sorts of challenges for boundaries established by age. It reignites the age-old discussion of whether juveniles should be tried in the justice system as adults [5]. If an 18-year-old commits a crime but is biologically, and thus arguably mentally, 17 years old, would the individual be tried as an adult or a minor? What happens in the reverse case? Another issue is retirement. The retirement age is generally the expected age at which people stop working because they can no longer physically or mentally perform at the expected level. But with this technology, does a 70-year-old get retirement benefits even though they are biologically 10 years younger?

Age Discrimination

Age discrimination is the unfair treatment of an employee in decisions to hire, promote, adjust benefits, or lay off based on their age. It is prohibited in the US by the Age Discrimination in Employment Act of 1967 for workers 40 years and older. However, age discrimination remains a significant problem, making up more than 1 out of every 5 discrimination cases reported to the US Equal Employment Opportunity Commission [6]. Discrimination based on a person’s biological age should be covered by the Genetic Information Nondiscrimination Act of 2008, which prohibits the use of DNA tests in employment decisions [7]. But given that age discrimination is still rampant in the workplace, the addition of a biological age metric might exacerbate it. Furthermore, much like the current social stigma against obesity, there may be a stigma against people who are biologically older than their actual age.

In conclusion, the discovery of the Horvath clock will have profound impacts on humanity. Not only does it pave the way toward healthier lifestyles and potentially eternal youth, it also changes our relationship with death. Perhaps people will seek a more fulfilling life when death becomes a solid deadline instead of some nebulous ending that will happen sometime in the future. But this discovery also poses novel ethical and legal concerns that do not appear in a traditional DNA test. We, as a society, need to grapple with these concerns to ensure that the Belmont principles are upheld as this revolutionary technology matures and leaves Pandora’s box.

References
1. “A Conceptual Shift to (Finally) Seeing Aging as the Cause of Age-Related Disease.” Fight Aging!, 15 Jan. 2021, www.fightaging.org/archives/2021/01/a-conceptual-shift-to-finally-seeing-aging-as-the-cause-of-age-related-disease/.
2. “DNA Methylation Age and the Epigenetic Clock.” DNA Methylation Age Calculator, horvath.genetics.ucla.edu/html/dnamage/.
3. Yacoubi, Mehdi. “How Healthy Are You? The New Horvath Clock Will Tell You.” Medium, Vital Health, 7 Oct. 2020, medium.com/lifetizr/this-will-change-how-we-treat-aging-3c0e1d58900d.
4. Charles Dupras Postdoctoral Fellow, et al. “New DNA Test That Reveals a Child’s True Age Has Promise, but Ethical Pitfalls.” The Conversation, 12 Oct. 2020, theconversation.com/new-dna-test-that-reveals-a-childs-true-age-has-promise-but-ethical-pitfalls-126676.
5. Bell, Christopher G., et al. “DNA Methylation Aging Clocks: Challenges and Recommendations.” Genome Biology, vol. 20, no. 1, 2019, doi:10.1186/s13059-019-1824-y.
6. Kita, Joe. “Age Discrimination Still Thrives in America.” AARP, 30 Dec. 2019, www.aarp.org/work/working-at-50-plus/info-2019/age-discrimination-in-america.html.
7. Spiggle, Tom. “The Legality of DNA Testing In The Workplace.” Forbes, Forbes Magazine, 11 Aug. 2020, www.forbes.com/sites/tomspiggle/2020/08/11/the-legality-of-dna-testing-in-the-workplace/?sh=1ea22d1d6fb5.

Data Obfuscation and the U.S. Census
By Zain Khan | May 28, 2021

The decision by the U.S. Census Bureau to introduce deliberate errors into their data and now potentially bring in “synthetic data” has researchers up in arms. The use of synthetic data involves the deliberate manipulation of census data in an effort to protect the identities of those involved. Some researchers oppose this action on the grounds that loss of accuracy will harm the data’s research potential.

The act of introducing noise and synthetic data is not at all a new practice; it is known as data obfuscation. Data obfuscation is the “process of replacing sensitive information with data that looks like real production information, making it useless to malicious actors” (Imperva). The need for obfuscation comes from the fact that personal data can be tied to individual identities by malevolent third parties, and it aims to mitigate the dangers posed by the mass collection of personal data in a survey like the US census. In fact, data obfuscation is often required under compliance standards such as the EU’s General Data Protection Regulation (GDPR). It is especially important when dealing with population census data.
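
The Census Bureau’s actual disclosure-avoidance system for 2020 is far more elaborate (a formal differential-privacy mechanism), but a minimal sketch of the underlying idea, injecting calibrated random noise into small-area counts before release, might look like the following. The counts and the privacy parameter here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_counts(true_counts: dict, epsilon: float) -> dict:
    """Add Laplace noise to each published count. Smaller epsilon means more
    noise and stronger privacy; results are rounded and floored at zero."""
    scale = 1.0 / epsilon  # the sensitivity of a simple count query is 1
    return {
        group: max(0, round(count + rng.laplace(0, scale)))
        for group, count in true_counts.items()
    }

# Hypothetical block-level counts for a small geographic area.
block = {"total": 212, "age_65_plus": 34, "renter_households": 57}
print(noisy_counts(block, epsilon=0.5))
```
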
In The Dark Side of Numbers, William Seltzer and Margo Anderson delve into some of the historical atrocities that have been enabled by population census data. In focusing on the dangers of this kind of data, the duo establishes a three-way classification system for data that can be used to target vulnerable individuals or groups (Seltzer and Anderson). The three groups are identified as Macro, Meso, and Micro data.

The three categories can be summarized as follows: macro data are the census statistics we are most familiar with, reported for large geographic areas; micro data are individual-level records, the lowest level of data; and meso data are statistical results for small geographic areas.

For the 2020 US Census, the Census Bureau seeks to synthesize micro data to protect individuals. Critics of the decision claim that the addition of such inaccuracies will undermine the credibility of the census. University of Minnesota demographer Steven Ruggles goes so far as to say that the addition of synthetic data “will not be suitable for research” (AP News).

Despite Ruggles’ claims, Seltzer and Anderson’s work highlights the need for data obfuscation and safeguards in census data. The duo outlines a list of safeguards that should be used in tandem to protect against the kinds of misuse that have occurred in the past. The use of data obfuscation in this context falls under what Seltzer and Anderson define as “Methodological and Technological Safeguards”.

While researchers themselves are held to ethical standards, their use of micro data results in the publication of meso data. As seen in Seltzer and Anderson’s work, meso data is where the danger of census data lies, since it can be used to target vulnerable population subgroups. It takes only a brief skim of their work to see the dire consequences that have followed when these basic safeguards were not upheld.

Data obfuscation is necessary for census data built from micro data; without it, the census itself is inherently dangerous. Researchers such as Ruggles, who claim that the Bureau is “inventing imaginary threats to confidentiality” (AP News), fail to recognize the historical impact that improper population data collection practices have had. While there may be a small level of inaccuracy in the data, no amount of research is worth risking the well-being of American citizens. Safeguards such as the introduction of synthetic data are the bare minimum for data collection practices moving forward.

References

Machine Learning Bias in Word Embedding Algorithms
By Anonymous | May 28, 2021

I have been taking a natural language processing (NLP) course at UC Berkeley for the last few months. One of the key concepts in NLP is the word embedding. For those not familiar with NLP, here is a simple explanation: a word embedding is a learned representation for text in which words with similar meanings have similar representations. Word embedding methods learn a real-valued vector representation for a predefined, fixed-size vocabulary from a corpus of text. The learning process is either joint with a neural network model on some task, such as document classification, or an unsupervised process that uses document statistics.

In word embedding models, each word in a given language is assigned a high-dimensional vector, such that the geometry of the vectors captures relations between the words. For example, the vector for the word King has a higher cosine similarity with Queen than with television, because King and Queen have closely related meanings while television is quite different. To preserve the semantics of the natural language as fully as possible, word embeddings are usually trained on huge corpora, for example Google News (about 100 billion words). And that huge corpus is what brings gender bias into the algorithm.
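
As a concrete illustration, here is a short sketch using the gensim library and a small pretrained GloVe model (a lightweight stand-in for the much larger Google News embeddings described above); the exact similarity numbers will depend on which model you load.

```python
import gensim.downloader as api

# Load a small pretrained embedding model (downloads ~66 MB on first use).
model = api.load("glove-wiki-gigaword-50")

# Cosine similarity: related words score higher than unrelated ones.
print("king vs queen     :", model.similarity("king", "queen"))
print("king vs television:", model.similarity("king", "television"))

# The classic analogy query: king - man + woman ~= ?
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```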

In our word embedding algorithm, we have:

Man + royal = King
Woman + royal = Queen

and that looks great; that is exactly how we want our algorithm to learn our language. But when it comes to occupations, the algorithm can be biased. One example would be:

Man + medical occupation = Doctor
Woman + medical occupation = Nurse

From this example, we can see the ‘stereotype’ in our algorithm. If man corresponds to doctor, then woman should correspond to doctor as well; however, the algorithm does not behave that way.

And here is an even more biased example with our word embedding algorithm: if we take man corresponding to programmer and ask what woman corresponds to, the algorithm gives us homemaker. The result is hard to believe, and we need to think about why it happens. The words programmer and homemaker are gender-neutral by definition, but a word embedding model trained on the Google News corpus finds that programmer is closer to male than to female because of the social perception we have of this job, and because that is how people actually use English. Therefore, we cannot simply blame the word embedding algorithm for being biased; we should ask ourselves whether we think and speak in a biased way, with the algorithm picking up those biases and producing a biased model.
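
That man : programmer :: woman : ? probe can be reproduced with the same analogy machinery, as in the hedged sketch below. The famous homemaker result came from Word2Vec trained on Google News, and the exact output depends heavily on which pretrained embeddings you load, so treat this as an experiment rather than a guaranteed reproduction.

```python
import gensim.downloader as api

# The original finding used the 300-dimensional Google News Word2Vec vectors;
# this is a large download, so a smaller GloVe model can be substituted for a
# quicker (if less faithful) experiment.
model = api.load("word2vec-google-news-300")

# man : programmer :: woman : ?
for word, score in model.most_similar(positive=["programmer", "woman"],
                                      negative=["man"], topn=5):
    print(f"{word:15s} {score:.3f}")
```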

Diving more deeply into gender bias in word embeddings, here are two word clouds showing the k-nearest embedding neighbors of man and woman.

In the man group we find businessman, orthopedic_surgeon, magician, mechanic, etc., and in the woman group we find housewife, registered_nurse, midwife, etc. As we can see from these results, the word embedding algorithm is capturing people’s occupational stereotypes present in the Google News dataset.

A more shocking result concerns the leadership words people use to describe men and women.

There are over 160 leadership words used to describe men but only 31 used to describe women; the numbers alone speak volumes. If you are interested, you can use this website to generate more word clouds: http://wordbias.umiacs.umd.edu/

To this day, many people are still fighting for gender equality, and the word embedding algorithm is ‘biased’. But to me, the real question is: is the algorithm biased, or is the language from which the algorithm learned biased? That is the confounding problem we need to think about and work on.

References

Gender Bias, Umiacs.umd, wordbias.umiacs.umd.edu/.
Buonocore, Tommaso. “Man Is to Doctor as Woman Is to Nurse: the Dangerous Bias of Word Embeddings.” Medium, Towards Data Science, 3 Mar. 2020, towardsdatascience.com/gender-bias-word-embeddings-76d9806a0e17