Machine Learning and Misinformation
Varun Dashora | July 5, 2022

Artificial intelligence can revolutionize anything, including fake news.

Misinformation and disinformation campaigns are top societal concerns, with discussion of foreign interference through social media coming to the foreground in the 2016 United States presidential election [3]. Since a carefully crafted social media presence garners vast amounts of influence, it is important to understand how machine learning and artificial intelligence algorithms could be used not just in future elections, but also in other large-scale societal endeavors.

Misinformation: Today and Beyond

While today’s bots lack effectiveness in spinning narratives, the bots of tomorrow will certainly be more formidable. Take, for instance, Great Britain’s decision to leave the European Union. Strategies mostly involved obfuscation instead of narrative spinning, as noted by Samuel Woolley, a professor at the University of Texas at Austin who investigated Brexit bots during his time at the Oxford Internet Institute [2]. Woolley notes that “the vast majority of the accounts were very simple,” and functionality was largely limited to “boost likes and follows, [and] to spread links” [2]. Cutting-edge research, however, indicates significant potential for fake news bots. A research team at OpenAI working on language models outlined automated news generation techniques. Output from these algorithms is not automatically fact-checked, leaving these models free rein to “spew out climate-denying news reports or scandalous exposés during an election” [4]. With enough sophistication, bots linking to AI-generated fake news articles could alter public perception if not checked properly.

Giving Machines a Face

Machine learning has come a long way in rendering realistic images. Take, for instance, the two pictures below. Which one of those pictures looks fake?

Is this person fake?
Or is this person fake?

 

You might be surprised to find out that I’ve posed a trick question: they’re both generated by an AI accessible at thispersondoesnotexist.com [7]. The specific algorithm, called a generative adversarial network, or GAN, learns from a dataset, in this case of faces, in order to generate a new face image that could plausibly have been included in the original dataset. While such technology inspires wonder and awe, it also represents a new type of identity fabrication capable of contributing to future turmoil by giving social media bots a face and further legitimizing their fabricated stories [1]. These bots will be more sophisticated than people expect, which makes sifting real news from fake news that much more challenging. The core dilemma is that such fabrication undermines “how modern societies think about evidence and trust” [1]. While bots rely on more than having a face to influence swaths of people online, any reasonable front of legitimacy helps their influence.
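For readers curious what “adversarial” means in practice, here is a minimal sketch of a GAN training loop on toy 2-D data rather than face images (the site above reportedly uses StyleGAN, a far larger model [1]); every layer size and hyperparameter below is an illustrative assumption, not the production system.

```python
# Minimal GAN sketch: a generator learns to produce samples that a discriminator
# cannot distinguish from the real dataset. Toy 2-D points stand in for face images.
import torch
import torch.nn as nn

real_data = torch.randn(1000, 2) * 0.5 + 3.0  # stand-in "dataset" of real samples
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))   # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real = real_data[torch.randint(0, 1000, (64,))]
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, G(noise) yields new samples that plausibly "could have been in
# the original dataset" -- the same idea, scaled up, yields photorealistic faces.
print(G(torch.randn(5, 8)))
```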

Ethical Violations

In order to articulate the specific ethical violations present, the Belmont Report is crucial to understand. According to the Belmont Report, a set of ethical guidelines used to evaluate the practices of scientific studies and business ventures, the following ideas can be used to gauge ethical harm: respect for individual agency, overall benefit to society, and fairness in the distribution of benefits [6]. The respect tenet is in jeopardy because of the lack of consent involved in viewing news put out by AI bots. In addition, the very content these bots put out potentially distorts informed consent on other topics, creating ripple effects throughout society. The aforementioned Brexit case serves as an example: someone contemplating their vote on the day of the referendum would have sifted through a barrage of bots retweeting partisan narratives [2]. In such a situation, it is entirely possible that this hypothetical person would have been influenced by one of these bot-retweeted links. Given the future direction of artificially intelligent misinformation bots, fake accounts and real accounts will become harder to distinguish, exposing a larger share of the population to these technologies.

In addition, the beneficence and fairness clauses of the Belmont Report are also in jeopardy. One of the major effects of AI-produced vitriol is increased polarization. According to Philip Howard and Bence Kollanyi, social media bot researchers, one effect of increased online polarization is “a rise in what social scientists call ‘selective affinity,’” meaning people start to shut out opposing voices as the vitriol increases [3]. These effects constitute a clear violation of beneficence toward the broader society. It is also entirely possible for automated narratives spread by social media bots to target a specific set of individuals; for example, the Russian government extensively targeted African Americans during the 2016 election [5]. This differential in impact means groups of people are targeted and misled unfairly. Given the many ethical ramifications bots can have on society, it is important to consider mitigations for artificially intelligent online misinformation bots.

References

– [1] https://www.theverge.com/tldr/2019/2/15/18226005/ai-generated-fake-people-portraits-thispersondoesnotexist-stylegan

– [2] https://www.technologyreview.com/2020/01/08/130983/were-fighting-fake-news-ai-bots-by-using-more-ai-thats-a-mistake/

– [3] https://www.nytimes.com/2016/11/18/technology/automated-pro-trump-bots-overwhelmed-pro-clinton-messages-researchers-say.html

– [4] https://www.technologyreview.com/2019/02/14/137426/an-ai-tool-auto-generates-fake-news-bogus-tweets-and-plenty-of-gibberish/

– [5] https://www.bbc.com/news/technology-49987657

– [6] https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html

– [7] https://thispersondoesnotexist.com

 

Culpability in AI Incidents: Can I Have A Piece?
By Elda Pere | June 16, 2022

With so many entities deploying AI products, blame is easily diffused when things go wrong. As data scientists, we should keep the pressure on ourselves and welcome the responsibility to create better, fairer learning systems.

The question of who should take responsibility for technology-gone-wrong situations is a messy one. Take the case mentioned by Madeleine Clare Elish in her paper “Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction”. If an autonomous car gets into an accident, is it the fault of the car owner who enabled the autonomous setting? Is it the fault of the engineer who built the autonomous functionality? The manufacturer that built the car? The city infrastructure’s unfriendliness towards autonomous vehicles? And when banks disproportionately deny loans to marginalized communities, is it the fault of the loan officer, the brokers they buy information from, or the repercussions of a historically unjust system? The cases are endless, ranging from misgendering on social media platforms to misallocating resources on a national scale.

A good answer would be that blame is shared amongst all parties, but however true this may be, it does not prove useful in practice. It just makes it easier for each party to pass the baton and take away the pressure of doing something to resolve the issue. With this post, in the name of all other data scientists, I hereby take on the responsibility to resolve the issues that a data scientist is equipped to resolve. (I expect rioting on my lawn sometime soon, with logistic regressions in place of pitchforks.)

Why Should Data Scientists Take Responsibility?

Inequalities that stem from discrimination on demographic features such as age, gender, or race occur because users are categorized into specific buckets and stereotyped as a group. Users are categorized in this way because the systems that make use of this information need buckets to function, and data scientists control these systems. They choose between a logistic regression and a clustering algorithm. They choose between a binary gender option, a categorical gender with more than two categories, or a free-form text box where users do not need to select from a pre-curated list. While this last option most closely follows the user’s identity, the technologies downstream still demand categories. This is why Facebook “did not change the site’s underlying algorithmic gender binary” despite giving users a choice of over 50 different genders to identify with back in 2014. The sketch below illustrates what this encoding choice looks like in practice.
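As a concrete illustration of that choice, here is a minimal sketch, using hypothetical column names and categories (not any platform’s real schema), of how the same gender field can be collapsed into a binary or preserved as a richer categorical for a model.

```python
# Sketch of two encoding choices for the same self-reported gender field.
import pandas as pd

users = pd.DataFrame({
    "gender_free_text": ["woman", "non-binary", "man", "agender", "woman"],
    "clicked_ad": [1, 0, 1, 0, 1],
})

# Choice 1: collapse to a binary -- anything outside the binary is erased (NaN).
binary = users["gender_free_text"].map({"woman": 0, "man": 1})

# Choice 2: a richer categorical preserved via one-hot encoding.
categorical = pd.get_dummies(users["gender_free_text"], prefix="gender")

print(binary)
print(categorical)
```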

So What Can You Do?

While there have been a number of efforts in the field of fair machine learning, many of them remain in the format of a scientific paper and have yet to be used in practice, despite the growing interest demonstrated in Figure 1.

Figure 1: A Brief History of Fairness in ML (Source)

Here are a few methods and tools that are easy to use and that may help in practice.

  1. Metrics of fairness for classification models, such as demographic parity, equal opportunity, and equalized odds. “How to define fairness to detect and prevent discriminatory outcomes in Machine Learning” describes good use cases and potential pitfalls when using these metrics (a minimal code sketch of these metrics follows this list).
  2. Model explainability tools that increase transparency and make it easier to spot discrepancies. Popular options listed by “Eliminating AI Bias” include:
     • LIME (Local Interpretable Model-Agnostic Explanations),
     • Partial Dependence Plots (PDPs) to decipher how each feature influences the prediction,
     • Accumulated Local Effects (ALE) plots to decipher individual predictions rather than the aggregations used in PDPs.
  3. Toolkits and fairness packages such as:
     • The What-If Tool by Google,
     • The FairML bias audit toolkit,
     • The Fair Classification, Fair Regression, and Scalable Fair Clustering Python packages.
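As referenced in item 1 above, here is a minimal sketch of computing these group fairness metrics from scratch on toy predictions; the group labels, outcomes, and model decisions are illustrative assumptions, not real data.

```python
# Toy computation of selection rate (demographic parity) and true positive rate
# (equal opportunity) per group.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # ground-truth outcomes
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])                  # model decisions
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # protected attribute

def selection_rate(pred, mask):
    # Fraction of the group receiving the positive decision.
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    # Fraction of the group's actual positives that the model accepts.
    positives = mask & (true == 1)
    return pred[positives].mean()

for g in ["A", "B"]:
    m = group == g
    print(g,
          "selection rate:", selection_rate(y_pred, m),           # demographic parity compares these
          "TPR:", true_positive_rate(y_true, y_pred, m))          # equal opportunity compares these
# Equalized odds additionally compares false positive rates across groups.
```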

Parting Words        

My hope for these methods is that they inform data science practices that have sometimes gained too much inertia, and that they encourage practitioners to model beyond the ordinary and choose methods that could make the future just a little bit better for the people using their products. With this, I pass the baton to the remaining culprits to see what they may do to mitigate –.

This article ended abruptly due to data science related rioting near the author’s location.

Protests in the Era of Data Surveillance
By Niharika Sitomer | June 16, 2022

Modern technology is giving law enforcement the tools to be increasingly invasive in their pursuit of protesters – but what can we do about it?

In the summer of 2020, the country exploded with Black Lives Matter protests spurred by the murder of George Floyd. Even today, the wave of demonstrations and dissent has not ended, with many protests cropping up regarding the recent developments on the overturning of Roe v. Wade and the March for Our Lives events in response to gun violence tragedies. These movements are a sign of robust public involvement in politics and human rights issues, which is a healthy aspect of any democracy and a necessary means of holding governing bodies accountable. However, the use of technological surveillance by law enforcement to track protesters is a dangerous and ongoing occurrence that many may not even realize is happening.

The use of facial recognition technology poses a significant threat of wrongful arrests of innocent people due to misclassification by untested and unfairly developed algorithms. For instance, the software used by the London Metropolitan Police achieved only 19% accuracy when tested by Essex University. Moreover, many of these algorithms lack adequate racial diversity in their training sets, leading the software to misclassify racial minorities at disproportionately high rates. The locations where facial recognition systems are deployed outside of protests are also heavily racially skewed, with the brunt falling disproportionately on Black neighborhoods. This represents a huge disparity in policing practices and increases the likelihood that innocent Black citizens will be misidentified as protesters and arrested. What’s more, the use of facial recognition by law enforcement is largely unregulated, meaning that there are few repercussions for the harms caused by these systems.

It is not only the threat of uninvolved people being targeted, however, that makes police surveillance so dangerous. People who attend protests without endangering public safety are also at risk, despite constituting the vast majority of protesters (93% of summer 2020 protests were peaceful, and even violent protests contain many non-violent protesters). Drone footage is frequently used to record and identify people in attendance at protests, even if their actions do not warrant such attention. Perhaps even more concerning are vigilante apps and the invasion of private spaces. During the George Floyd protests, the Dallas Police launched an app called iWatch, where the public could upload footage of protesters to aid in their prosecution. Such vigilante justice entails the targeting of protesters by those who oppose their causes and seek to weaken them, even if doing so results in unjust punishments. Additionally, LAPD requested users of Ring, Amazon’s doorbell camera system, to provide footage of people who could potentially be connected to protests, despite it being a private camera network where people were unaware they could be surveilled without a warrant. Violations of privacy also occur on social media, as the FBI has requested personal information of protest planners from online platforms, even if their pages and posts had been set to private.

One of the most invasive forms of police surveillance of protesters is location tracking, which typically occurs through RFID chips, mobile technology, and automated license plate reader systems (ALPRs). RFID chips use radio frequencies to identify and track tags on objects, allowing both the scanning of personal information without consent and the tracking of locations long after people have left a protest. Similarly, mobile tracking uses signals from your phone to determine your location and access your private communications, and it can also be used at later times to track down and arrest people who had been in attendance at previous protests; such arrests have been made in the past without real proof of any wrongdoing. ALPRs can track protesters’ vehicles and access databases containing their locations over time, effectively creating a real-time tracker.

You can protect yourself from surveillance at protests by leaving your phone at home or keeping it turned off as much as possible, bringing a secondary phone you don’t use often, using encrypted messages to plan rather than unencrypted texts or social media, wearing a mask and sunglasses, avoiding vehicle transportation if possible, and changing clothes before and after attending. You should also abstain from posting footage of protests, especially that in which protesters’ faces or other identifiable features are visible. The aforementioned methods of law enforcement surveillance are all either currently legal, or illegal but unenforced. You can petition your local, state, and national representatives to deliver justice for past wrongs and to pass laws restricting police from using such methods on protesters without sufficient proof that the target of surveillance has endangered others.

Generalization Furthers Marginalization
By Meer Wu | June 18, 2022

In the world of big data, where information is currency, people are interested in finding trends and patterns hidden within mountains of data. The cost of favoring these huge sets of data is that the relatively small amounts of data representing marginalized populations can often be overlooked and misused. How we currently deal with such limited data from marginalized groups is more a convenient convention than a true, fair representation. Two ways to better represent and understand marginalized groups through data are to ensure that they are proportionately represented and that each distinct group has its own category rather than being lumped together in analysis.

How do we currently deal with limited demographic data of marginalized groups?

Studies and experiments where the general population is of interest typically lack comprehensive data on marginalized groups. Marginalized populations are “those excluded from mainstream social, economic, educational, and/or cultural life,” including, but not limited to, people of color, the LGBTQIA+ community, and people with disabilities [1]. There are a number of reasons that marginalized populations tend to have small sample sizes. Common reasons include studies intentionally or unintentionally excluding their participation [2], people being unwilling to disclose their identities for fear of potential discrimination, and a lack of quality survey design that accurately captures their identities [3]. Groups with small sample sizes often end up being lumped together or excluded from the analysis altogether.

Disaggregating the “Asian” category: The category “Asian-American” can be broken down into many subpopulations.  Image source: [Minnesota Compass](https://www.mncompass.org/data-insights/articles/race-data-disaggregation-what-does-it-mean-why-does-it-matter)
What is the impact of aggregating or excluding these data?

While aggregating or excluding data on marginalized groups ensures anonymity and/or helps establish statistically meaningful results, it can actually cause them harm. Excluding or aggregating marginalized communities erases their identities, preventing access to fair policies guided by research and thus perpetuating the very systemic oppression that causes such exclusion in the first place. For example, the 1998 Current Population Survey reported that 21% of Asian-Americans and Pacific Islanders (AAPI) lacked health insurance, but a closer look into subpopulations within AAPI revealed that only 13% of Japanese-Americans lacked insurance coverage while 34% of Korean-Americans were uninsured [4]. The exclusion of pregnant women from clinical research jeopardizes fetal safety and prevents their access to effective medical treatment [5]. Data from marginalized groups should not be excluded or lumped together, so that each population’s results are not misrepresented.
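To make the masking effect concrete, here is a small sketch in which the subgroup sample sizes and counts are purely hypothetical, chosen only so the rates echo the figures cited above; it shows how a single aggregate rate can hide very different subgroup rates.

```python
# Aggregation vs. disaggregation: one aggregate rate can mask divergent subgroups.
import pandas as pd

survey = pd.DataFrame({
    "subgroup":  ["Japanese-American", "Korean-American", "Other AAPI"],
    "n":         [4000, 3000, 13000],   # hypothetical sample sizes
    "uninsured": [520, 1020, 2660],     # hypothetical counts of uninsured respondents
})

aggregate_rate = survey["uninsured"].sum() / survey["n"].sum()
survey["uninsured_rate"] = survey["uninsured"] / survey["n"]

print(f"Aggregate 'AAPI' uninsured rate: {aggregate_rate:.0%}")   # ~21%
print(survey[["subgroup", "uninsured_rate"]])                     # 13% vs. 34% vs. ~20%
```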

What happens when we report unaggregated results instead?

Reporting unaggregated data, or data that is separated into small units, can help provide more accurate representation, which will help create better care, support, and policies for marginalized communities. On the other hand, it may pose potential threats to individual privacy when the sample size is too small. This is often used as the motivation to not report data of marginalized populations. While protecting anonymity is crucial, aggregation and exclusion should not be solutions to the problem. Instead, efforts should be made to increase sample sizes of marginalized groups so that they are proportionally represented in the data.

While there are statistical methods that will give accurate results without risking individual privacy, these methods are more reactive than preventative toward the actual problem at hand: the lack of good-quality data from marginalized populations. One way to ensure a representative sample size is to create categories that are inclusive and representative of marginalized groups. A good classification system of racial, gender, and other categories should make visible populations that are more nuanced than what traditional demographic categories offer. For example, using multiple-choice selection and capturing changes in identities over time in surveys can better characterize the fluidity and complexity of gender identity and sexual orientation for the LGBTQ+ community [3]. Having more comprehensive data on marginalized groups will help drive more inclusive policy decisions. Over time, the U.S. Census has been adding more robust racial categories to include more minority groups. Until 1860, American Indian was not recognized as a race category on the Census, and 2000 marked the first year the Census allowed respondents to select more than one race category. Fast-forwarding to 2020, people who marked their race as Black or White were asked to describe their origins in more detail [6]. The Census has yet to create a non-binary gender category, but for the first time, in 2021, the U.S. Census Bureau’s Household Pulse Survey included questions about sexual orientation and gender identity [7]. This process will take time, but it will be time well spent.

U.S. Census Racial Categories in 1790 vs. 2020: Racial categories displayed in the 1790 U.S. Census (left) and in the 2020 U.S. Census (right). This image only shows a fraction of all racial categories displayed in the 2020 U.S. Census. Image source: [Pew Research Center](https://www.pewresearch.org/interactives/what-census-calls-us/)
References:
[[1]](https://doi.org/10.1007/s10461-020-02920-3) Sevelius, J. M., Gutierrez-Mock, L., Zamudio-Haas, S., McCree, B., Ngo, A., Jackson, A., Clynes, C., Venegas, L., Salinas, A., Herrera, C., Stein, E., Operario, D., & Gamarel, K. (2020). Research with Marginalized Communities: Challenges to Continuity During the COVID-19 Pandemic. AIDS and Behavior, 24(7), 2009–2012. https://doi.org/10.1007/s10461-020-02920-3

[[2]](https://doi.org/10.1371/journal.pmed.0030019) Wendler, D., Kington, R., Madans, J., Wye, G. V., Christ-Schmidt, H., Pratt, L. A., Brawley, O. W., Gross, C. P., & Emanuel, E. (2006). Are Racial and Ethnic Minorities Less Willing to Participate in Health Research? PLoS Medicine, 3(2), e19. https://doi.org/10.1371/journal.pmed.0030019

[[3]](https://doi.org/10.1177/2053951720933286) Ruberg, B., & Ruelos, S. (2020). Data for queer lives: How LGBTQ gender and sexuality identities challenge norms of demographics. Big Data & Society, 7(1), 2053951720933286. https://doi.org/10.1177/2053951720933286

[[4]](http://healthpolicy.ucla.edu/publications/Documents/PDF/Racial%20and%20Ethnic%20Disparities%20in%20Access%20to%20Health%20Insurance%20and%20Health%20Care.pdf) Brown, E. R., Ojeda, V. D., Wyn, R., & Levan, R. (2000). Racial and Ethnic Disparities in Access to Health Insurance and Health Care. UCLA Center for Health Policy Research and The Henry J. Kaiser Family Foundation, 105.

[[5]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2747530/?report=classic) Lyerly, A. D., Little, M. O., & Faden, R. (2008). The second wave: Toward responsible inclusion of pregnant women in research. International Journal of Feminist Approaches to Bioethics, 1(2), 5–22. https://doi.org/10.1353/ijf.0.0047

[[6]](https://www.pewresearch.org/fact-tank/2020/02/25/the-changing-categories-the-u-s-has-used-to-measure-race/) Brown, A. (2020, February 25). The changing categories the U.S. census has used to measure race. Pew Research Center. https://www.pewresearch.org/fact-tank/2020/02/25/the-changing-categories-the-u-s-has-used-to-measure-race/

[[7]](https://news.stlpublicradio.org/politics-issues/2020-03-17/the-2020-census-is-underway-but-nonbinary-and-gender-nonconforming-respondents-feel-counted-out) Schmid, E. (2020, March 17). The 2020 Census Is Underway, But Nonbinary And Gender-Nonconforming Respondents Feel Counted Out. STLPR. https://news.stlpublicradio.org/politics-issues/2020-03-17/the-2020-census-is-underway-but-nonbinary-and-gender-nonconforming-respondents-feel-counted-out

Cycle tracking apps: what they know and who they share it with
By Kseniya Usovich | June 16, 2022

At the dawn of a potential Roe v. Wade overturn, we should be especially aware of who owns the data about our reproductive health. Cycle and ovulation apps, like Flo, Spot, and Cycles, have been gaining popularity in recent years. They range from simple menstrual cycle calendars to full-blown ML-empowered pregnancy “planners”; the ML support usually comes with a premium subscription. The kinds of data they collect range from name, age, and email to body temperature, pregnancy history, and even your partner’s contact info. Most health and body-related data is entered by a user manually or through a consented linkage to other apps and devices such as Apple HealthKit and Google Fit. Although there is not much research on the quality of their predictions, these apps seem to be helpful overall, even if only to make people more aware of their ovulation cycles.

The common claim in these apps’ privacy policies is that the information you share with them will not be shared externally. This, however, comes with caveats: they do share de-identified personal information with third parties and are also required to share it with law enforcement if they receive a legal order to do so. Some specifically state that they would only share your personal information (i.e., name, age group, etc.) and not your health information if required by law. However, take this with a grain of salt: one of the more popular period tracking companies, Flo, shared its users’ health data for marketing purposes from 2016 to 2019 without informing its customers. And that was just for marketing; it is unclear whether these companies can refuse to share a particular user’s health information, such as period cycles, pregnancies, and general analytics, under a court order.

This becomes an even bigger concern in light of the current political situation in the U.S. I am, of course, talking about the potential Roe v. Wade overturn. If we lose federal protection of abortion rights, every state will be able to impose its own rules concerning reproductive health. This implies that some states will most likely prohibit abortion from very early in the pregnancy, whereas currently the government can fully prohibit it only in the last trimester. People who live in states where abortion rights are limited or nonexistent would be left with three options: giving birth, obtaining an abortion secretly (i.e., illegally under their state’s law), or traveling to another state. There is a whole Pandora’s box of classism, racism, and other issues concerning this narrow set of options that I won’t be able to discuss, since this post has a word limit. I will only mention that the set becomes even more limited if you simply have fewer resources or are dealing with health concerns that will not permit you to act on one or more of these “opportunities”.

However, let’s circle back to that app you might be keeping as your period calendar or a pocket-size analyst of all things ovulation. We, as users, are in a zone of limbo: without sharing enough information, we can’t get good predictions; but by oversharing, we risk entrusting our private information to a service that might not be as protective of it as it implied. Essentially, the ball is still in your court, and you can always request the removal of your data. But if you live in a region that treats abortion as a crime, beware of who may have a little too much data about your reproductive health journey.

References

[1] https://cycles.app/privacy-policy
[2] https://flo.health/privacy-portal
[3] https://www.cedars-sinai.org/blog/fertility-and-ovulation-apps.html
[4] https://www.nytimes.com/2021/01/28/us/period-apps-health-technology-women-privacy.html

Images:
[1] https://www.apkmonk.com/app/com.glow.android/
[2] https://www.theverge.com/2021/1/13/22229303/flo-period-tracking-app-privacy-health-data-facebook-google

Experiments That Take Generations to Overcome
By Anonymous | June 16, 2022

“Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select – doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations and the race of his ancestors.” (Watson, 1924)

The field of psychology has advanced enormously in the last century, not just in terms of scientific knowledge, but also ethics and human rights. A testament to that is one of its most ethically dubious experiments, the Little Albert experiment, which we’ll explore in this blog as it relates to the Beneficence principle of the Belmont Report, and how it continues to impact us today in ways we may not realize. (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979)

As some background, in the 1920s, John B. Watson, a Johns Hopkins professor, was interested in reproducing Ivan Pavlov’s findings on classical conditioning in babies. Classical conditioning is when “two stimuli are linked together to produce a new learned response in a person or animal” (McLeod, 2018). Ivan Pavlov was famous for his experiment of getting his dogs to salivate at the sound of a bell by giving them food every time he sounded the bell, so that at first they salivated at the sight of food, but eventually learned to salivate at just the sound of the bell. Similarly, the Little Albert experiment was performed on a 9-month-old, Albert B. At the start of the experiment, Little Albert was presented with a rat, a dog, a rabbit, and a Santa Claus mask, and he was not afraid of any of them; but then, every time he touched any of them, the scientists struck a metal bar behind him, and eventually he was conditioned to be terrified of those animals and the Santa Claus mask (Crow, 2015; McLeod, 2018).

The principle of Beneficence in the Belmont Report requires that we maximize benefits and minimize harms to both individuals and society (National Commission, 1979). The most glaring weakness of the experiment in this principle is that Watson did not even bother to reverse the results of his experimentation on the baby.

Seeing that the experiment did work in making Little Albert terrified of rats and anything furry, it’s safe to believe that successfully reversing this result was not only possible but an easy thing to do. Even an unsuccessful attempt at reversal would make those of us analyzing it in the present day have a slightly different opinion of the experiment. While it’s possible for the conditioned response to wear off, a phenomenon known as extinction, it can still return (albeit in weaker form) after a period of time, a phenomenon known as spontaneous recovery (National Commission, 1979; McLeod, 2018).

While the individual was harmed, what about society as a whole? Watson did the experiment to show how classical conditioning can not only be applied to humans, but explain everything about us, going so far as to deny the existence of mind and consciousness. Whether the latter points are true or not, the experiment contributed to the field of human psychology in important ways, from understanding addictions to classroom learning and behavior therapy (McLeod, 2018). Today, our understanding is not complete by any means, but we take for granted much of the insight gained. Unfortunately, it goes the other way too.
Watson’s Little Albert experiment is undoubtedly connected to his child-rearing philosophy. After all, he did believe he could raise infants to become anything, from doctors to thieves. He essentially believed children could be trained like animals, and he “admonished parents not to hug, coddle or kiss their infants and young children in order to train them to develop good habits early on” (Parker, Nicholson, 2015). While modern culture has fought against a lot of our traditional views on parenting, and even classifies some of them as “child abuse,” Watson’s views leave behind a legacy in our dominant narratives. Many still believe in “tough love” methods, such as talking down to children or talking to them harshly, corporal punishment, shaming, humiliation, and various others, especially if they grew up with those methods and believe they not only turned out fine but also became better people as a result. Others, such as John B. Watson’s very own granddaughter Mariette Hartley, and all the families she wrote about in her book Breaking the Silence, have experienced suicide and depression as the legacy left behind by Watson’s teachings. Even those who turned out fine may “still suffer in ways we don’t realize are connected to our early childhood years” (Parker, Nicholson, 2015).
While both hard scientific knowledge and human ethics have advanced unprecedentedly in the past century, that does not mean we’re completely emancipated from the repercussions of ethically dubious experiments and experimentation methods of the past. Harm done to either individuals or groups in an experiment can not only last a lifetime for those subjects but carry on for generations and shape our entire culture around it. To truly advance both knowledge and ethics, it’s imperative that we are aware of this dark history and remember it, especially given how the Little Albert experiment has influenced and continues to influence our parenting methods, because “now that we know better, we must try to do better for our children” (Parker, Nicholson, 2015).

References:
Crow, J. (2015, January 29). The Little Albert Experiment: The Perverse 1920 Study That Made a Baby Afraid of Santa Claus & Bunnies. Open Culture. https://www.openculture.com/2015/01/the-little-albert-experiment.html
McLeod, S. A. (2018, August 21). Classical conditioning. Simply Psychology. www.simplypsychology.org/classical-conditioning.html
Parker, L. and Nicholson, B. (2015, November 20). This Children’s Day: It’s time to break Watson’s legacy in childrearing norms. APtly Said. https://attachmentparenting.org/blog/2015/11/20/this-childrens-day-its-time-to-break-watsons-legacy-in-child-rearing-norms/
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979, April 18). The Belmont Report. Retrieved May 17, 2022, from https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf

Digital health in the metaverse: overview of the landscape and legal considerations
By Anonymous | June 16, 2022

Key takeaway: A brave new world for healthcare innovation, the metaverse presents sci-fi-like solutions, from immersive exposure therapy to whole-body digital twins. But, like all new technology, it brings its own challenges of tackling health inequities and data privacy.

[13] Example of virtual reality headsets.

As the next generation of the internet, the metaverse promises immersive, three-dimensional experiences through digital marketplaces and social interactions [1]. In the context of digital health, the metaverse changes the relationship between people and technology, with users experiencing care within or alongside virtual content rather than simply interacting with digital products and services. Right now, digital health consists predominantly of products and solutions that allow patients and providers to view, share, exchange, or create digital content: entering patient data into electronic health records, sharing video during a telemedicine consultation, or sending payment through online portals [2]. In the digital health metaverse, product offerings shift to patients attending virtual reality group therapy sessions, surgeons planning out their procedures on holograms, and people with cognitive disabilities practicing social cues through simulated social interactions.

[14] Human anatomy and physiology.

While there are a wide range of possibilities for healthcare within the metaverse, the two most common categories of metaverse applications in digital health are immersive environments and digital twins [3].

Immersive environments

These are virtual or hybrid worlds in which providers and consumers engage with each other for educational, assistive, or therapeutic purposes. The biggest category of digital health in the metaverse, immersive environments are accessed through virtual reality (VR) or a hybrid of real-world and virtual components that come together via augmented reality (AR) technology or holograms [3]. Educational applications range from medical libraries to surgical training platforms and immersive emergency situations for clinicians to practice without worrying about real-world consequences [4][5]. In the operating room, some VR headsets help surgeons control minimally-invasive surgical robots while others help them place implants [6][7]. Therapeutic metaverse environments allow for specialized settings for different kinds of interventions, such as allowing patients to try exposure therapy virtually to address phobias [8].

Digital twins

Representations of real-world entities that exist in virtual worlds, digital twins can be manipulated to extract insights for healthcare decision making. In healthcare, digital twins can be organs, individuals, patients, or populations. And although they are a form of synthetic data, because they are modeled on real entities, these digital twins are often connected in an ongoing manner to their real-world counterparts [3]. Starting with organs and muscle groups, large corporations are pioneering cardiac digital twins: simulations that reflect the molecular structure and biological function of individual patients’ hearts [9]. This allows doctors to simulate how each patient’s heart would respond to different medications or surgeries. Bones and muscle groups are also at the forefront, allowing scientists to simulate how medical devices and implants may interact with or degrade within the patient’s body over time [10]. Beyond organs, whole-body digital twins of individuals are being created, where patient vitals, scans, medical history, and genetic tests are combined to create simulations of patient anatomy and physiology [11].

Healthcare hurdles & legal considerations

Healthcare applications in the metaverse can compound health inequities related to device ownership, digital literacy, and internet accessibility [3]. And the creation of virtual entities like digital twins and avatars raises new questions about patient health data and privacy [12]. In terms of legal considerations, health care providers and professionals must consider custody of digital assets, select a platform, register IP or file trademarks, secure blockchain domains to facilitate metaverse payments, and reserve metaverse rights [2]. The decentralized nature of the metaverse poses challenges to businesses that are used to having predictable law enforcement mechanisms to protect their legal interests. In addition, specifically for healthcare, it remains uncertain how traditional state-based licensure requirements apply to metaverse providers and whether blockchain-based health data sharing complies with state and federal data privacy and security requirements.

The rise of the metaverse has presented healthcare with endless possibilities, allowing providers, patients and businesses to interact in a way that was considered science fiction just a few years ago. However, like all new technologies, the metaverse brings its own hurdles and legal challenges.

References

[1] https://insights.omnia-health.com/technology/dawn-metaverse-healthcare

[2] https://www.natlawreview.com/article/metaverse-legal-primer-health-care-industry

[3] https://rockhealth.com/insights/digital-health-enters-the-metaverse/?mc_cid=46fab87845&mc_eid=2d0859afdf

[4] https://www.giblib.com

[5] https://www.healthscholars.com

[6] https://www.vicarioussurgical.com

[7] https://augmedics.com

[8] https://ovrhealth.com

[9] https://www.siemens-healthineers.com/perspectives/mso-solutions-for-individual-patients.html

[10] https://www.virtonomy.io

[11] https://q.bio

[12] https://xrsi.org/publication/the-xrsi-privacy-framework

[13] https://unsplash.com/s/photos/health-tech

[14] https://www.pexels.com/search/health%20tech/

The Tip of the ICE-berg: How Immigration and Customs Enforcement uses your data for deportation
By Anonymous | June 16, 2022

How Immigration and Customs Enforcement uses your data for deportation

Photo by SIMON LEE on Unsplash

The United States is a country made of immigrants. It is a great social experiment in anti-tribalism, meaning that there is no single homogenous group. Everyone came from somewhere with different ideals, ancestry, religion, and life goals that have all blended to make this country the multifaceted melting pot it is today. But for a country made of immigrants, the U.S. certainly goes the extra mile to find and punish those who would immigrate now by any means available to them.

Before Immigration and Customs Enforcement (ICE), the U.S. created the Immigration and Naturalization Service (INS) in 1933 which handled the country’s immigration process, border patrol, and enforcement.[1]  During the Great Depression, immigration rates dropped, and the INS shifted its focus to law enforcement.[2]  From the 1950s through the 1990s INS answered the public outcry of illegal aliens working in the U.S. by cracking down on immigration and enforcing deportation. However, the INS did not have a meaningful way of tracking those already in the United States and lacked a border exit system for those who had entered the country on visas. INS’s shortcomings were further highlighted in the aftermath of 9/11, as it was uncovered that at least two of the hijackers were in the U.S. on expired visas.[3] In response, the Homeland Security Act dissolved the INS and created ICE, Customs and Border Patrol (CBP), and U.S. Citizenship and Immigration Services (USCIS). ICE wasted no time acquiring data to fulfill its mission “[T]o protect America from the cross-border crime and illegal immigration that threaten national security and public safety.”[4]

According to a recent report by Wang et al. (2022), ICE has contracts to increase its surveillance abilities by collecting and using data in the following categories:

  • Biometric Data – Fingerprints, face recognition programs, and DNA
  • Data Analysis – Combining and managing different data sources
  • Data Brokers – Databases from private companies that sell third-party data, such as Thomson Reuters and LexisNexis, covering utility data, credit data, and DMV records
  • Geolocation Data – GPS tracking, license plate readers, and CCTV
  • Government Databases – Access to databases of government agencies not under the Department of Homeland Security
  • Telecom Interception – Wiretapping, Wi-Fi interception, and translation services[5]

This practice violates the five basic principles (transparency, individual participation, limitation of purpose, data accuracy, and data integrity) set forth by the HEW Report to safeguard personal information contained in data systems, principles which form the framework for the Fair Information Practice Principles (FIPPs).[6] The Department of Homeland Security (DHS), which oversees ICE, uses a formulation of FIPPs in its treatment of Personally Identifiable Information (PII).[7] However, ICE has been able to blur these ethical lines in the pursuit of finding and deporting illegal immigrants.

ICE can access information on individuals who are American citizens, legal residents, and legal visa holders, in addition to those it has classified as “criminal aliens,” all without consent from the individual. ICE has purchased, collected, and used your information to track and deport immigrants (some with no criminal record) warrantlessly and without legislative, judicial, or public oversight (Wang et al., 2022). Unfortunately, this may be by design, because purchasing data and combining it to identify individuals is not illegal and provides a way for government organizations to get around legal requirements such as warrants or reasonable cause.

Setting up basic utilities such as power, water, and internet, or being detained (but not convicted of a crime) and having biometric data taken (fingerprints, DNA), should not count as consent to have your data accessible to ICE.[8] Your trust in these essential services and in state and federal government agencies to safeguard your PII and use it for its originally intended purpose is being abused. ICE continues to spend $388 million per year of taxpayer dollars to purchase as much data as possible to build an unrestricted domestic surveillance system (Wang et al., 2022). While ICE’s stated focus is illegal immigration, what or whom is to stop it from targeting other groups: legal resident aliens, dual citizens, or foreign-born American citizens?

This process gives Solove’s Taxonomy of Privacy[9] whiplash as it is rife with opportunity for surveillance, identification, insecurity, blackmail, increased accessibility among other government agencies, invasion, and intrusion for what should be a mundane piece of data they collect or purchase about you – repeatedly.

Photo by ThisisEngineering RAEng on Unsplash

As a nation of immigrants, we owe it to the newest arrivals, no matter how they got here, the basic expectation of privacy, respect, and protection. Contextualizing the exploitation of this data with Nissenbaum’s[10] approach, there is an expectation of privacy: U.S. citizens believe they have a right to privacy, including from the government, so why should this be different based on your immigration status? When you apply for a basic service such as water and power, you should not be fearful that your home could be raided at any moment. Individuals will only generate more data as technology and digitization progress. U.S. laws and policies, with severe penalties for companies and the government, need to evolve to protect PII before you become a target of ICE or any other government agency looking to play the role of law enforcement.

Positionality and Reflexivity Statement:
I am an American biracial, able-bodied, cisgender, woman, wife, and mother. I belong to the upper-middle class, non-religious, unaffiliated voter block. I am the product of growing up among a family made up of newly arrived immigrants from all parts of the globe. Some came here through visa programs, some came sponsored, and some overstayed tourist visas, fell in love, and married Americans. Regardless of their path to legal status to live and work in the United States, nothing can describe the weight lifted and the relief once approval to remain in this country is granted, not only for the individual being reviewed but for the entire community of family and friends they have created around them. I tell you this from experience, but with limited information, so that I don’t give ICE another data point or a reason to come looking.

[1] USCIS. (n.d.). USCIS. USCIS. Retrieved 2022, from https://www.uscis.gov/about-us/our-history

[2] USCIS History Office and Library. (n.d.). Overview of INS History. Retrieved May 19, 2022, from https://www.uscis.gov/sites/default/files/document/fact-sheets/INSHistory.pdf

[3] 9/11 Commission Report: Staff Monographs. (2004). Monograph on 9/11 and Terrorist Travel. National Commission on Terrorist Attacks Upon the United States. Retrieved May 19, 2022, from https://www.9-11commission.gov/staff_statements/911_TerrTrav_Ch1.pdf

[4] ICE. (n.d.). ICE. U.S. Immigration and Customs Enforcement. Retrieved May 19, 2022, from https://www.ice.gov

[5] Wang, N., McDonald, A., Bateyko, D., & Tucker, E. (2022, May). American Dragnet, Data-Driven Deportation in the 21st Century. Center on Privacy & Technology at Georgetown Law. https://americandragnet.org

[6] U.S. Department of Health, Education and Welfare. (July 1973) Records, Computers and the Rights of Citizens, Retrieved May 19, 2022, from https://aspe.hhs.gov/reports/records-computers-rights-citizens

[7] Fair Information Practice Principles (FIPPs) in the Information Sharing Environment. https://pspdata.blob.core.windows.net/webinarsandpodcasts/The_Fair_Information_Practice_Principles_in_the_Information_Sharing_Environment.pdf

[8] Department of Homeland Security. (2020, July). Privacy Impact Assessment for CBP and ICE DNA Collection, DHS/ALL/PIA-080. Retrieved May 19, 2022, from https://www.dhs.gov/sites/default/files/publications/privacy-pia-dhs080-detaineedna-october2020.pdf

[9] Solove, Daniel J. (2006). A Taxonomy of Privacy. University of Pennsylvania Law Review, 154:3 (January 2006), p. 477. https://ssrn.com/abstract=667622

[10] Nissenbaum, Helen F. (2011). A Contextual Approach to Privacy Online. Daedalus 140:4 (Fall 2011), 32-48.

Big Data to Battle COVID
By Anonymous | June 16, 2022

How China used new technology to control the spread of the deadly disease, but can it be recreated in the west and at what expense to personal privacy?

When COVID first emerged on the world stage in late 2019, many governments were unprepared to respond to a global pandemic. Despite being the initial source of the outbreak, China has arguably been one of the best at containing and limiting the spread of the virus. That success is often credited in part to new technologies they deployed, but critics have questioned whether it comes at the expense of personal privacy and if it could be reproduced in other countries.

Containing COVID through Smart Phones & A New “Health Code” System

Within weeks of the virus first being detected, two tech giants in China, Alibaba and Tencent, quickly began developing competing but similar solutions to tackle the challenge of controlling the spread of the virus. On February 9th, WeChat and AliPay, two of the biggest smartphone apps used in China, launched “health code” systems that use multiple sources of data to determine your exposure risk and display a green, yellow, or red QR code to indicate your risk level (Liang, 2020). Green indicates the user is healthy and able to travel freely. Yellow or red indicates a risk of exposure and the need to quarantine. The solutions were quickly adopted by local governments, employers, and businesses across China and became mandatory in over 300 cities for entry into public places like restaurants, grocery stores, or transportation. Within its first month, the WeChat app alone had been used over 6 billion times (CSIS, 2020).

Figure 1: Alipay Health Code app displaying a green QR code to indicate the user is low risk and can move freely (CSIS, 2020)

Solutions like this have been credited with helping China contain the spread of COVID much more effectively than most other countries. For example, in June of 2020, a rare “super spreader” event occurred at a wet market in Beijing and hundreds of individuals were infected. In a densely populated city of 21 million people, this represented a significant risk to China’s containment efforts but through the use of this system, mass testing and containment, the number of new infections was back to zero within three weeks (Tian, 2020).

Figure 2: A person’s health code is scanned and validated at a checkpoint (CSIS, 2020)

How does the health code system work?

The reality is there is very little public information about the exact algorithm used to determine an individual’s exposure risk “color code” or how the data is managed. In past interviews, the Alipay parent company has declined to answer questions and has only stated that “government departments set the rules and control the data” (Mozur, et al, 2021). What is known is that the system relies on a considerable amount of personal data. Users self-certify whether they have any COVID symptoms, information about their movements comes from their “check ins” at public places as well as geolocation data from the phone carriers and finally their personal interactions are mined from their digital transactions with others (Liang, 2020). This is combined with data from local health authorities about infections to determine whether the user was in a high-risk location or interacted with someone with known exposure (Zhang, 2022).
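Since the actual algorithm is not public, the following is purely an illustrative sketch, not the real system, of how the data sources described above could be combined into a rule-based green/yellow/red code; every rule and threshold here is a hypothetical assumption.

```python
# Hypothetical rule-based risk scoring in the spirit of the described health codes.
from dataclasses import dataclass

@dataclass
class UserRecord:
    has_symptoms: bool                 # self-certified symptom report
    visited_high_risk_location: bool   # from check-ins / carrier geolocation
    contact_with_confirmed_case: bool  # mined from transactions / proximity data

def health_code(user: UserRecord) -> str:
    if user.has_symptoms or user.contact_with_confirmed_case:
        return "red"      # likely exposure: quarantine required
    if user.visited_high_risk_location:
        return "yellow"   # possible exposure: restricted movement
    return "green"        # free to travel

print(health_code(UserRecord(False, True, False)))  # -> "yellow"
```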

A Model for Other Countries?

Could this model be recreated in western countries? Not likely. The amount of personal data collected by private and government entities, the lack of transparency about how it is used, and the fact that use of the system is mandatory all conflict with the trend toward data privacy in most western nations. Multiple privacy standards, including the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), the Children’s Online Privacy Protection Act (COPPA), and the Health Insurance Portability and Accountability Act (HIPAA), would make implementation difficult to navigate, and the many concerns they raise about user consent and the risk of misuse from this mass surveillance would need to be addressed. Additionally, an estimated 12% of the U.S. population doesn’t own a smartphone, which risks harm to this population and would limit the system’s success (Kolmar, 2022).

Most important is the human element. The health code system is a tool, but its effectiveness depends on mass adoption by the population, and governments would need to deploy significant resources to enforce its use, as we see in China. Given the high priority many western countries place on individual freedoms, and the challenges experienced in adopting even simple masking policies, the health code system appears to be a good fit for China but difficult to reproduce outside its borders.

References

  1. Zhou, S. L., Jia, X., Skinner, S. P., Yang, W., & Claude, I. (2021). Lessons on mobile apps for COVID-19 from China. Journal of Safety Science and Resilience, 2(2). https://doi.org/10.1016/j.jnlssr.2021.04.002
  2. China’s Novel Health Tracker: Green on Public Health, Red on Data. (2020). CSIS: Center for Strategic & International Studies. https://www.csis.org/blogs/trustee-china-hand/chinas-novel-health-tracker-green-public-health-red-data-surveillance
  3. Liang, F. (2020). COVID-19 and Health Code: How Digital Platforms Tackle the Pandemic in China. Social Media + Society, 6(3), 205630512094765. https://doi.org/10.1177/2056305120947657
  4. O’Neill, P. H. (2020, October 9). A flood of coronavirus apps are tracking us. Now it’s time to keep track of them. MIT Technology Review. https://www.technologyreview.com/2020/05/07/1000961/launching-mittr-covid-tracing-tracker/
  5. Tian, T. (2020, July 16). Covid health code reveals China’s big data edge. Fidelity International. https://www.fidelityinternational.com/editorial/blog/covid-health-code-reveals-chinas-big-data-edge-71af0e-en5/
  6. Zhang, P. (2022, April 5). The colour-coded Covid app that’s become part of life in China – despite the red flags. South China Morning Post. https://www.scmp.com/news/china/science/article/3173123/colour-coded-covid-app-thats-become-part-life-china-despite-red
  7. Kolmar, C. (2022, June 7). U.S. Smartphone Industry Statistics [2022]: Facts, Growth, Trends, And Forecasts – Zippia. Zippia. https://www.zippia.com/advice/us-smartphone-industry-statistics/#:%7E:text=12%25%20of%20Americans%20own%20non,of%20Americans%20were%20smartphone%20owners.
  8. Mozur, P., Zhong, R., & Krolik, A. (2021, July 26). In Coronavirus Fight, China Gives Citizens a Color Code, With Red Flags. The New York Times. https://www.nytimes.com/2020/03/01/business/china-coronavirus-surveillance.html

 

STOP using ZOOM or risk being WATCHED by the Chinese Communist Party (CCP)
By Anonymous | June 16, 2022

Zoom: an American success story or a quasi-spy agency for the CCP? Since its surge in popularity thanks to COVID, Zoom has terminated American users’ accounts at the behest of the CCP, routed American users’ data through China, and enjoyed an investigation by the Department of Justice.

Background

Unless you have lived under a rock, you have heard about Zoom, right? A video chatting app that was started in 2011 by a Chinese native in the United States and is headquartered in San Jose, California. It exploded in popularity during the COVID era, where it went from obscurity to a household name. Sounds like the quintessential American dream. However, behind the façade of a Silicon Valley success story, Zoom has some very disturbing secrets. The app has already been banned by NASA, SpaceX, Taiwan, New York City’s Department of Education, and many more. You may ask why?

Zoom capitulates to Chinese censorship

So why is Zoom Controversial?

In 2020, US federal prosecutors launched an investigation into Zoom executives working with the Chinese Communist Party to “surveil users and suppress video calls.” The Justice Department indicated that the accounts of Americans who held a video call about the 1989 Tiananmen Square massacre were terminated. This does not sound appetizing, does it? Imagine being an American, joining a Zoom call from the US to advocate for democracy, only for your account to be terminated. What was Zoom’s response to these allegations, you may ask? The company said it “..will no longer allow requests from the Chinese government to affect users outside mainland China.”

In another case, in 2021, Zoom agreed to pay $86m to settle a US lawsuit over shortcomings in its security practices. The lawsuit was brought against Zoom due to “Zoombombing,” which happens when a hacker enters a Zoom meeting to cause trouble. At this point, you may assume that surely Zoom has since changed and updated its privacy policies in light of these shortcomings.

Should you still use Zoom in 2022? 

Red Flags in Zoom’s Privacy Policy

Like most tech companies, Zoom collects both active and passive data on its users and uses it for marketing purposes. Nothing out of the ordinary here in comparison to other tech companies. However, Zoom also collects meeting and messaging content. In other words, any and all content generated in a meeting is collected, such as audio, video, messages, and chats. Basically, everything you say is being stored passively. Shocking, right? Wrong! Here is the kicker. Under its “Legal Reasons” section, Zoom says that it will share personal information with government agencies if required. Keep in mind that the CCP is a government agency. At this point, you may argue that your data may not be stored in China and that, therefore, the CCP will not have access to it. Wrong! In 2020, Zoom admitted that “calls got mistakenly routed through China.” Conveniently, Zoom did not say how many users were affected. Even more conveniently, all companies in China are required by law to give the CCP access to user data. So what guarantee do we have that the CCP did not store users’ data while it was being routed through China? Oh, and to add the cherry on top, Zoom has of course “taken” precautions to keep this mistake from happening again.

Despite all these red flags, unfortunately, most academic institutions, including prestigious universities like Berkeley, and American companies still primarily use Zoom as their main means of communication. This needs to stop. I recommend that Zoom be forced to sell its American branch to another American company or sever its ties completely with its Chinese branch.

References

Bloomberg. (n.d.). Bloomberg.com. Retrieved June 16, 2022, from https://www.bloomberg.com/billionaires/profiles/eric-s-yuan/#:~:text=Eric%20Yuan%20was%20born%20on,University%20of%20Mining%20and%20Technology.

Video conferencing, web conferencing, webinars, screen sharing. Zoom. (2022, April 19). Retrieved June 16, 2022, from https://explore.zoom.us/en/about/

Vigliarolo, B., Staff, T. R., Wolber, A., Whitney, L., Pernet, C., Alexander, M., & Combs, V. (2020, April 9). Who has banned zoom? Google, NASA, and more. TechRepublic. Retrieved June 16, 2022, from https://www.techrepublic.com/article/who-has-banned-zoom-google-nasa-and-more/

Harwell, D., & Nakashima, E. (2020, December 19). Federal prosecutors accuse Zoom Executive of working with Chinese government to surveil users and suppress video calls. The Washington Post. Retrieved June 16, 2022, from https://www.washingtonpost.com/technology/2020/12/18/zoom-helped-china-surveillance/

BBC. (2021, August 1). Zoom settles US class action privacy lawsuit for $86M. BBC News. Retrieved June 16, 2022, from https://www.bbc.com/news/business-58050391

Privacy. Zoom. (2022, April 5). Retrieved June 16, 2022, from https://explore.zoom.us/en/privacy/

Wood, C. (2020, April 6). Zoom admits calls got ‘mistakenly’ routed through China. Business Insider. Retrieved June 16, 2022, from https://www.businessinsider.com/china-zoom-data-2020-4

Bradley A. Thayer, opinion contributor. (2021, January 7). For Chinese firms, theft of your data is now a legal requirement. The Hill. Retrieved June 16, 2022, from https://thehill.com/opinion/cybersecurity/532583-for-chinese-firms-theft-of-your-data-is-now-a-legal-requirement/

Image credit:

https://www.salon.com/2020/06/15/zoom-capitulates-to-chinese-censorship-shutting-down-activists-accounts/

Summers, J. (2022, February 3). Should you still use Zoom in 2022? (Hint: Security is not an issue anymore). All Things Secured. Retrieved June 16, 2022, from https://www.allthingssecured.com/tips/stop-using-zoom/