The Facebook Whistleblower and the Moral Dilemma
By Anonymous | October 8, 2021

In a span of three days, Facebook broke the internet literally and figuratively. On October 4, 2021, three of the most popular platforms globally – Facebook, Instagram, and WhatsApp – were offline for multiple hours. A 60 Minutes episode had aired the night before, featuring an interview with data scientist and former Facebook product manager Frances Haugen. She claimed that Facebook and its products harm children and stoke division with hate, violence, and misinformation. She then testified before Congress on October 5, 2021, placing most of the blame for these issues on Facebook’s algorithm and platform design.

Who is Frances Haugen and What Exactly Does She Claim?

Now famously known as the “Facebook Whistleblower,” Frances Haugen was hired in 2019 as a product manager on the Civic Integrity team, which was created to tackle misinformation and hate speech. When the team was dissolved a month after the 2020 U.S. election, she began to see the company’s harmful effects and questionable moral compass. Before leaving Facebook in May 2021, she copied thousands of internal research documents, which she used to support her claims in the 60 Minutes interview and her congressional testimony. Drawing on that evidence, Haugen claimed that the platform’s algorithms and engagement-based ranking system harm societies worldwide, and that leadership knew this but did not act on it. She also provided research showing that the algorithms have an outsized impact on children and teens. For example, a teenager could start looking for healthy recipes on Instagram and end up on pro-anorexia content, leaving them more depressed. Other research suggests Facebook’s algorithms have pushed European countries toward more extreme policymaking and fueled ethnic violence around the world, as when Myanmar’s military used the platform to run a genocide campaign.

How do Facebook’s Algorithms and Ranking System Work?

Facebook’s machine learning algorithms and engagement-based ranking system aim to personalize content using signals such as ad clicks and post likes and shares. The algorithm uses these data points to predict which posts and advertisements a user is likely to engage with next. But when platforms blend content personalization with algorithmic amplification, “they create uncontrollable, attention-sucking beasts,” perpetuating biases and affecting societies in ways barely understood by their creators. In Facebook’s case, Haugen alleges, leadership knew of these harmful effects but declined to make the platform safer because doing so would cut into revenue. The algorithm rewards posts that provoke the most extreme emotions (often anger, rage, or fear) because it is designed to keep users on the platform for as long as possible, no matter how it makes them feel or what it leads them to think. The longer users stay on the platform, the more likely they are to click on ads, which means more revenue for the company.
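To make the mechanics concrete, here is a minimal, hypothetical sketch of what engagement-based ranking can look like in code. This is not Facebook’s actual system; the post fields, weights, and function names below are assumptions made for illustration. The idea is simply that each candidate post gets a score from predicted engagement signals, and the feed is sorted by that score.

```python
# A minimal sketch of engagement-based ranking (illustrative only, not
# Facebook's code). Each post is scored by a weighted sum of predicted
# engagement probabilities, and the feed is sorted by that score.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    # Hypothetical model outputs: predicted probability that this user
    # will click, like, share, or comment if shown the post.
    p_click: float
    p_like: float
    p_share: float
    p_comment: float


# Hypothetical weights: active reactions (shares, comments) count more
# toward the score than passive ones (clicks).
WEIGHTS = {"p_click": 1.0, "p_like": 2.0, "p_share": 5.0, "p_comment": 4.0}


def engagement_score(post: Post) -> float:
    """Weighted sum of predicted engagement probabilities."""
    return sum(weight * getattr(post, signal) for signal, weight in WEIGHTS.items())


def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order candidate posts by descending predicted engagement."""
    return sorted(candidates, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = rank_feed([
        Post("calm-news", p_click=0.10, p_like=0.05, p_share=0.01, p_comment=0.01),
        Post("outrage-bait", p_click=0.30, p_like=0.10, p_share=0.20, p_comment=0.25),
    ])
    for post in feed:
        print(post.post_id, round(engagement_score(post), 2))
```

In this toy example the “outrage-bait” post ranks first simply because it is predicted to generate more reactions. Nothing in the objective cares whether those reactions come from joy or anger, which is exactly the dynamic Haugen describes: whatever keeps people reacting and scrolling rises to the top.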

What Protects Facebook from Legal Action?

Section 230 of the U.S. Communications Decency Act, passed in 1996, shields online platforms from liability for third-party content shared on them. Haugen’s proposal to Congress is to reform Section 230 so that it no longer covers algorithmic ranking, making platforms like Facebook responsible for their decisions and actions around personalized algorithmic amplification.

The Moral Dilemma: What Would You Do?

A moral dilemma is a “conflict situation in which the choice one makes causes moral harm, which cannot be easily repaired if at all.” There is a long history of companies choosing profit over safety (I tried to narrow it down to a few significant examples, but there are too many to list). As more companies rely on data science for decision-making, data scientists often get caught in the middle of a moral dilemma: following leadership’s directions while knowing the harm those decisions may cause. It is hard to predict what someone would do in Haugen’s position. We generally aim to do the right thing, but the question becomes far more complicated when speaking up means jeopardizing the income that supports you and your family.

We as data scientists may face dilemmas like the one Frances Haugen faced, where the work we do and the companies we work for might not have the best moral compass. Facebook has approximately 2.9 billion monthly active users – 60 percent of all internet-connected people on the planet – and Haugen spoke up about the unethical practices taking place on the platform. Not many people could have done what she did, and I applaud her for standing up to one of the largest companies and platforms in the world.

References:

https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/

https://time.com/6104157/facebook-testimony-teens-algorithm/

https://embassy.science/wiki/Theme:17d406f9-0b0f-4325-aa2d-2fe186d5ff34

https://www.nytimes.com/2021/10/06/opinion/facebook-whistleblower-section-230.html