Content Integrity and the Dubious Ethics of Censorship
By Aidan Feay | September 30th, 2018
In the wake of Alex Jones’ exile from every major social media platform, questions about censorship and content integrity are swirling around the web. Outlets like Jones’ Infowars propagate misinformation under the guise of genuine journalism while serving a distinctly more sinister agenda. Aided by the rise of native advertising, the line between sponsored or otherwise nefariously motivated content and traditional editorial media has blurred, and the general populace finds it increasingly difficult to distinguish between the two. Consumers are left in a space where truth is nebulous and the ethics of content production are constantly in question. Platforms, in turn, are forced to weigh the ethics of censorship and balance profitability against public service.
At the heart of this crisis is the concept of fake news, which we can define as misinformation that imitates the form but not the content of editorial media. Whether it’s used to generate ad revenue or sway entire populations of consumers, fake news has found marked success on social media. The former is arguably less toxic but no less deceitful. As John Oliver so aptly put it in his 2014 piece on native advertising, publications are “like a camouflage manufacturer saying ‘only an idiot could not tell the difference between that man [gesturing to a camouflage advertisement] and foliage’.” There’s a generally accepted suggestion of integrity in all content that is allowed publication or propagation on a platform.
Despite the assumption that the digital age has advanced our access to information and thereby made us smarter, it has simultaneously accelerated the spread of misinformation. The barriers to entry for mainstream consumption are lower, and the isolation of like-minded communities has created ideological echo chambers that feed confirmation bias, widening the political divide and reinforcing extremist beliefs. According to the Pew Research Center, the partisan gap has more than doubled over the past twenty years, making outlandish claims all the more palatable to general media consumers.
Platforms are stuck weighing attempts to bridge the gap and open up echo chambers against cries of censorship. On the pro-censorship side, arguments are made in favor of public safety. Take the Comet Ping Pong scandal, for example, wherein absurd claims based on the John Podesta emails found fuel on 4chan and gained traction within far-right blogs, which propagated the allegations of a pedophilia ring as fact. These articles spread across Twitter and Reddit and ultimately led to an assailant armed with an assault rifle firing shots inside the restaurant in an attempt to rescue the supposed victims. What started as a fringe theory found purchase and led to real-world violence.
The increasing pressure on platforms to prevent this sort of escalation has led media actors to partner with platforms in pursuit of a solution. One such effort is the Journalism Trust Initiative, a global push to create accountability standards for media organizations and develop a whitelisted group of outlets, which social media platforms could implement as a low-lift means of filtering harmful content.
On the other hand, strong arguments have been made against censorship. The Beatrix von Storch Twitter scandal shows evidence of legal pressure from the German government shaping the platform’s enforcement decisions. Similarly, Courtney Radsch of the Committee to Protect Journalists points out that authoritarian regimes have been the most eager to acknowledge, propagate, and validate the concept of fake news within their nations. Egypt, China, and Turkey have jailed more than half of imprisoned journalists worldwide, illustrating the dangers of censorship to a society that otherwise enjoys a free press.
How can social media platforms ethically engage with the concept of censorship? While censorship can prevent violence among the population, it can also reinforce governmental bias and suppress free speech. For a long time, platforms like Facebook tried to hide behind their terms of service in order to avoid the debate entirely. During the Infowars debacle, the Head of News Feed at Facebook said that they don’t “take down false news” and that “being false […] doesn’t violate the community standards.” Shortly after, under public pressure, they contorted the language of their Community Standards and cited their anti-violence clause in the Infowars ban.
It seems, then, that platforms are beholden only to popular opinion and the actions of their peers (Facebook banned Infowars only after YouTube, Stitcher, and Spotify did). Corporate profitability will favor censorship as an extension of consumer preference so long as those with purchasing power remain ethically conscious and exert that power by choosing which platforms to use and passively fund via advertising.