Archive for December 11th, 2017

Based on your political views, your Facebook/Google/Twitter news feed probably looks quite a bit different from mine.  This is because of a process known as a “filter bubble,” in which news that comports with your world view is highlighted and news that conflicts with it is filtered out, painting a very lopsided picture in your news feed.  A filter bubble results from the confluence of two phenomena.  The first is known as an “echo chamber,” in which we seek out information that confirms what we already believe and disregard that which challenges those beliefs.  The second is social media recommender algorithms doing what recommender algorithms do: recommending content they think you’ll enjoy.  If you only read news of a certain political persuasion, eventually your favorite social media site will only recommend news of that persuasion.
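The feedback loop behind a filter bubble can be sketched in a few lines of Python. This is a hypothetical toy model, not any platform’s actual algorithm: the made-up `recommend` function simply weights each topic by how often the user has clicked it, and a simulated user who only clicks one topic ends up with a feed dominated by that topic.

```python
import random
from collections import Counter

random.seed(0)

TOPICS = ["left", "right", "sports", "science"]

def recommend(click_history, n=10):
    """Toy recommender: rank topics by the user's past clicks.

    Each topic's weight is its click count plus 1 (smoothing),
    so the more you click a topic, the more of it you are shown.
    """
    counts = Counter(click_history)
    weights = [counts[t] + 1 for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=n)

# A user who only ever clicks "left"-leaning articles.
clicks = []
for _ in range(50):
    feed = recommend(clicks)
    # Echo chamber: engage with agreeable items, ignore the rest.
    clicks.extend(article for article in feed if article == "left")

final_feed = recommend(clicks, n=100)
print(Counter(final_feed))  # one topic now dominates the feed
```

After fifty rounds of one-sided clicking, nearly every recommended item matches the user’s preference: the algorithm hasn’t hidden anything maliciously, it has just optimized for engagement.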

Unfortunately, this has resulted in a breeding ground for fake news.  Unscrupulous content providers don’t care whether or not you know the information they peddle is false, so long as you click on it and share it with your social network (many members of which probably share your political views and will keep the fake news article propagating through the network).  The only barrier between fake news mongers and your news feed is the filter bubble you’ve created.  That barrier, however, becomes an express lane for anyone who capitalizes on the keywords and phrases you’ve already expressed an interest in.

Social media sites have done little to combat this barrage of fake news.  Their position on the matter is that it’s up to the user to decide what is fake and what is real.  In fact, Twitter relieves itself of any obligation to discern fact from fiction in its Terms of Service, stating that you “use or rely upon” anything you read there at your own risk.  Placing the onus of fact-checking on the users has led to real consequences such as “Pizzagate,” an incident in which a man, acting in response to fake news he had read on Facebook, fired an assault rifle inside a pizzeria he believed was being used as a front by Hillary Clinton’s campaign chairman, John Podesta, to traffic child sex slaves.

Clearly, placing the burden of verifying news on users’ shoulders doesn’t work.  Many users suffer from information illiteracy: they aren’t equipped with the skills necessary to ascertain whether or not a news article has any grounding in reality.  They don’t know how to fact-check a claim, or even to question the expertise or motivation of someone going on the record as “an expert.”  And if a news article happens to align with their existing world view, many have little reason to question its authenticity.

Social media sites need to do more to combat fake news.  They’ve already been excoriated by Congressional committees over their part in the Russian meddling effort during the 2016 Presidential Election.  Facebook, Google, and Twitter have since pledged to find a solution to end fake news, and Twitter has suspended 45 accounts suspected of pushing pro-Russia propaganda into U.S. political discourse, but they are only addressing the issue now that they face scrutiny, and they are still dragging their feet.  Ultimately, though, the incentive structures in place do little to encourage social media giants to change their ways.  Social media sites make the majority of their money through advertisements and sponsored content, so when a content provider offers large sums to ensure millions of people see its message, social media sites won’t ask questions until the fines for sponsoring misleading content offset any potential profit.

Open postdoc position!

December 11th, 2017

Want to do research at the intersection of machine learning and economic development? Click here for details. We will begin reviewing applications on January 15.