Archive for March 19th, 2018

How many of us woke up to the news of a 1.5% dip in the stock market today? This is primarily due to the fallout from Cambridge Analytica’s illicit use of profile data from Facebook. Of course, the violation, as far as Facebook is concerned, is that Cambridge Analytica retained data it claimed to have voluntarily removed from its servers years before. The current fallout for Facebook (down 7% today) is not over the potentially catastrophic end use of that data, if it is proven to have been used in electioneering: Cambridge Analytica is under investigation in the UK for swinging the Brexit vote, and in the US for helping elect Trump, whose campaign paid handsomely ($6M) for access to its user-profile-centered analyses.

Admittedly, with #deletefacebook trending up a storm on Twitter (of all places), there is a little schadenfreude aimed at greedy Facebook ad executives baked into that 400-point drop in the Dow, but at its heart is an international call for better regulation of the deeply personal data that is housed and sold by Facebook and other tech giants. In this instance, two of Facebook’s most problematic data policies are in the limelight: third-party sharing and housing of data, and the use of research as a means of data acquisition. The research use of Facebook data is definitely tarnished.

The market volatility, together with the fact that Facebook actually lost daily users in the US last quarter, a loss attributable in part to data privacy concerns among its user base, highlights the need for more secure third-party data use policies. Those policies are exactly the reason why, even if you delete your profile, your data can live on (indefinitely) on the servers of third-party vendors, with no known or feasible recourse for Facebook users to demand its deletion. Facebook’s privacy policy makes this clear, though it takes a difficult read to figure that out.

Facebook’s outsized market value is based in great part on its ability to aggregate users’ personal data and sell it freely as desired. The European Union’s upcoming May 25th deadline to implement the General Data Protection Regulation (GDPR) is likely to push the needle toward more control over data deletion and third-party usage in Europe, and it is exactly the specter of potentially farther-reaching regulation of data usage that dragged down the market today and will ultimately lower Facebook’s value if more regulation comes about. The big question is whether Facebook and other large data-acquiring companies can balance their voracious profit motive and inherent need to sell our data with helping to protect our privacy, and/or whether heavy-handed government tactics can achieve that second goal for them.

Seeing Through the Fog

March 19th, 2018

Welcome to the AFOG Blog! We will use this space to post what we hope are accessible and provocative think pieces and reactions to academic research and news stories. Posts about what? Allow us to use this initial blog post to answer that question and introduce ourselves.

Algorithms and computational tools/systems, particularly as applied to artificial intelligence and machine learning, are increasingly being used by firms and governments in domains of socially consequential classification and decision-making. But their construction, application, and consequences are raising new concerns over issues of fairness, bias, transparency, interpretability, and accountability. The development of approaches or solutions to address these challenges is still nascent. And these challenges require attention from more than just technologists and engineers, as they are playing out in domains of longstanding interest to social scientists and scholars of media, law, and policy, including social equality, civil rights, labor and automation, and the evolution of the news media.

In the fall of 2017, Professors Jenna Burrell and Deirdre Mulligan at the UC Berkeley School of Information founded the Algorithmic Fairness and Opacity Group (AFOG), a working group that brings together UC Berkeley faculty, postdocs, and graduate students to develop new ideas, research directions, and policy recommendations around these topics. We take an interdisciplinary approach to our research, with members based at a variety of schools and departments across campus. These include UC Berkeley’s School of Information, Boalt Hall School of Law, Haas School of Business, the Goldman School of Public Policy, the departments of Electrical Engineering and Computer Sciences (EECS) and Sociology, the Berkeley Institute for Data Science (BIDS), the Center for Science, Technology, Medicine & Society (CSTMS), and the Center for Technology, Society & Policy (CTSP).

We meet roughly biweekly at the School of Information for informal discussions, presentations, and workshops. We also host a speaker series that brings experts from academia and the technology industry to campus to give public talks and take part in interdisciplinary conversations. AFOG is supported by UC Berkeley’s School of Information and a grant from Google Trust and Safety.

Below is a sampling of the questions that we seek to address:

  • How do trends in data-collection and algorithmic classification relate to the restructuring of life chances, opportunities, and ultimately the social mobility of individuals and groups in society?
  • How do algorithmically informed mass media and social media shape the stability of our democracy?
  • How can we design user interfaces for machine-learning systems that will support user understanding, empowered decision-making, and human autonomy?
  • What tools and techniques are emerging that offer ways to mitigate transparency and/or fairness problems?
  • Which methods are best suited to particular domains of application?
  • How can we identify and transcend differences across disciplines in order to make progress on issues of algorithmic opacity and fairness?

Look for more from us on the AFOG Blog in the weeks and months to come!