The Customer Is Always Right: No Ethics in Algorithms Without Consumer Support

by Matt Swan | December 4, 2018

There is something missing in data science today: ethics. It seems there is a new scandal every day, with more personal data leaked to any number of bad actors in the greatest quantities possible. Big Data has quickly given way to Big Data Theft.

The Internet Society of France, for example, a public interest group advocating for online rights, is pushing Facebook to fix the problems that led to the recent string of violations. They’re suing for €100 million (~$113 million USD) and threatening EU-based group action if appropriate remedies are not made. Facebook is also being pursued by a public interest group in Ireland and recently paid a fine of £500,000 (~$649,000 USD) for its role in the Cambridge Analytica breach. Is this the new normal?

Before we answer that question, it might be more prudent to ask why this happened in the first place. That answer is simple.

Dollars dictate ethics.

Facebook’s primary use of our data is to offer highly targeted (read: effective) advertising. Ads are the price of admission and it seems we’ve all come to terms with that. Amid all the scandals and breaches, Facebook made their money – far more money than they paid in fines. And they did it without any trace of ethical introspection. Move fast and break things, so long as they’re not your things.

Dollars dictate ethics.

Someone should be more concerned about this. During the US congressional hearings in early September, there was talk of regulating the tech industry to bring these problems under control. This feels like an encouraging move in the right direction. It isn’t.

First, laws cannot enforce ethical behavior. Laws can put measures in place to reduce the likelihood of breaches, or punish those not sufficiently safeguarding personal data or failing to correct algorithms with a measurable bias, but they cannot require a company to have a Data Ethicist on the payroll. We’ve already noted that Facebook made more money than they paid in fines, so what motivation do they have to change their behavior?

Second, members of Congress are more likely to believe TensorFlow is a new setting on their Keurig than to know it’s an open source machine learning framework. This is why organizations such as 314 Action prioritize electing more STEM professionals to government: technology has progressed quickly, government is out of touch, and we need individuals with a thorough understanding of technological methods.

Meanwhile, higher education is making an effort to incorporate ethics into computer and data science programs, but there are still limitations. Some programs, such as UC Berkeley’s MIDS program, have implemented an ethics course. However, at the time of this writing, no program includes a course in ethics as a graduation requirement.

Dollars dictate ethics.

Consider the time constraints: only so many courses can be taken. If one program requires an ethics course, the programs that do not will have a recruiting advantage, because they can argue that the ethics course is a lost opportunity to squeeze in one more technology course. That argument will resonate with prospective students, since there are no Data Ethicist jobs waiting for them and they would prefer to load up on technology-oriented courses.

Also, taking an ethics course does not make one ethical. While each budding data scientist should be forced to consider the effects of his or her actions, that is certainly no guarantee of future ethical behavior.

If companies aren’t motivated to pursue ethics themselves and the government can’t force them to be ethical and schools can’t force us to be ethical, how can we possibly ensure the inclusion of ethics in data science?

I’ve provided the answer three times. If it were “ruby slippers”, we’d be home by now.

Dollars dictate ethics.

All the dollars start with consumers. And it turns out that when consumers collectively flex their economic muscles, companies bend and things break. Literally.

In late 2017, Fox News anchor Sean Hannity made some questionable comments regarding a candidate for an Alabama Senate seat. Consumers contacted Keurig, whose commercials aired during Hannity’s show, and complained. Keurig worked with Fox to ensure its ads would no longer be shown at those times, which also resulted in the untimely death of a number of Keurig machines.

The point is this: if we want to effect swift and enduring change within tech companies, the most effective way to do that is through consistent and persistent consumer influence. If we financially support companies that consider the ethical implications of their algorithms, or simply avoid those that don’t, we can create the necessary motivation for them to take it seriously.

But if we keep learning about the newest Facebook scandal from our Facebook feeds, we shouldn’t expect anything more than the same “ask for forgiveness, not permission” attitude we’ve been getting all along.

Sources:
https://www.siliconrepublic.com/companies/data-science-ethicist-future
https://www.siliconrepublic.com/enterprise/facebook-twitter-congress
https://www.siliconrepublic.com/careers/data-scientists-ethics
https://news.bloomberglaw.com/privacy-and-data-security/facebook-may-face-100m-euro-lawsuit-over-privacy-breach
https://www.nytimes.com/2017/11/13/business/media/keurig-hannity.html
http://www.pewinternet.org/2018/11/16/public-attitudes-toward-computer-algorithms/
