Archive for September, 2018

Why Do Defaults Affect Behavior? Experimental Evidence from Afghanistan (by Joshua Blumenstock, Michael Callen, and Tarek Ghani) was published in the American Economic Review.

CTSP Alumni Updates

September 27th, 2018

We’re thrilled to highlight some recent updates from our fellows:

Gracen Brilmyer, now a PhD student at UCLA, has published a single-authored article in Archival Science, one of the leading journals in archival studies: “Archival Assemblages: Applying Disability Studies’ Political/Relational Model to Archival Description.” Over the past two years, they have presented their work on archives, disability, and justice at a number of events, including the Archival Education and Research Initiative (AERI), the Allied Media Conference, the International Communication Association (ICA) Preconference, and Disability as Spectacle; their research will also be presented at the upcoming Community Informatics Research Network (CIRN) conference.

CTSP Funded Project 2016: Vision Archive


Originating in the 2017 project “Assessing Race and Income Disparities in Crowdsourced Safety Data Collection” by Fellows Kate Beck, Aditya Medury, and Jesus Barajas, the Safe Transportation Research and Education Center will launch a new project, Street Story, in October 2018. Street Story is an online platform that allows community groups and agencies to collect community input about transportation collisions, near-misses, general hazards, and safe locations to travel. The platform will be available throughout California and is funded through the California Office of Traffic Safety.

CTSP Funded Project 2017: Assessing Race and Income Disparities in Crowdsourced Safety Data Collection


Fellow Roel Dobbe has begun a postdoctoral scholar position at the new AI Now Institute. Inspired by his 2018 CTSP project, he has co-authored a position paper with Sarah Dean, Tom Gilbert and Nitin Kohli titled A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics.

CTSP Funded Project 2018: Unpacking the Black Box of Machine Learning Processes


We are also looking forward to a CTSP-Fellow-filled Computer Supported Cooperative Work (CSCW) conference this November! CTSP-affiliated papers include:

We also look forward to seeing CTSP affiliates presenting other work, including 2018 Fellows Richmond Wong, Noura Howell, Sarah Fox, and more!

 

The Doctor Will Model You Now:
The Rise and Risks of Artificial Intelligence in Healthcare
by Dani Salah | September 23, 2018

As artificial intelligence promises to change many facets of our everyday lives, perhaps no industry stands to change more than healthcare. The increase in data collected on individuals has already proven paramount in improving health outcomes, and AI is a natural fit for this infrastructure. Computing power orders of magnitude greater than could have been imagined just decades ago, along with increasingly complex models that can learn as well as, if not better than, humans, has already changed healthcare capabilities worldwide.

While the applications for AI today and in the near future hold tremendous potential, there are ethical concerns that must be considered each step of the way. Doctors, whose oaths swear them to uphold the ethical standards of medicine, may soon be pledging to abide by ethical standards of data.

Promising Applications


AI Along Patient Experience

AI has begun demonstrating its potential to touch individuals at every point of their patient experiences. From diagnosis to surgical visit to recovery, AI and robotics can improve the efficiency and accuracy of a patient’s entire health experience. The British healthcare startup Babylon Health demonstrated this year that its product could assign the correct diagnosis 75% of the time, 5% more accurate than the average rate for human physicians. The company has since raised $60 million in funding and plans to expand its algorithmic service to chatbots next (Locker, 2018).

Radiology and pathology are in line for major operational shifts as AI brings new capabilities to medical imaging (Zaharchuk et al., 2018). Although testing is still nascent, results have been promising. Prediction accuracy rates often surpass human abilities here as well; one deep learning network accurately identified breast cancer cells in 100% of the images it scanned (Bresnick, 2018).


AI in medical imaging

But some areas of AI’s potential have less to do with replacing doctors and more to do with making their jobs easier. Medical centers today are deeply concerned with burnout, which is driving professionals across the patient experience to reduce hours, retire early, or leave medicine altogether. A leading cause of this burnout is administrative work that provides little of the job satisfaction many get from working in medicine. By some accounts, physicians today spend more than two thirds of their time at work handling paperwork. Increasingly sophisticated data infrastructure in the healthcare industry has done little to reduce the time required to document a patient’s health records; it has only changed the form in which that data is documented. But automating more of these required tasks could give physicians more time to spend with their patients, increase their satisfaction in their work, and even boost the accuracy of data collection and documentation (Bresnick, 2018).

Significant Risks

The many improvements that AI experts promise for healthcare operations certainly do not come without costs. Taking advantage of these advanced analytics requires feeding the models data – lots and lots of data. With the collection and storage of vast amounts of patient data comes the potential for data breaches, misuse of data, incorrect interpretations, and automated biases. Perhaps more than any other industry, healthcare requires that patient data be kept private and secure. The emerging frameworks for evaluating and protecting population data must be made even more stringent when the data represents extremely telling details of the health and wellness of individual people.

Works Cited

Bresnick, J. (2018). Arguing the Pros and Cons of Artificial Intelligence in Healthcare. [online] Health IT Analytics. Available at: https://healthitanalytics.com/news/arguing-the-pros-and-cons-of-artificial-intelligence-in-healthcare [Accessed 24 Sep. 2018].

Bresnick, J. (2018). Deep Learning Network 100% Accurate at Identifying Breast Cancer. [online] Health IT Analytics. Available at: https://healthitanalytics.com/news/deep-learning-network-100-accurate-at-identifying-breast-cancer [Accessed 24 Sep. 2018].

G. Zaharchuk, E. Gong, M. Wintermark, D. Rubin, C.P. Langlotz (2018). Deep Learning in Neuroradiology. American Journal of Neuroradiology. [online] Available at: http://www.ajnr.org/content/early/2018/02/01/ajnr.A5543 [Accessed 24 Sep. 2018].

Locker, M. (2018). Fast Company. [online] Available at: https://www.fastcompany.com/40590682/this-digital-health-startup-hopes-your-next-doctor-will-be-an-ai [Accessed 24 Sep. 2018].

Healthcare’s Painful: Is HIPAA to blame?
By Keith LoMurray on 9/23/2018

Working in healthcare technology, I regularly hear painful stories about people’s experiences with healthcare. People will often talk about the great care they received from an individual doctor or nurse, but then mention the challenges of navigating the impersonal healthcare bureaucracy. Coordinating care between clinics, waiting for records, signing the same forms multiple times, scheduling appointments, and other tasks that healthcare organizations perform as a normal course of business often seem unnecessarily burdensome to a patient already dealing with an illness.

Over time you pick up nuggets of information about healthcare, such as the fact that most medical records are still transferred via fax machine, or that only one in three hospitals can send and receive medical records for care that happened outside their organization. HIPAA, the Health Insurance Portability and Accountability Act, plays a central role in healthcare, dutifully guarding the patient’s right to privacy. HIPAA sets standards for how and when healthcare information can be shared and imposes severe penalties for violations. Transferring medical records by individual faxes seems antiquated, but it raises the question: how much is HIPAA responsible for this painful process?


Image credit: Byrd Pinkerton/Vox

Healthcare technology requires additional effort compared to other industries, which results in slower and more expensive processes. A common challenge involves using a web analytics tool to track the usage of a website. The two largest web analytics vendors are Google Analytics and Adobe Analytics, both of which advertise the simplicity of adding analytics tracking to a website. Both companies use the IP address of the website visitor to compute metrics, a method that is not compliant with HIPAA, and neither vendor supports a configuration that doesn’t read the IP address. As a result, healthcare companies using an analytics solution must either find a compliant tool, which presents its own challenges, or implement workarounds to Google or Adobe Analytics for HIPAA compliance. Performing this work may be in the best interest of patient privacy, but it requires healthcare companies to expend additional resources, time, and energy.
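To give a flavor of what such a workaround looks like, here is a minimal sketch of a first-party proxy step that truncates a visitor’s IP address before any event is relayed to an outside analytics vendor. This is purely illustrative: the function names and event payload are hypothetical, not any vendor’s actual API, and a real deployment would involve far more than IP handling.

```python
import ipaddress

def anonymize_ip(ip: str) -> str:
    """Truncate an IP address so it no longer identifies one visitor.

    IPv4: zero out the last octet (keep a /24 prefix).
    IPv6: keep only the /48 prefix.
    """
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network.network_address)

def sanitize_event(event: dict) -> dict:
    """Return a copy of a (hypothetical) analytics event payload with
    the visitor's IP anonymized, so the full address never leaves the
    healthcare company's own servers."""
    sanitized = dict(event)
    if "ip" in sanitized:
        sanitized["ip"] = anonymize_ip(sanitized["ip"])
    return sanitized

# Example: the vendor would see 203.0.113.0, not the visitor's real address.
print(sanitize_event({"page": "/appointments", "ip": "203.0.113.57"}))
```

The design choice here is to do the stripping on infrastructure the healthcare company controls, before the data reaches any third party, which is the part a vendor-side setting cannot guarantee.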

Another challenge for a healthcare organization is the need for a “business associate agreement” (BAA) with every vendor that handles medical information. BAAs are contracts that specify the safeguards around medical information as well as the liability of each partner involved. In many cases a vendor can’t be used because it won’t sign a BAA. BAAs also require determining who is liable for violations and responsible for penalties. This is a good principle, but it is slow and requires companies to accept the liabilities. Often organizations will decide to avoid explicit risks and instead opt to remain on legacy technology, which still carries hidden risks, just without the explicit assertions of liability within a BAA.

As much as requirements within HIPAA slow healthcare companies and make processes more expensive, healthcare companies also make choices that contribute to the problems in healthcare. Interoperability, the ability to share healthcare records across healthcare organizations, has been a goal of the US government since at least the 2009 HITECH Act, which included digital health records and interoperability as core standards.

Many healthcare organizations have adopted digital health records, but progress on interoperability has been more limited. Sharing medical data securely is already complicated, but interoperability is also deprioritized for other reasons. There is no incentive to share records when a patient is switching healthcare providers: for hospital systems, reducing the burden of medical record sharing could make it easier to lose a customer to a competitor. Yet HIPAA allows sharing of records across organizations for patient care, so the lack of interoperability can’t be blamed entirely on HIPAA.

There are many challenges with HIPAA, and in many situations it makes healthcare companies move slower and become more risk averse than other technology companies. But it also makes healthcare technology companies think explicitly about the risks they take and prioritize strategies to protect a person’s medical data. One challenge is that other industries haven’t historically prioritized protecting and securing a person’s data to the degree HIPAA requires. When it comes to protecting personal data, healthcare companies are at the forefront, building the technology to support these standards. Compare this to cloud computing: when healthcare companies adopt it, they are leveraging a second-wave technology refined elsewhere. Perhaps if more companies prioritized protecting user data, they could help healthcare companies fix some of the unnecessary burdens that cause patients additional heartache.

Been There, Done That: What Data Science Can Learn from Psychology

by Kim Darnell on September 20, 2018

In the wake of recent revelations regarding misuses and abuses of personal data by a variety of well-known and successful companies, as well as the growing evidence that big data are being used to actively perpetuate and increase socioeconomic inequality, it might seem like data science as a discipline has wandered so far down a dark ethical path that there is no clear map to recovery.


Image credit: Getty Images

As a professor of psychology and a data scientist, however, I see something very different: A young field that is still trying to figure out how to maximize its potential in a fast-paced, dynamic world while still following a stable, practical moral compass. Psychology was there once, too, performing highly controversial studies, such as the Milgram experiment, which showed everyday Americans that, like their counterparts in Nazi Germany, they would engage in the potentially life-threatening torture of strangers if instructed to do so by an authority figure. Or the Stanford prison experiment, which revealed that even the most privileged among us can become predators or prey at the flip of a coin when placed in a prison environment.


Image credit: Yale University Manuscript and Archives

Today, data scientists and those who employ them are struggling publicly, if not painfully, to find the right balance between getting the data they want while respecting the rights of those they get the data from. Fortunately, psychology can offer a detailed and time-tested framework for making that struggle less difficult.

In the United States, any licensed psychologist or employee of a training program approved by the American Psychological Association (APA) is bound by the Ethical Principles of Psychologists and Code of Conduct, also known as the APA Code of Ethics. This code is centered on five principles that are intended to “guide and inspire … toward the very highest ethical ideals of the profession.” They include A) Beneficence and Nonmaleficence, B) Fidelity and Responsibility, C) Integrity, D) Justice, and E) Respect for People’s Rights and Dignity. Taken together, these principles and the rules they give rise to govern psychologists’ behavior in all areas of professional practice (e.g., therapy, research, education, public service) and describe how we must:

  • resolve conflicts of interest among our domains of practice, and between that practice and the law;
  • interact with our colleagues and clients in a way that guarantees they understand what we are doing, why we are doing it, and what we think the consequences of our collective actions might be;
  • define the limits of our professional competence, as well as that of our colleagues and clients;
  • and address any mistrust or harmful effects that arise from our professional conduct, and do so in a meaningful and timely fashion.


Image credit: American Psychological Association

Of course, psychology’s model is not the only plausible source for a data science code of ethics. Data for Democracy, for example, has attempted to crowdsource a code of ethical conduct from the data science community itself, an effort supported by former U.S. Chief Data Scientist and data ethics evangelist, D.J. Patil. Others have proposed a data science code of ethics based on the Hippocratic Oath or the code of ethics for the National Association of Social Workers. Each of these approaches has its strengths and weaknesses, but none seems to offer the comprehensive perspective that the APA Code of Ethics does.

However we ultimately choose to resolve the crafting of a data science code of ethics, there are a few things we can be sure of. First, we as data scientists need to break the bad habit of asking for forgiveness rather than permission. If we don’t, the general public will become so mistrustful of us that they refuse to provide us with the honest and representative data we need to do our jobs well. Second, we need to avoid falling prey to the entropy of procrastination. Otherwise, we will find our own code of ethics defined for us piecemeal by various government entities, the majority of which have members who know far less about the ethics of human subjects research and data science technology than they do about their current polling numbers and chances for re-election.

Psychology as a discipline runs the gamut from social science to biological science, and thus has constructed its code of ethical conduct to function effectively in diverse intellectual, cultural, and professional contexts. Given that data science is facing a comparably Herculean but highly related task, it seems both reasonable and efficient for our young discipline to take advantage of the insight that psychology can offer and base our own code of ethics on its well-validated model.

 

Immanuel Kant, Jean-Jacques Rousseau, and Mark Zuckerberg Walk Into a Bar …
A Philosophy Beginner’s Analysis of the Facebook/Cambridge Analytica Scandal

My best friend in high school majored in philosophy in college; I majored in computer science. “It’s pointless to debate philosophy,” I would say, “You can’t deduce one true answer and it’s not going to be applicable in the real world!” She would try to explain things like existentialism and phenomenology to me while the only philosophers I could name were the ones that had mathematical theorems named after them. As I made my way into my master’s program in data science, I found myself suddenly surrounded by moral philosophy. It seems like every day there is a new story about ethical quandaries surrounding the value, privacy, collection, and retention of information.

Facebook has frequently been in the ethical hot seat for its sometimes precarious handling of user data. Most recently, Facebook allowed Cambridge Analytica to collect personally identifiable information from millions of users for the purpose of influencing voter opinions on behalf of politicians paying for its services (Newcomb, 2018). Cambridge Analytica misrepresented itself in order to use Facebook data for nefarious purposes. The ethical question here is: is Facebook responsible for the outcomes of the apps and services it provides user data to?

I wonder what the philosophers my friend talked about would have to say about Facebook’s behavior.


Groening Philosophers

Immanuel Kant held that the categorical imperative is the basic principle for determining whether an action is ethically correct. He proposed three formulations of it:

1. Something is considered ethically correct if it can be made into universal law.
2. A person should be treated as an end and not the means to achieve an end.
3. Each individual should act as a member of an ideal kingdom where he or she is both the ruler and subject at the same time.

Kant attributes ethical goodness or badness to the action itself, not the outcome of the action. In this sense, I think Kant would find Facebook in the wrong: even though Facebook didn’t know that Cambridge Analytica was out to do bad things, Facebook knew how much information it was making available to all apps and how its users had historically responded to having their privacy invaded. Whether or not apps were exploiting user data (the outcome), Facebook making that data available was still unethical. Facebook used its users as a means to an end. Mark Zuckerberg is a Facebook user himself, but I wonder if he deliberates both as the king of the Facebook kingdom and as a subject.


Social Contract

I think Jean-Jacques Rousseau might have a slightly different take on it. He garnered a lot of recognition for his written work on political philosophy, The Social Contract. It sounds awfully close to The Social Network, though Rousseau’s wasn’t nominated for multiple Academy Awards. Rousseau believed in a social contract between a governing body and its people, and claimed that people trade some individual rights for the benefits provided to society as a whole (Delaney). As I think he might see it, Facebook could represent a governing body: Facebook offers its users a global platform to correspond with other users and, if they choose to participate, they give up some of their individual rights to that information in exchange for access they wouldn’t otherwise have. Even though this might seem to be in Facebook’s favor, Rousseau also believed that laws are binding only when they are supported by the people. Through this lens, Facebook failing to be governed by the wishes of its users breaks the contract. I think Rousseau would agree with Kant, even though they would arrive at the conclusion in different ways.

Maybe Rousseau and Kant wouldn’t have seen it that way at all. Everyone has a different opinion, and problems don’t always fit neatly into the “good” or “bad” box. Socrates (or Plato, depending on who you ask) had a much simpler view: there is only one good, knowledge, and one evil, ignorance (Ambury). I think he would see Facebook’s greatest offense as exactly that: obliviousness to the potential negative outcomes that Cambridge Analytica exploited. Undergraduate-me was right about one thing: there isn’t one true answer. It’s arguable how much culpability Facebook bears for the outcomes of the Cambridge Analytica scandal, but many users (and probably philosophers) agree that Facebook acted unethically. Undergraduate-me was wrong too, though. Discussions about these issues are critical, and we need to continue to have them to ensure that we hold others accountable for their actions.

Works Cited

Ambury, J. (n.d.). Socrates (469 – 399 B.C.E). Retrieved September 21, 2018, from https://www.iep.utm.edu/socrates/

Delaney, J. (n.d.). Jean-Jacques Rousseau (1712 – 1778). Retrieved September 21, 2018, from https://www.iep.utm.edu/rousseau/

Delaney, J. (n.d.). [Jean-Jacques Rousseau]. Retrieved September 20, 2018, from https://www.iep.utm.edu/rousseau/

Ellerton, P. (n.d.). [Groening Philosophers]. Retrieved September 20, 2018, from https://pactiss.org/wp-content/uploads/2011/09/SP.jpg

Loaiza, A. (2017, December 20). [The Social Network]. Retrieved September 20, 2018, from https://theimpactnews.com/columnists/the-new-girl-on-the-block/2017/12/20/reaction-to-the-social-network/

Newcomb, A. (2018, March 24). A timeline of Facebook’s privacy issues — and its responses. Retrieved September 21, 2018, from https://www.nbcnews.com/tech/social-media/timeline-facebook-s-privacy-issues-its-responses-n859651

Thursday, October 25, 5-7pm, followed by reception

UC Berkeley, South Hall Room 210

Open to the public!

RSVP is required.

Understanding how to protect your personal digital security is more important than ever. Confused about two factor authentication options? Which messaging app is the most secure? What happens if you forget your password manager password, or lose the phone you use for 2 factor authentication? How do you keep your private material from being shared or stolen? And how do you help your friends and family consider the potential dangers and work to prevent harm, especially given increased threats to vulnerable communities and unprecedented data breaches?

Whether you are concerned about snooping family and friends, bullies and exes who are out to hack and harass you, thieves who want to impersonate you and steal your funds, or government and corporate spying, we can help you with this fun, straightforward training in how to protect your information and communications.

Join us for a couple of hours of discussion and hands-on setup. We’ll go over various scenarios you might want to protect against, talk about good tools and best practices, and explore trade-offs between usability and security. This training is designed for people at all levels of expertise, and for those who want both personal and professional digital security protection.

Refreshments and hardware keys provided! Bring your laptop or other digital device. Take home a hardware key and better digital security practices.

This crash course is sponsored by the Center for Technology, Society & Policy and generously funded by the Charles Koch Foundation. Jessy Irwin will be our facilitator and guide. Jessy is Head of Security at Tendermint, where she excels at translating complex cybersecurity problems into relatable terms, and is responsible for developing, maintaining and delivering comprehensive security strategy that supports and enables the needs of her organization and its people. Prior to her role at Tendermint, she worked to solve security obstacles for non-expert users as a strategic advisor, security executive and former Security Empress at 1Password. She regularly writes and presents about human-centric security, and believes that people should not have to become experts in technology, security or privacy to be safe online.

RSVP here!

Article published in Nature

September 22nd, 2018

Dr. Blumenstock’s article, “Don’t forget people in the use of big data for development,” was published in the journal Nature.

Blumenstock receives Hellman Award

September 20th, 2018

Prof. Blumenstock was named as a 2018 Hellman Fellow for his project, “Evaluating Community Cellular Networks: How Does Mobile Connectivity Affect Isolated Communities?”

by Santiago Molina and Gordon Pherribo, CTSP Fellows

This is the first in a series of posts on the project “Democratizing” Technology: Expertise and Innovation in Genetic Engineering

When we think about who is making decisions that will impact the future health and wellbeing of society, we would hope that these individuals wield their expertise in a way that addresses the social and economic issues affecting our communities. Scientists often fill this role: for example, an ecologist advising a state environmental committee on river water redistribution [1], a geologist consulting for an architectural team building a skyscraper [2], an oncologist discussing the best treatment options based on a patient’s diagnosis and values [3], or an economist brought in by a city government to help develop a strategy for allocating grants to elementary schools. Part of the general contract between technical experts and their democracies is that experts inform relevant actors so that decisions are made with the strongest possible factual basis.

The examples above describe scientists going outside the boundaries of their disciplines to present to people outside the scientific community, “on stage” [4]. But what about decisions made by scientists behind the scenes about new technologies that could affect more than daily laboratory life? In the 1970s, genetic engineers used their technical expertise to make a call about an exciting new technology, recombinant DNA (rDNA). This technology allowed scientists to mix and add DNA from different organisms, later giving rise to engineered bacteria that could produce insulin and eventually to transgenic crops. The expert decision-making process and outcome, in this case, had little to do with the possibility of commercializing biotechnology or the economic impacts of GMO seed monopolies. This happened before the patenting of whole biological organisms [5] and before the use of rDNA in plants in 1982. Instead, the emerging issues surrounding rDNA were dealt with as a technical issue of containment. Researchers wanted to ensure that anything tinkered with genetically stayed not just inside the lab, but inside specially marked and isolated rooms in the lab, eventually giving rise to the well-established institution of biosafety. A technical fix, for a technical issue.

Today, scientists are similarly engaged in a process of expert decision-making around another exciting new technology, the CRISPR-Cas9 system. This technology allows scientists to make highly specific changes, or “edits,” to the DNA of virtually any organism. Following the original publication showing that CRISPR-Cas9 could be used to modify DNA in a “programmable” way, scientists have developed the system into a laboratory toolbox, and laboratories across the life sciences are using it to tinker away at bacteria, butterflies, corn, frogs, fruit flies, human liver cells, nematodes, and many other organisms. Maybe because most people do not have strong feelings about nematodes, most of the attention this technology has received, in both popular news coverage and expert circles, has concerned whether modifications that could affect human offspring (i.e., germline editing) are moral.

We have been interviewing faculty members directly engaged in these critical conversations about the potential benefits and risks of new genome editing technologies. As we continue to analyze these interviews, we want to better understand the nature of these backstage conversations and learn how the experiences and professional development activities of these experts influenced their decision-making. In subsequent posts we’ll share some of our findings from these interviews, which so far have highlighted the role of a wide range of technical experiences and skills among the individuals engaged in these discussions, the strength of personal social connections and reputation in securing a seat at the table, and the dynamic nature of expert decision-making.

[1]  Scoville, C. (2017). “We Need Social Scientists!” The Allure and Assumptions of Economistic Optimization in Applied Environmental Science. Science as Culture, 26(4), 468-480.

[2] Wildermuth and Dineen (2017) “How ready will Bay Area be for next Quake?” SF Chronicle. Available online at: https://www.sfchronicle.com/news/article/How-ready-will-Bay-Area-be-for-next-big-quake-12216401.php

[3] Sprangers, M. A., & Aaronson, N. K. (1992). The role of health care providers and significant others in evaluating the quality of life of patients with chronic disease: a review. Journal of clinical epidemiology, 45(7), 743-760.

[4] Hilgartner, S. (2000). Science on stage: Expert advice as public drama. Stanford University Press.

[5] Diamond v. Chakrabarty (1980) upheld the first patent on a whole organism (a bacterium that could digest crude oil).