The Doctor Will Model You Now: The Rise and Risks of Artificial Intelligence in Healthcare

by Dani Salah | September 23, 2018

As artificial intelligence promises to change many facets of our everyday lives, perhaps nowhere will its impact be greater than in the healthcare industry. The increase in data collected on individuals has already proven paramount in improving health outcomes, and AI is a natural fit for this infrastructure. Computing power orders of magnitude greater than could have been imagined just decades ago, along with increasingly complex models that can learn as well as, if not better than, humans, has already changed healthcare capabilities worldwide.

While the applications for AI today and in the near future hold tremendous potential, there are ethical concerns that must be considered each step of the way. Doctors, whose oaths swear them to uphold the ethical standards of medicine, may soon be pledging to abide by ethical standards of data.

Promising Applications


AI Along Patient Experience

AI has begun demonstrating its potential to touch individuals at every point of their patient experiences. From diagnosis to surgical visit to recovery, AI and robotics can improve the efficiency and accuracy of a patient’s entire health experience. The British healthcare startup Babylon Health demonstrated this year that its product could assign a correct diagnosis 75% of the time, 5% more accurate than the average rate for human physicians. The company has since raised $60 million in funding and plans to expand its algorithmic service to chatbots next (Locker, 2018).

Radiology and pathology are in line for major operational shifts as AI brings new capabilities to medical imaging (Zaharchuk et al., 2018). Although testing is still nascent, results have been promising. Prediction accuracy rates often surpass human abilities here as well; one deep learning network accurately identified breast cancer cells in 100% of the images it scanned (Bresnick, 2018).


AI in medical imaging

But some areas of AI’s potential have less to do with replacing doctors and more to do with making their jobs easier. Medical centers today are deeply concerned with burnout, which is driving professionals across the patient experience to reduce their hours, retire early, or leave medicine altogether. A leading cause of this burnout is administrative work that provides none of the job satisfaction many get from practicing medicine. By some accounts, physicians today spend more than two thirds of their time at work handling paperwork. Increasingly sophisticated data infrastructure in the healthcare industry has done little to change the amount of time required to document a patient’s health records; it has only changed the form in which that data is documented. But automating more of these required tasks could give physicians more time to spend with their patients, increasing their satisfaction in their work, and could even boost the accuracy of data collection and documentation (Bresnick, 2018).

Significant Risks

The many improvements that AI experts promise for healthcare operations certainly do not come without costs. Taking advantage of these advanced analytics possibilities requires feeding the models data – lots and lots of data. With the collection and storage of vast amounts of patient data comes the potential for data breaches, misuse of data, incorrect interpretations, and automated biases. Healthcare, perhaps more than any other industry, requires that patient data be kept private and secure. Indeed, the emerging frameworks for evaluating and protecting population data must be made even more stringent when the data represents extremely telling details of the health and wellness of individual people.

Works Cited

Bresnick, J. (2018). Arguing the Pros and Cons of Artificial Intelligence in Healthcare. [online] Health IT Analytics. Available at: https://healthitanalytics.com/news/arguing-the-pros-and-cons-of-artificial-intelligence-in-healthcare [Accessed 24 Sep. 2018].

Bresnick, J. (2018). Deep Learning Network 100% Accurate at Identifying Breast Cancer. [online] Health IT Analytics. Available at: https://healthitanalytics.com/news/deep-learning-network-100-accurate-at-identifying-breast-cancer [Accessed 24 Sep. 2018].

Zaharchuk, G., Gong, E., Wintermark, M., Rubin, D. and Langlotz, C.P. (2018). Deep Learning in Neuroradiology. American Journal of Neuroradiology. [online] Available at: http://www.ajnr.org/content/early/2018/02/01/ajnr.A5543 [Accessed 24 Sep. 2018].

Locker, M. (2018). This Digital Health Startup Hopes Your Next Doctor Will Be an AI. [online] Fast Company. Available at: https://www.fastcompany.com/40590682/this-digital-health-startup-hopes-your-next-doctor-will-be-an-ai [Accessed 24 Sep. 2018].

Healthcare’s Painful: Is HIPAA to blame?

By Keith LoMurray on 9/23/2018

Working in healthcare technology, I regularly hear painful stories about people’s experiences with healthcare. People will often talk about the great care they received from an individual doctor or nurse, but then mention the challenges of navigating the impersonal healthcare bureaucracy. Coordinating care between clinics, waiting for records, signing the same forms multiple times, scheduling appointments: these tasks, which healthcare organizations perform as a normal course of business, often seem unnecessarily burdensome to a patient already dealing with an illness.

Over time you pick up nuggets of information about healthcare, such as the fact that most medical records are still transferred via fax machine, or that only one in three hospitals can send and receive medical records for care that happened outside their organization. HIPAA, the Health Insurance Portability and Accountability Act, has a central role in healthcare, dutifully guarding the patient’s right to privacy. HIPAA sets standards for how and when healthcare information can be shared and imposes severe penalties for violations. Transferring medical records by individual faxes seems antiquated, which raises the question: how much is HIPAA responsible for this painful process?


Image credit: Byrd Pinkerton/Vox

Healthcare technology requires additional effort compared to other industries, which results in slower and more expensive processes. A common challenge involves using a web analytics tool to track the usage of a website. The two largest web analytics vendors, Google Analytics and Adobe Analytics, both advertise the simplicity of adding analytics tracking to a website. Both leverage the IP address of the website visit to set metrics, a method that is not compliant with HIPAA, and neither vendor supports a configuration that doesn’t read the IP address. As a result, healthcare companies using an analytics solution must either find a compliant tool, which presents its own challenges, or implement workarounds to Google or Adobe Analytics for HIPAA compliance. Performing this work may be in the best interest of patient privacy, but it requires healthcare companies to expend additional resources, time, and energy.
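To make the shape of such a workaround concrete, here is a minimal, hypothetical sketch of the server-side approach: a collector scrubs each analytics event before anything leaves the organization, truncating the IP address so it no longer identifies a visitor and dropping free-text fields that might carry health information. The function and field names here are illustrative, not any vendor’s actual API.

```python
# Hypothetical sketch of a server-side scrubbing step for analytics events.
# Raw events never reach the third-party vendor; only scrubbed copies do.
import ipaddress

def truncate_ip(ip: str) -> str:
    """Zero the host bits of an address (last octet for IPv4, everything
    past the /48 prefix for IPv6) so it no longer identifies a visitor."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(net.network_address)

def scrub_event(event: dict) -> dict:
    """Return a copy of an analytics event that is safer to forward:
    only whitelisted fields survive, and the IP is truncated."""
    safe = {k: v for k, v in event.items() if k in {"page", "timestamp"}}
    safe["ip"] = truncate_ip(event["ip"])
    return safe

event = {
    "ip": "203.0.113.77",
    "page": "/appointments",
    "timestamp": 1537660800,
    "search_query": "diabetes clinic near me",  # could reveal health details
}
print(scrub_event(event))
# forwarded event keeps the page and time, but not the full IP or the query
```

A whitelist (rather than a blacklist) is the safer design choice here: any new field a developer adds to the event is excluded by default until someone explicitly decides it is safe to share.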

Another challenge for a healthcare organization is the need for a “business associate agreement” (BAA) with every vendor that handles medical information. BAAs are contracts that specify the safeguards around medical information as well as the liability of each partner involved. In many cases a vendor can’t be used because it won’t sign a BAA. BAAs also require determining who is liable for violations and responsible for penalties. This is a good principle, but the process is slow and requires companies to accept liability. Organizations will often decide to avoid these explicit risks and instead remain on legacy technology, which still carries hidden risks, but without the explicit assertions of liability within a BAA.

As much as requirements within HIPAA slow healthcare companies and make processes more expensive, healthcare companies also make choices that contribute to the problems in healthcare. Interoperability, the ability to share healthcare records across healthcare organizations, has been a goal of the US government since at least the 2009 HITECH Act that included digital health records and interoperability as core standards.

Many healthcare organizations have adopted digital health records, but progress on interoperability has been more limited. Sharing medical data in a secure manner is already complicated, but interoperability is also deprioritized for other reasons. There is no incentive to share records when a patient is switching healthcare providers; for hospital systems, reducing the burden of medical record sharing could make it easier to lose a customer to a competitor. HIPAA allows sharing of records across organizations for patient care, so the lack of interoperability can’t be blamed entirely on HIPAA.

There are many challenges with HIPAA, and in many situations it makes healthcare companies move slower and become more risk averse than other technology companies. But it also makes healthcare technology companies think explicitly about the risks they take and prioritize strategies to protect a person’s medical data. One challenge is that other industries haven’t historically prioritized protecting and securing a person’s data to the degree HIPAA requires, so when it comes to protecting personal data, healthcare companies are at the forefront, building the technology to support these standards. Compare this to when healthcare companies adopt technologies such as cloud computing: there they are leveraging a second-wave technology that was refined elsewhere. Perhaps if more companies prioritized protecting user data, they could help healthcare companies fix some of the unnecessary burdens in healthcare that cause patients additional heartache.

Been There, Done That: What Data Science Can Learn from Psychology


by Kim Darnell on September 20, 2018

In the wake of recent revelations regarding misuses and abuses of personal data by a variety of well-known and successful companies, as well as the growing evidence that big data are being used to actively perpetuate and increase socioeconomic inequality, it might seem like data science as a discipline has wandered so far down a dark ethical path that there is no clear map to recovery.


Image credit: Getty Images

As a professor of psychology and a data scientist, however, I see something very different: A young field that is still trying to figure out how to maximize its potential in a fast-paced, dynamic world while still following a stable, practical moral compass. Psychology was there once, too, performing highly controversial studies, such as the Milgram experiment, which showed everyday Americans that, like their counterparts in Nazi Germany, they would engage in the potentially life-threatening torture of strangers if instructed to do so by an authority figure. Or the Stanford prison experiment, which revealed that even the most privileged among us can become predators or prey at the flip of a coin when placed in a prison environment.


Image credit: Yale University Manuscript and Archives

Today, data scientists and those who employ them are struggling publicly, if not painfully, to find the right balance between getting the data they want while respecting the rights of those they get the data from. Fortunately, psychology can offer a detailed and time-tested framework for making that struggle less difficult.

In the United States, any licensed psychologist or employee of a training program approved by the American Psychological Association (APA) is bound by the Ethical Principles of Psychologists and Code of Conduct, also known as the APA Code of Ethics. This code is centered on five principles that are intended to “guide and inspire … toward the very highest ethical ideals of the profession.” They include A) Beneficence and Nonmaleficence, B) Fidelity and Responsibility, C) Integrity, D) Justice, and E) Respect for People’s Rights and Dignity. Taken together, these principles and the rules they give rise to govern psychologists’ behavior in all areas of professional practice (e.g., therapy, research, education, public service) and describe how we must:

  • resolve conflicts of interest among our domains of practice, and between that practice and the law;
  • interact with our colleagues and clients in a way that guarantees they understand what we are doing, why we are doing it, and what we think the consequences of our collective actions might be;
  • define the limits of our professional competence, as well as that of our colleagues and clients;
  • and address any mistrust or harmful effects that arise from our professional conduct, and do so in a meaningful and timely fashion.


Image credit: American Psychological Association

Of course, psychology’s model is not the only plausible source for a data science code of ethics. Data for Democracy, for example, has attempted to crowdsource a code of ethical conduct from the data science community itself, an effort supported by former U.S. Chief Data Scientist and data ethics evangelist D.J. Patil. Others have proposed a data science code of ethics based on the Hippocratic Oath or on the code of ethics of the National Association of Social Workers. Each of these approaches has its strengths and weaknesses, but none seems to offer the comprehensive perspective that the APA Code of Conduct does.

However we ultimately choose to resolve the crafting of a data science code of ethics, there are a few things we can be sure of. First, we as data scientists need to break the bad habit of asking for forgiveness rather than permission. If we don’t, the general public will become so mistrustful of us that they refuse to provide us with the honest and representative data we need to do our jobs well. Second, we need to avoid falling prey to the entropy of procrastination. Otherwise, we will find our own code of ethics defined for us piecemeal by various government entities, the majority of which have members who know far less about the ethics of human subjects research and data science technology than they do about their current polling numbers and chances for re-election.

Psychology as a discipline runs the gamut from social science to biological science, and thus has constructed its code of ethical conduct to function effectively in diverse intellectual, cultural, and professional contexts. Given that data science is facing a comparably Herculean but highly related task, it seems both reasonable and efficient for our young discipline to take advantage of the insight that psychology can offer and base our own code of ethics on its well-validated model.

 

Immanuel Kant, Jean-Jacques Rousseau, and Mark Zuckerberg Walk Into a Bar

A Philosophy Beginner’s Analysis of the Facebook/Cambridge Analytica Scandal

My best friend in high school majored in philosophy in college; I majored in computer science. “It’s pointless to debate philosophy,” I would say, “You can’t deduce one true answer and it’s not going to be applicable in the real world!” She would try to explain things like existentialism and phenomenology to me while the only philosophers I could name were the ones that had mathematical theorems named after them. As I made my way into my master’s program in data science, I found myself suddenly surrounded by moral philosophy. It seems like every day there is a new story about ethical quandaries surrounding the value, privacy, collection, and retention of information.

Facebook has frequently been in the ethical hot seat for its sometimes precarious handling of user data. Most recently, Facebook allowed Cambridge Analytica to collect personally identifiable information from millions of users for the purpose of influencing voter opinions on behalf of the politicians paying for its services (Newcomb, 2018). Cambridge Analytica misrepresented itself in order to use Facebook data for nefarious purposes. The ethical question here is: is Facebook responsible for the outcomes of the apps and services to which it provides user data?

I wonder what the philosophers my friend talked about would have to say about Facebook’s behavior.


Groening Philosophers

Immanuel Kant believed that the categorical imperative is the basic principle for determining whether one’s action is ethically correct. He proposed three formulations of it:

1. Something is considered ethically correct if it can be made into universal law.
2. A person should be treated as an end and not the means to achieve an end.
3. Each individual should act as a member of an ideal kingdom where he or she is both the ruler and subject at the same time.

Kant attributes ethical goodness or badness to the action itself, not the outcome of the action. In this sense, I think Kant would find Facebook in the wrong: even though Facebook didn’t know that Cambridge Analytica was out to do bad things, Facebook knew how much information it was making available to all apps and how its users had historically responded to having their privacy invaded. Whether or not apps were exploiting user data (the outcome), Facebook making that data available was still unethical. Facebook used its users as a means to an end. Mark Zuckerberg is a Facebook user, but I wonder if he deliberates both as the king of the Facebook kingdom and as a subject.


Social Contract

I think Jean-Jacques Rousseau might have a slightly different take. He garnered a lot of recognition for his work on political philosophy, The Social Contract. Sounds awfully close to The Social Network, though Rousseau’s wasn’t nominated for multiple Academy Awards. Rousseau believed in a social contract between a governing body and its people, and claimed that people trade some individual rights for the benefits provided to society as a whole (Delaney). So, as I think he might see it, Facebook could represent a governing body: Facebook offers its users a global platform to correspond with other users, and, if they choose to participate, they give up some of their individual rights to that information in exchange for access they wouldn’t otherwise have. Even though this might seem to be in Facebook’s favor, Rousseau also believed that laws are binding only when they are supported by the people. Through this lens, Facebook failing to be governed by the wishes of its users breaks the contract. I think Rousseau would agree with Kant, even though they would arrive at the conclusion in different ways.

Maybe Rousseau and Kant wouldn’t have seen it that way at all. Everyone has a different opinion, and problems don’t always fit neatly into the “good” or “bad” box. Socrates (or Plato, depending on who you ask) had a much simpler view: he said that there is only one good, knowledge, and one evil, ignorance (Ambury). I think he would see Facebook’s greatest offense as just that: obliviousness to the potential negative outcomes that Cambridge Analytica exploited. Undergraduate-me was right about one thing: there isn’t one true answer. It’s arguable how much culpability Facebook bears for the outcomes of the Cambridge Analytica scandal, but many users (and probably philosophers) agree that Facebook acted unethically. Undergraduate-me was wrong too, though. Discussions about these issues are critical, and we need to continue to have them to ensure that we hold others accountable for their actions.

Works Cited

Ambury, J. (n.d.). Socrates (469 – 399 B.C.E). Retrieved September 21, 2018, from https://www.iep.utm.edu/socrates/

Delaney, J. (n.d.). Jean-Jacques Rousseau (1712 – 1778). Retrieved September 21, 2018, from https://www.iep.utm.edu/rousseau/

Delaney, J. (n.d.). [Jean-Jacques Rousseau]. Retrieved September 20, 2018, from https://www.iep.utm.edu/rousseau/

Ellerton, P. (n.d.). [Groening Philosophers]. Retrieved September 20, 2018, from https://pactiss.org/wp-content/uploads/2011/09/SP.jpg

Loaiza, A. (2017, December 20). [The Social Network]. Retrieved September 20, 2018, from https://theimpactnews.com/columnists/the-new-girl-on-the-block/2017/12/20/reaction-to-the-social-network/

Newcomb, A. (2018, March 24). A timeline of Facebook’s privacy issues — and its responses. Retrieved September 21, 2018, from https://www.nbcnews.com/tech/social-media/timeline-facebook-s-privacy-issues-its-responses-n859651