Doctors See the Patient – AI Sees Everything Else
By Andi Morey Peterson | February 23, 2022

As a woman who has never fit the ideal mold of a physically healthy 30-something female, I am quite excited about what machine learning can bring to the healthcare industry. It took me years, nearly a decade of searching and interviewing physicians, to find one who would look beyond the numbers and take my concerns seriously as a woman.

I had all too often felt the gaslighting many women experience when seeking health care. It has been widely reported that women’s complaints are more easily dismissed or brushed off as “normal”. We are seen as anxious and emotional, and it has been shown that we wait longer to receive relief when we report pain[1]. Black women and other minorities have it even worse: black women die during pregnancy at three times the rate of white women[2], are recommended fewer screenings, and are prescribed less pain medication. Knowing this, the question now is: can machine learning help doctors correctly diagnose patients while setting that bias aside? Can it be more objective?

What we can look forward to:

Today, we are already seeing the results of machine learning in our health care systems. Some emergency rooms are using AI to scan paperwork, saving clerical time, and using natural language processing to document conversations between doctors and patients. Researchers are building models that use computer vision to better detect cancer cells[3]. While all of this is very exciting, will it truly improve patient care?

We want to fast-forward to the day when social bias decreases as more machine learning algorithms are used to help doctors make decisions and diagnoses. Especially as gender becomes more fluid, algorithms will be forced to look at more features than what the doctor sees in front of them. In a way, a doctor, with their bias, sees the patient and their demographics, but the algorithms can see everything. And the more algorithms are put to work, the more doctors can streamline their work, decreasing errors and reducing paperwork.

We must remain vigilant:

We know that with these solutions we must be careful. Most solutions will not apply to all patients, and some simply don’t work no matter how much training data we throw at them. IBM Watson’s catastrophic failure to come anywhere close to real physician knowledge is a good example[4]. It saw only the symptoms; it didn’t see the patient. Worse, unlike a well-bounded task such as Jeopardy! (which Watson dominated), what counts as “healthy” is often disputed even among the most senior doctors[5]. The industry is learning this and is now heavily focused on fixing these issues.

However, if one of the goals of AI in healthcare is to remove discrimination, we ought to tread carefully. We cannot just focus on improving the algorithms and fine-tuning the models. Human bias has a way of sneaking into our artificial intelligence systems even when we intend to make them blind. We have witnessed it in Amazon’s recruiting system, which was biased against women, and in facial recognition systems biased against people of color. In fact, we are starting to see it in already-released models for predicting patient outcomes[5]. We must feed these models more accurate and unbiased data; that is the only way to get the best of both worlds. Otherwise, society will have to reckon with the idea that AI can make healthcare disparities worse, not better. The Belmont principle of beneficence, to maximize benefits and minimize potential harms, should be at the forefront of our minds as we expand AI in healthcare[6].
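To make that vigilance concrete, consider auditing a model’s error rates per demographic group instead of trusting a single overall accuracy number. The sketch below (in Python, with entirely hypothetical data and group labels; a real audit would use dedicated fairness tooling) compares how often a classifier misses positive cases in each group:

    import numpy as np

    def false_negative_rate_by_group(y_true, y_pred, groups):
        # For each group, measure how often true positives were missed.
        # A large gap between groups (e.g., missed diagnoses) is a red
        # flag that bias has crept into the model or its training data.
        rates = {}
        for g in np.unique(groups):
            positives = (groups == g) & (y_true == 1)
            if positives.sum() == 0:
                continue
            missed = (y_pred[positives] == 0).sum()
            rates[g] = missed / positives.sum()
        return rates

    # Hypothetical labels, predictions, and patient groups
    y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 0])
    groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

    print(false_negative_rate_by_group(y_true, y_pred, groups))
    # {'A': 0.33, 'B': 1.0} -- the model misses every positive case
    # in group B, which aggregate accuracy alone would hide.

Even a check this simple could flag the kind of outcome-prediction bias described above before a model ever reaches patients.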

My dream of an unbiased AI handling my health care is not quite as close as I had hoped, and my search for a good doctor will continue. In the future, the best doctors will use AI as a tool in their arsenal to help decide what to do for a patient. It will be a practice of art: knowing what to use and when, and, more importantly, knowing when their own biases are coming into play, so that they can treat the patient in front of them and keep contaminated data out of future models. We need the doctor to see the patient and AI to see everything else. We cannot have one without the other.

References:
[1] Northwell Health. (2020). “Gaslighting in women’s health: No, it’s not just in your head.”
[2] CDC. (2021). “Working Together to Reduce Black Maternal Mortality.” Health Equity Features.
[3] Forbes. (2022). “AI For Health And Hope: How Machine Learning Is Being Used In Hospitals.”
[4] Goodwins, Rupert. The Register. (2022). https://www.theregister.com/2022/01/31/machine_learning_the_hard_way/
[5] Nadis, Steve. MIT News. (2022). “The downside of machine learning in health care.”
[6] The Belmont Report. (1979). https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf