Bias in Artificial Intelligence: What Nurse Leaders Need to Know | Knowledge Brush-Up

It is hard to imagine technology having racial or gender bias. But as Artificial Intelligence (AI) becomes more common in healthcare, there are important issues nurse leaders should be aware of.


In honor of Martin Luther King, Jr. Day, it seems appropriate to reflect on the conversations about race in the last year. One that stands out in my mind, and that may have the biggest impact for years to come, is the decision by major technology companies to halt the use of facial recognition software.

IBM went the furthest, deciding to stop developing or offering facial recognition software altogether. Microsoft and Amazon paused law enforcement use of their products to give lawmakers time to come up with rules.

Why? Because several studies - including one by the federal government published in 2019 - concluded this software shows bias against minorities and women. It misidentifies people of color and women more often than white men, and in some cases was up to 100 times more likely to produce a false positive.

The implications go beyond facial recognition software itself. As Artificial Intelligence (AI) becomes more common in healthcare, nurse leaders should be asking questions:

  • Is the Artificial Intelligence we use biased?
  • How are we introducing bias to AI?
  • Could AI make existing health disparities even worse?

If you are new to the topic of Artificial Intelligence in healthcare, take a look at my Guide to AI for Nurses.

How Can AI Be Biased?

Most of us think technology is more objective than humans. We know there can be defects that cause strange glitches. But it is hard to imagine technology having racial or gender bias.

Machine learning is a form of AI in which computer systems learn and adapt without being explicitly programmed. In order to learn, the systems must be trained on large sets of data in which the information is tagged appropriately. For example, if you want to train a system to recognize orange cats, you would give it 1,000 pictures of different types of cats and label which ones are orange.
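To make that concrete, here is a minimal sketch of supervised machine learning in Python. The cats, their numeric "features," and the labels are all invented for illustration; a real system would learn from thousands of labeled images rather than a handful of made-up numbers.

```python
# A minimal, invented example of supervised machine learning:
# the system is given labeled examples and learns a rule from them.
from sklearn.linear_model import LogisticRegression

# Each "cat" is described by made-up numeric features:
# [redness of fur, stripe density, fur lightness], each on a 0-1 scale
training_cats = [
    [0.90, 0.70, 0.60],  # orange tabby
    [0.80, 0.60, 0.50],  # orange tabby
    [0.85, 0.50, 0.55],  # orange tabby
    [0.10, 0.00, 0.10],  # black cat
    [0.20, 0.10, 0.90],  # white cat
    [0.15, 0.40, 0.30],  # gray tabby
]
labels = [1, 1, 1, 0, 0, 0]  # the human-supplied tags: 1 = orange, 0 = not orange

# The model "learns" the relationship between the features and the labels
model = LogisticRegression().fit(training_cats, labels)

# It can then predict a label for a cat it has never seen before
new_cat = [[0.88, 0.65, 0.58]]
print(model.predict(new_cat))  # expected output: [1], i.e. predicted orange
```

The key point for the bias discussion is that the system knows only what those labeled examples teach it, so anything missing or mislabeled in the training data shapes every prediction it makes later.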

The application of machine learning to healthcare raises the question of what these systems are learning, and from whom. There are three problem areas where bias can be introduced: (1) a lack of inclusiveness in the data used to train AI; (2) bias encoded in the data used to train AI; and (3) algorithm errors paired with a lack of human critical thinking.

Lack of Inclusiveness in Data Used to Train AI

One powerful use for AI is in prediction. For example, Netflix uses AI to make better movie recommendations and Amazon uses AI to help you find products you want to purchase faster. Both of these applications of AI use information about you to predict what you may want in the future.

In healthcare, predictive applications could help identify someone likely to develop a health condition, like diabetes, or who will experience side effects of a drug or vaccine before they receive it. But, in order to make predictions, AI has to be trained on those large data sets, which is where bias can occur.

Joy Buolamwini, a researcher at MIT, focused her thesis on a strange interaction she had with the AI in her lab. She was using facial recognition software for a project, but for some reason the software could not tell that she had a face.

When she saw it identify the faces of the other people on her team, she realized it might be because she has dark skin. With further research, she found the problem was a lack of diversity in the data used to train the algorithm.

After testing facial recognition technology from three major software companies, she and her team found error rates of less than 1% for light-skinned men, but over 30% for dark-skinned women. They later discovered the software was trained on a data set that was 77% male and 83% white.
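For readers who want to see what that kind of audit looks like in code, here is a minimal Python sketch that reports an error rate per demographic group instead of a single overall number. The counts are hypothetical, chosen only to echo the gap the researchers reported, not their actual data.

```python
# Hypothetical audit results: total test images and misidentified images per group.
# These numbers are invented to illustrate the idea of a disaggregated audit.
results = {
    "lighter-skinned men":  {"total": 100, "errors": 1},
    "darker-skinned women": {"total": 100, "errors": 35},
}

# A single overall error rate hides the disparity...
overall_total = sum(group["total"] for group in results.values())
overall_errors = sum(group["errors"] for group in results.values())
print(f"Overall error rate: {overall_errors / overall_total:.0%}")

# ...while breaking it out by group exposes it.
for name, group in results.items():
    print(f"{name}: {group['errors'] / group['total']:.0%} error rate")
```

The same disaggregated check applies directly to clinical algorithms: an impressive overall accuracy can conceal a much higher error rate for one group of patients.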

When we extend this issue into healthcare, it is easy to imagine how a lack of inclusive training data could make health disparities worse. An algorithm for identifying melanoma that was trained only on light skin would miss cases on dark skin. It is similar to how women were excluded from cardiac research, and researchers later discovered that women present heart attack symptoms differently than men.

Bias Encoded In Data Used to Train AI

Bias can also enter AI through the way training data is tagged and categorized. For example, clinical notes are widely used in areas like psychology and social work. Natural Language Processing - a form of AI that understands and interprets human language - can extract information from those notes for machine learning.

However, that means bias baked into those notes can carry over as well. If a social worker routinely describes female patients as 'dramatic,' those notes could lead to algorithms that cannot detect anxiety disorders in women.
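As a toy illustration of how that happens, the sketch below turns two invented clinical notes into the word counts a machine-learning model would actually train on. The notes and wording are made up, and real clinical NLP pipelines are far more sophisticated, but the principle is the same: the model only ever sees the language that was written down.

```python
# Toy illustration only: two invented notes describing similar symptoms,
# documented differently for a male patient and a female patient.
from sklearn.feature_extraction.text import CountVectorizer

notes = [
    "Patient anxious, reports chest tightness and trouble sleeping.",  # male patient
    "Patient dramatic today, tearful, reports trouble sleeping.",      # female patient
]

# Many NLP pipelines start by converting free text into word counts;
# whatever bias is in the wording is now baked into the training features.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(notes)

print(vectorizer.get_feature_names_out())  # the vocabulary the model will "see"
print(features.toarray())                  # one row of word counts per note
```

If anxiety in women is routinely charted as 'dramatic' rather than 'anxious,' the word 'anxious' never appears in their examples, and a model trained on these counts learns to associate anxiety with how it was documented for men.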

This is not a far-fetched example. A recent study used Google's cloud image recognition service to evaluate pictures of male and female politicians. It gave men labels like 'official,' 'white collar worker,' and 'business person,' while it gave women labels like 'smile,' 'beauty,' and 'hairstyle.'

This happened because the datasets of labeled photos used to train these algorithms already contained gender bias, such as showing women cooking and men going to work. The large technology companies are trying to address this problem by being transparent about the bias in their AI rather than correcting it. It is basically like putting a nutrition label on fast food.

Algorithm Errors and Lack of Human Critical Thinking

One of the most prominent examples of AI bias is the criminal risk assessment instrument used to estimate how likely someone is to commit another crime. This algorithm made headlines when a tool used in Wisconsin, New York, California, and Florida labeled African American defendants twice as likely to commit another crime as white defendants.

The criminal risk algorithms take information such as where the defendant lives and their employment status and use it to create a score. Judges in states with this tool have used the score to impose harsher sentences.

In one widely reported case, an African American teenager with no prior offenses was scored at higher risk of reoffending for stealing a bicycle than a middle-aged white man who shoplifted and had several prior arrests. Follow-up several years later found the man had gone on to commit grand theft, while the young woman had committed no further crimes.

This example highlights how humans cannot afford to turn off their critical thinking while working with technology. Algorithms can produce errors and are not necessarily more objective.

Moving back to the healthcare context, algorithms are not free from error here either. In a study looking at prediction error in psychiatric readmissions at a New England hospital, the model was found to have a higher error rate when predicting readmission for African American patients than for any other group, and the error rate for women was much higher than for men.

The bottom line is that nurse leaders need to stay alert, questioning, and cautious. As machine learning becomes increasingly involved in healthcare decisions, it will be crucial to look at the impact on different demographic groups.

What Can Nurse Leaders Do to Make Sure AI Supports Health Equity?

There are ways in which AI could help to decrease health disparities if channeled in the right direction. To get there, nurse leaders should advocate for guidelines and a shared common goal of eliminating these disparities. At a minimum we will need:

Counter-bias algorithms to test and correct for systemic discrimination

This should be made a basic part of the process prior to an application's approval for use in healthcare.

Greater diversity in data science training and workforce

We are lucky that Joy Buolamwini was in that MIT lab, working with that technology at that time. But what if she had not been there? What if she had been assigned a different project that did not use facial recognition software? We should not have to rely on luck. Healthcare leaders should push for diversity both at the national level, as a requirement for research funding, and as part of the criteria health systems use when selecting AI.

Education of the healthcare workforce that includes how to evaluate algorithm results

We need to do better than the legal system and question the output, especially when it goes against our better judgment.

In Closing

I believe in the promise of what AI can do for humanity. But I also see how important it is for us to understand what tools we are using, who built them, how they were trained, and their impact. To avoid the pitfalls of the legal field, those of us in healthcare must question the technology.

We cannot simply delegate all critical thinking to the algorithm and hope it is right. We are moving into the next great age of technology, and we must try to leave our biases behind us.

Resources/References

Can AI Help Reduce Disparities in General Medical and Mental Health Care?

Exploring the Potential of Artificial Intelligence to Improve Minority Health and Reduce Health Disparities

Big Data Science: Opportunities and Challenges to Address Minority Health and Health Disparities in the 21st Century

AI Could Worsen Health Disparities

Lisa Brooks, RN, MSN, MBA is a nurse and writer on a mission to help people transition to the digital health era.


'Humans cannot afford to turn off their critical thinking while working with technology. Algorithms can produce errors and are not necessarily more objective.'

That statement essentially recognizes that humans are the problem! 

The first solution is to stop promoting people simply because they accumulate qualifications and instead promote only those who have ability, insight, and awareness. These are the people who are more likely to train their teams to account for bias and to think outside of the box like they do.

We are obsessed with qualifications in nursing, when the healthcare we dispense can be delivered by the individuals who like working the floors and are equally or better equipped to recognize problems and address them effectively. It's their passion, as opposed to someone on the academic track who, in my opinion, has more esoteric thoughts and frequently fails to recognize the obvious in front of them.

This is about AI in healthcare, and NOTHING beats the abilities of a human brain with experience. AI has its place in sifting data, pattern recognition, lab tests, etc. It works as a tool for our benefit, BUT just as Tesla, GM, BMW, etc. are finding out with self-driving cars, AI appears to work only when the environment is OPTIMIZED for its algorithms.

When the conditions are unchanging and there are multiple backups, checks, and balances for redundancy purposes, AI is probably unbeatable!

Specializes in ED, Tele, MedSurg, ADN, Outpatient, LTC, Peds.

Excellent article! Got me thinking---! I am wondering if I should have a conversation with our informatics team, who are working on bundles of data for different DRGs like CHF, MI, Sepsis, etc.!

Thanks, Lisa, for such an awesome and informative post! Because racism and bias are present in healthcare, and technology is necessary for this era, it's the human who creates a crack in the system that allows for the bias in AI to exist.  This is a wake-up call for all members of the healthcare team. Please continue to get the word out and into the ears of healthcare leadership. The patients and clients we serve matter!

On 1/21/2021 at 2:52 PM, spotangel said:

Excellent article! Got me thinking---! I am wondering if I should have a conversation with our informatics team, who are working on bundles of data for different DRGs like CHF, MI, Sepsis, etc.!

Just wondering if I can throw a science fiction curveball? 

AI involves programming, and we've recently learned of the huge computer hack by Russia. Within the last decade (I can't remember exactly when), Russia also hacked the Eastern seaboard electrical grid and blacked out huge areas for hours, and we saw what Russia recently did to the electrical grid in Ukraine. So I'm just wondering: what happens if a computer lunatic loses someone and decides to crash a hospital system?

We are aware of the idiots attempting to hack, and successfully hacking, company systems and stealing personal information, etc. What happens, in real science fiction terms, when 'ethical insurance companies' decide to hack hospital databases and mine information in order to refuse medical care to potential patients?

What happens when foreign enemies decide to backdoor a system, for example at a military hospital, in the hope of hurting an important hospitalized person?

I seriously think that until AI, like the social media companies, is seriously regulated and studied minutely for potential complications, WE SHOULDN'T BE indulging too deeply. I am actually very afraid of AI because of the lack of human oversight regarding our inbuilt moral and ethical standards.

Even Asimov's three laws were breached! 

Specializes in ED, Tele, MedSurg, ADN, Outpatient, LTC, Peds.

Truth is stranger than fiction!

Again, it comes down to choices: to do right or wrong.

A recent breach of confidentiality came from an employee within an institution who sold patient information for profit, was fired, and hopefully will be prosecuted.

Apparently, you don't need to breach firewalls and layers of IT security. Just misuse your freedom of access to information instead. Luckily, the institution noticed that individual's unusual activities as part of its audits and initiated monitoring to get to the truth.

Like everything else, AI can be used for good and for bad. The final choice is ours.

30 minutes ago, spotangel said:

Truth is stranger than fiction!

Again, it comes down to choices: to do right or wrong.

A recent breach of confidentiality came from an employee within an institution who sold patient information for profit, was fired, and hopefully will be prosecuted.

Apparently, you don't need to breach firewalls and layers of IT security. Just misuse your freedom of access to information instead. Luckily, the institution noticed that individual's unusual activities as part of its audits and initiated monitoring to get to the truth.

Like everything else, AI can be used for good and for bad. The final choice is ours.

Which is why it has to be acutely regulated, because giving autonomy to AI is a Terminator scenario. I know that's extreme thinking, but we have numerous instances OF BAD PEOPLE trying to sway elections using algorithms to facilitate the outcome.

As you say, it can be good or bad, but AI puts individuals or governments in a position to create incredibly harmful effects with very little dollar investment, especially in this fractured world of today. It can be so easily misused, and the penalties are so trivial that they need revisiting.

Vaccine production is software managed. Can you imagine the chaos of any impropriety? Think of the opioids and the Sacklers. Mass murder? No one is going to jail, and they have spirited away their money. This might seem peripheral to our discussion, but if the perpetrators are slapped on the hands, what's to prevent malicious future behaviors?

You cannot have government overreach because of infrastructure; therefore you need stiff penalties and jail time for misuse and related crimes. Too many managers are LAZY AND INCOMPETENT and will mandate AI usage to reduce costs and increase productivity at the expense of lives and potential harm. Install huge penalties, because as with an autonomous car's response to an accident, who is responsible: THE DRIVER OR THE SOFTWARE ENGINEER?

Specializes in ED, Tele, MedSurg, ADN, Outpatient, LTC, Peds.

I hear your concern! The potential ifs are worrisome.

Maybe I am naive, but I believe that we all have a kernel of goodness at our core.

I still have hope in mankind to choose Good over evil!

May the good forces prevail in all scenarios!

The alternative will create fear, confusion, control and chaos!

Peace!

Specializes in Informatics, Managed Care.

There is some great conversation in here @spotangel and @Curious1997! I do think we will need more regulation to prevent bad actors from using AI to hurt others. The scenario of AI being used to hack health information systems is not all that far-fetched. Last year there was an incident in Germany where hackers held a hospital's health information system for ransom, and a woman died as a result (https://www.technologyreview.com/2020/09/18/1008582/a-patient-has-died-after-ransomware-hackers-hit-a-german-hospital/).

With AI tools in the wrong hands, these sorts of incidents could become bigger and more frequent.

Another question with AI in healthcare is: what happens when AI makes a medical mistake? Who is liable? I have an article coming up soon about that, so I will come back to this forum to share it.