Chief Medical Officer Discusses Risks of AI in Healthcare Settings

Dr. Carrie Nelson, the chief medical officer at Amwell, recently commented on the risks of AI in healthcare settings.


Artificial intelligence (AI) is reshaping healthcare, promising transformative changes and new opportunities to enhance patient care and reduce clinical workload.

At the same time, implementing AI technology comes with potential risks, which Dr. Carrie Nelson, the chief medical officer at Amwell, recently discussed in an interview with Healthcare IT News.

"There's talk that ChatGPT could be used to help physicians respond to messages from patients that are received via patient portals – but is that the right use of AI in healthcare? As challenging as that inbox is, we must pause and assess the risks before charging ahead," commented Nelson.

"We also know longstanding healthcare system inequities and bias will insert themselves into and be potentially magnified by AI algorithms. This bias, including the type of data that has and hasn't been collected and documented in our medical records, limits the potential for AI to improve quality of care today, especially for vulnerable populations," she continued.

AI in Healthcare Settings

According to Nelson, some of the potential use cases for AI in healthcare settings are:

  • Documentation
  • Chat-based check-ins
  • Digital behavioral health tools
  • Referral management
  • Authorization requests
  • General administration

Other use cases include "certain aspects of care," as "automation could be used to gather information regarding a patient's complex family medical history or other risk factors for disease," said Nelson.

Nelson went on to explain that AI could be pivotal in healthcare because advances in medical science have outpaced providers' capacity to fully apply the available body of knowledge; AI can serve as a filter, helping to refine that knowledge and aid in diagnosing complex cases or formulating treatment strategies.

"I would love to see us get to a point where physicians are using AI in efficient ways to enhance diagnostic accuracy," Nelson said.

Nelson is hopeful about the future of AI in healthcare: when used correctly, it can allow clinicians to optimize their skills and time and focus on providing the greatest care they can to their patients.

To address the risks associated with AI in healthcare, Nelson said more time is needed with AI-supported care models so that medical professionals can identify the best uses and establish guardrails, as "any margin of error in healthcare is unacceptable. While I'm optimistic, recent data shows that we still have a long way to go."

MaxAttack specializes in Critical Care. Has 9 years experience.

I was reading an article the other day about how a software company already uses AI to provide rapid interpretations of CT scans in strokes. There are literally endless possibilities and I think this will be huge for healthcare. 

As a side note, I found the quote "any margin of error in healthcare is unacceptable" interesting. I kind of understand where she's coming from, but certainties are something we rarely have the benefit of. Risks/benefits, pre-test and post-test probabilities, confidence intervals, sensitivities and specificities. I don't expect AI to replace decision making, but hopefully it will narrow our existing margins of error.
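
To put the pre-test/post-test probability idea in concrete terms, here is a minimal sketch of how Bayes' theorem updates a disease estimate from a test's sensitivity and specificity. The numbers are hypothetical, chosen only for illustration, and are not figures from the article or any specific AI tool.

```python
# Hypothetical illustration of pre-test vs. post-test probability.
# Sensitivity, specificity, and the pre-test estimate below are made-up
# numbers for illustration only, not figures from the article or any AI tool.

def post_test_probability(pre_test: float, sensitivity: float, specificity: float) -> float:
    """Probability of disease after a positive test result (Bayes' theorem)."""
    true_positives = sensitivity * pre_test
    false_positives = (1 - specificity) * (1 - pre_test)
    return true_positives / (true_positives + false_positives)

# A clinician estimates a 10% pre-test probability; suppose an AI read of the
# scan is 90% sensitive and 85% specific.
print(round(post_test_probability(pre_test=0.10, sensitivity=0.90, specificity=0.85), 2))
# 0.4 -> a positive result raises the estimate substantially, but it is still
# far from certainty, which is the point about margins of error above.
```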

Edited by MaxAttack