Study highlights challenges in growing AI use for medical diagnosis 


Press Release | 30 August 2024

A new study has highlighted the challenges posed by the growing use of artificial intelligence (AI) in medical diagnosis and by its integration into clinical practice, and introduces the concept of the "AI chasm."
The University of Adelaide research, published in The Lancet Digital Health, was conducted by Lana Tikhomirov, a PhD student at the Australian Institute for Machine Learning, along with Prof Carolyn Semmler and their team. 
According to the study, the "AI chasm" refers to the gap between the rapid development and commercialization of AI decision-making systems and our understanding of their practical value and impact in clinical settings.
“This gap can lead to issues such as automation bias, where AI system errors are overlooked, or the technology is misapplied,” explained Tikhomirov. “Misconceptions about AI also limit our ability to fully use this technology to complement human expertise.”
The researchers argue that, unlike other high-risk fields such as aviation, where increased automation has been extensively studied, the implementation of AI in clinical environments remains underexplored. They suggest AI should be treated more like clinical drugs than simply as devices.
The study underscores that while clinicians excel at making decisions based on contextual cues and their expertise, AI models lack this critical contextual awareness.
“The clinical environment is rich with sensory cues that aid in diagnosis, even those not immediately noticeable,” Tikhomirov noted. “For instance, the brightness of a nodule on a mammogram can indicate a specific type of tumor, and symptoms listed on imaging request forms can influence a radiologist’s focus.”
Clinicians develop a skill known as cue utilization, allowing them to quickly and accurately discern and prioritize relevant information from the clinical scene. This expertise is rooted in a practice called epistemic humility, where clinicians critically evaluate the validity of their knowledge and data.
In contrast, AI models do not possess this level of critical questioning and are constrained by the limitations of their datasets. Addressing these gaps is essential for maximizing AI’s potential to enhance clinical practice and improve patient care.
