by Rhiannon Koch, University of Adelaide
The ecological filter radiologists use to examine clinical information. Credit: The Lancet Digital Health (2024). DOI: 10.1016/S2589-7500(24)00095-5
While the use of artificial intelligence for medical diagnosis is growing, new research from the University of Adelaide has found there are still major hurdles to overcome when AI is compared with a clinician.
In a paper published in The Lancet Digital Health, Australian Institute for Machine Learning Ph.D. student Lana Tikhomirov, Professor Carolyn Semmler, and their team from the University of Adelaide draw on external research to investigate what's known as the "AI chasm."
The AI chasm has opened because the development and commercialization of AI decision-making systems have outpaced our understanding of their value for clinicians and how they impact human decision-making.
"This can have consequences such as automation bias (being blind to AI errors) or misapplication," said Tikhomirov. "Misconceptions about AI also restrict our ability to maximize this new technology and augment the human properly.
"Although technology implementation in other high-risk settings, such as increased automation in airplane cockpits, has been previously investigated to understand and improve how it is used, evaluating AI implementation for clinicians remains a neglected area. We should be using AI more like a clinical drug rather than a device."
The research found clinicians are contextually motivated, mentally resourceful decision makers, whereas AI models make decisions without context and without understanding the correlations between data and patients.
"The clinical environment is rich with sensory cues used to carry out diagnoses, even if they are unnoticeable to the novice observer," said Tikhomirov.
"For example, nodule brightness on a mammogram could indicate the presence of a specific type of tumor, or specific symptoms listed on the imaging request form could affect how sensitive a radiologist will be to finding features.
"With experience, clinicians learn which cues guide their attention towards the most clinically relevant information in their environment.
"This ability to use domain-relevant information is known as cue utilization, and it is a hallmark of expertise that enables clinicians to rapidly extract the essential features from the clinical scene while remaining highly accurate, guiding subsequent processing and analysis of specific clinical features.
"An AI model cannot question its dataset in the same way clinicians are encouraged to question the validity of what they have been taught: a practice in the clinical setting called epistemic humility."
More information: Lana Tikhomirov et al, Medical artificial intelligence for clinicians: the lost cognitive perspective, The Lancet Digital Health (2024). DOI: 10.1016/S2589-7500(24)00095-5
Provided by University of Adelaide