Every day, physicians make decisions that directly affect patient wellbeing, such as whether to refer a patient for further evaluation. In breast cancer screening, a 2% chance of malignancy is the threshold at which a radiologist refers the patient for further study. How do radiologists make that close call when reading a mammogram, and what are the consequences of being too cautious, or too lax?
In a recent episode of "The Future of Everything," host Russ Altman, MD, PhD, and guest Ross Shachter, PhD, discussed the role of AI in improving malignant tumor detection accuracy, and the ways AI and physicians can work together to make the best decision for patients.
In the interview, Shachter explains that the stakes are high for radiologists diagnosing patients from a single image, as with a mammogram. A more conservative approach may produce a false positive, in which a recommendation for further study causes the patient unnecessary worry that they have cancer and consumes resources needlessly. At the other end of the spectrum is a false negative, in which a patient is told they're fine when they are not.
That's why Shachter's recent work examines how integrating probability and decision theory into artificial intelligence applications could help physicians better evaluate patient options and outcomes:
How can they help us improve the performance of physicians making this call between the false-positive and false-negative?
Shachter has also found that radiologists currently lean toward a conservative approach:
They're making a holistic judgment and in our study most of the radiologists were operating at around 1%, so they were being extra cautious and causing a lot more false positives in order to avoid the false negatives. We did see some radiologists in our study who were up around 3%, so there can be a lot of variations among radiologists and within and across radiological practices.
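The trade-off Shachter describes can be framed in simple decision-theoretic terms: referral is worthwhile once the expected cost of a missed cancer outweighs the expected cost of an unnecessary workup. As a rough sketch (the cost ratio below is purely illustrative, chosen so the break-even point lands at the 2% threshold mentioned above, and is not from the episode):

```python
def referral_threshold(cost_fp, cost_fn):
    """Probability of malignancy above which referral has lower expected cost.

    Compare expected costs at probability p:
      refer:       (1 - p) * cost_fp   (workup on a benign case)
      don't refer:      p  * cost_fn   (missed malignancy)
    Referral wins when p * cost_fn > (1 - p) * cost_fp,
    i.e. when p > cost_fp / (cost_fp + cost_fn).
    """
    return cost_fp / (cost_fp + cost_fn)

def should_refer(p_malignant, cost_fp=1.0, cost_fn=49.0):
    # Illustrative assumption: a missed cancer is judged 49x as costly
    # as an unnecessary workup, which puts the break-even point at 2%.
    return p_malignant > referral_threshold(cost_fp, cost_fn)

print(referral_threshold(1.0, 49.0))  # 0.02
print(should_refer(0.03))             # True: refer for further study
print(should_refer(0.01))             # False: below threshold
```

On this framing, a radiologist operating at a 1% threshold is implicitly weighting a missed cancer even more heavily, which is exactly why the cautious end of the spectrum generates more false positives.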
The key is to have AI shine a light on probability, drawing on data about the judgments radiologists tend to make in each kind of case. It's like a guide, Shachter explains:
They can work in partnership with a system that points out to them, whether it's what to look at in the image from detection systems or from our analytical system, whether they want to think twice about this particular judgment. It's their call, but it's better for them to have extra information by working in partnership with the AI system. The AI system is going to be vigilant. It's not going to be emotional. It's not going to be biased. It doesn't get tired.
To learn more about the role of AI in disease detection, and a deeper look into decision theory, give the episode a listen.
Photo by Bill Branson