Will artificial intelligence (AI) replace radiologists? During a session on AI and imaging yesterday at the Big Data in Biomedicine conference, panelists preempted this question (which keeps some radiologists up at night) by clarifying how, at least for now, AI isn’t a replacement for doctors, but a tool to help them.
“The human-machine system always performs better than either alone,” said Curt Langlotz, MD, PhD, a professor of radiology and biomedical informatics at Stanford. And while AI is achieving human-level performance, it’s not necessarily superseding it — yet.
All panelists spoke about AI’s capacity to increase efficiency. With deep learning, AI can identify patterns across vast image datasets, with volumes in the petabytes (a 1 followed by 15 zeros), enabling computer-aided detection and classification of disease.
As an example of this efficiency in workflow, Langlotz explained how, as a chest radiologist, he has “70 ICU chest x-rays ready for reading” every morning. A small fraction will contain an abnormality, but he doesn’t know which ones. “It would be great if there was an algorithm to flag those, pull them to the top of my list so I could see those first,” he said.
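That kind of triage step can be pictured as a simple reordering of the reading worklist by a model’s abnormality score. Here is a minimal sketch; the `score` field stands in for the output of a hypothetical trained model, and the study IDs are invented for illustration:

```python
# Minimal sketch of worklist triage: sort studies so those a model
# flags as likely abnormal rise to the top of the reading list.
# "score" is a stand-in for a trained model's abnormality probability.

def triage(worklist):
    """Return the worklist ordered by descending abnormality score."""
    return sorted(worklist, key=lambda study: study["score"], reverse=True)

studies = [
    {"id": "cxr-001", "score": 0.03},
    {"id": "cxr-002", "score": 0.91},  # likely abnormal: read first
    {"id": "cxr-003", "score": 0.12},
]

ordered = triage(studies)
print([s["id"] for s in ordered])  # most suspicious study first
```

The radiologist still reads every study; the model only changes the order, so a false negative is delayed rather than missed.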
A second optimistic theme of the panel was AI’s potential reach in the developing world, where physicians and specialists are scarce and early, accurate diagnoses can make a substantial difference.
Panelist Greg Moore, MD, PhD, VP of healthcare for Google Cloud, described how AI could address “scarcity and error” in underserved areas of the world. Billions of people live in “radiology scarce zones,” he pointed out, and more than “43 million people are affected by medical errors annually.”
Google’s first medical imaging project was a deep learning algorithm to recognize diabetic retinopathy, the fastest-growing cause of blindness. In countries like India, where a shortage of specialists has meant that “45 percent of patients went blind before a diagnosis,” AI can help recognize the condition so it can be treated earlier.
Similarly, Justin Ko, MD, medical director and service chief of medical dermatology for Stanford Health Care, spoke about the creation of a deep neural network to analyze and identify precancerous lesions. He asked inspiring questions: “Could we eradicate melanoma because we can catch it earlier? Can we extend diagnosis to remote areas of the world?”
AI is evolving rapidly, but radiologists have a job for the foreseeable future, the panelists agreed.
Radiologists still need to validate reports, and humans have the advantage of being able to examine the patient holistically. Ko added, “Context is everything. We [dermatologists] don’t look at a lesion in isolation. We look at the rest of the skin… rather than a single artificial task.”
Langlotz also reiterated a note of caution about expecting AI to quickly replicate insights that “humans have developed for decades.”
During his presentation, John Axerio-Cilies, PhD, CTO of Arterys, a medical imaging startup, explained how his company is addressing patient privacy and negotiating regulations, two of the complex and far-from-resolved issues that make AI challenging to scale. “There’s a lot of infrastructure required,” he noted.
Progress has been made in building large image datasets, but the panelists pointed out that important next steps remain: integrating different types of data and creating consistent standards for the many stakeholders who move this data around. In short, more work needs to be done.
Previously: Big Data in Biomedicine Conference kicks off on Wednesday, Enlisting artificial intelligence to assist radiologists and Artificial intelligence could help diagnose tuberculosis in remote regions, study finds
Photo of panel by Rod Searcey