When it comes to health care, will AI be helpful or harmful?

Stanford Medicine researcher Jonathan Chen discusses the promise and danger of using AI, such as ChatGPT, in medicine.

Artificial intelligence algorithms, such as the sophisticated natural language processor ChatGPT, are raising hopes, eyebrows and alarm bells in multiple industries. A deluge of news articles and opinion pieces, reflecting both concerns about and promises of the rapidly advancing field, often note AI's potential to spread misinformation and replace human workers on a massive scale. According to Jonathan Chen, MD, PhD, assistant professor of medicine, the speculation about large-scale disruptions has a kernel of truth to it, but it misses another element when it comes to health care: AI will bring benefits to both patients and providers.

Chen discussed the challenges with and potential for AI in health care in a commentary published in JAMA on April 28. In this Q&A, he expands on how he sees AI integrating into health care.

How can AI affect patient care? Are there any dangers in how people are harnessing AI in health care?

The algorithms we're seeing emerge have really popped open Pandora's box and, ready or not, AI will substantially change the way physicians work and the way patients interact with clinical medicine. For example, we can tell our patients that they should not be using these tools for medical advice or self-diagnosis, but we know that thousands, if not millions, of people are already doing it -- typing in symptoms and asking the models what might be ailing them.

A few years ago, I tested one of these language models on medical board exam questions to see how it would do, and it was about 35% accurate. AI has come a long way -- just a few months ago, a chatbot passed the United States Medical Licensing Examination, but that doesn't mean it can reason through a clinical case the way a human physician can. These models don't think. They don't understand. They provide an elaborate way to guess the next word in a sentence, which can seem very convincing. When a human doctor doesn't know the answer to a question, they'll say, 'I'm not so sure about that. Here's my best guess.' Compare that with ChatGPT, which just makes stuff up as it goes. That's a lot more dangerous because it seems so confident.

What are the potential benefits of integrating AI into medicine?

For better or worse, modern physicians don't spend most of the day actually seeing patients; we spend our time filling out electronic medical records and other forms and documents. AI-powered tools might streamline a lot of that work, which would allow us to focus more on taking care of patients and give them more of our time and critical attention.

You can imagine new workflows and models of care delivery that sort and prioritize patient advice. Right now, patients call a nurse, or they send a message to their provider, and somebody gets back to them later. It's entirely feasible that an automated system could be the patient's first point of contact with the health system. Of course, we'll want a doctor overseeing that, but it would streamline a lot of our patient documentation and onboarding processes.

These models won't solve all our problems, but they're poised to help us reach more patients than we could have before.

Is the health care industry ready to deal with the challenges and potential of AI?

We're not ready. This is legitimately a very disruptive technology that will substantially change the way we work in the near future. What happens to medical education when a bot can automatically draft responses that are good enough to pass a medical exam? How do we assess the relevant skills and knowledge we want clinicians to have?

I tell residents and trainees that no one should be using these models as a medical reference yet. Clearly, we need to do a much more rigorous evaluation to assess how well these models perform. There are probably a dozen companies that are already trying to develop an AI model with enough guardrails to be used as a clinical reference, but we're at an awkward stage, because we don't know how to fully evaluate, let alone regulate, these types of technologies. We need to evaluate not the systems themselves, but how the use of such systems affects patients and clinicians. Pandora's box is already open, which leaves us racing to figure out how to adapt clinical practice in a way that is safe, responsible and effective.
