Stanford researchers probe the ethics of using artificial intelligence in medicine

Physicians should consider the ethical challenges of using artificial intelligence in making patient care decisions, three Stanford University School of Medicine researchers say in a perspective piece in The New England Journal of Medicine.

No encounter I've ever had with my physician has been recorded, nor has there ever been a scribe -- human or electronic -- taking notes on everything we've said.

Still, I'm not naive enough to believe that information from my record isn't ending up in some huge database to be tapped for making predictions about outcomes for other patients. In fact, I know that it is already being shared. And I'm OK with that.

But now, three Stanford University School of Medicine researchers are calling for a national conversation about the ethics of using artificial intelligence in medicine today.

In a perspective piece appearing in The New England Journal of Medicine, the authors acknowledge the tremendous benefits that machine-learning tools can bring to patient health, but they say those benefits can't be fully realized without careful analysis of how the tools are used.

"Because of the many potential benefits, there's a strong desire in society to have these tools piloted and implemented into health care," said lead author Danton Char, MD, assistant professor of anesthesiology, perioperative and pain medicine, in our news release. "But we have begun to notice, from implementations in non-health care areas, that there can be ethical problems with algorithmic learning when it's deployed at a large scale."

The press release explains:

David Magnus, PhD, senior author of the piece and director of the Stanford Center for Biomedical Ethics, says bias can play into health data in three ways: human bias; bias that is introduced by design; and bias in the ways health care systems use the data.

'You can easily imagine that the algorithms being built into the health care system might be reflective of different, conflicting interests,' says Magnus.

Among the authors' concerns is that data could become an "actor" in the doctor-patient relationship and in clinician decision-making, with the risk that data is unintentionally given more authority than human experience and knowledge.

"The one thing people can do that machines can't do is step aside from our ideas and evaluate them critically," Char told me.

Another challenge is that clinicians might not understand the intentions or motivations of the people who designed the machine-learning tools they rely on. For example, a system might be built to cut costs or to recommend certain drugs, tests or devices over others, and clinicians wouldn't necessarily know that.

The authors acknowledge the social pressure to incorporate the latest tools in order to provide better health outcomes for patients. But they urge physicians to become educated about the construction of machine-learning systems and about their limitations.

"Remaining ignorant about the construction of machine-learning systems or allowing them to be constructed as black boxes could lead to ethically problematic outcomes," they write.

Co-author Nigam Shah, MBBS, PhD, associate professor of medicine, added that models are only as trustworthy as the data being gathered and shared: "Be careful about knowing the data from which you learn." 

