Research explores liability risk of using AI tools in patient care

As the health care industry grapples with the best way to use artificial intelligence technologies to improve care, many clinicians may wonder what happens if patients are harmed, and who should be held liable.

Large language models like ChatGPT became widely available to the public for the first time last year, and within a few months similar models were already being incorporated into medical record software.

Medicine rarely incorporates cutting-edge technology so rapidly, and the integration of AI tools makes many clinicians anxious, not least about what happens if patients are harmed and who should be held liable.

Research led by Michelle Mello, PhD, JD, professor of law and health policy, is designed to provide some clarity about liability. Because AI software has not yet appeared frequently in legal decisions, Mello and her co-author, PhD/JD candidate Neel Guha, analyzed more than 800 tort cases involving both AI and conventional software, in health care and non-health-care contexts, to see how courts are likely to handle liability claims involving AI.

An article about their research was published Jan. 18 in the New England Journal of Medicine. Mello discussed the findings and what they mean for health care providers in this Q&A, which has been edited and condensed for clarity.

How did you approach this research?

We investigated the extent to which litigation over AI-related personal injuries is already appearing in judicial decisions to understand the extent of liability risk. The signals that emerge from the courts specifically related to AI are pretty faint, but there are enough cases related to non-AI-enabled software causing injury to give us a sense of how courts are likely to approach these kinds of claims in the future.

That's important because lawyers tend to give advice that's very conservative. We didn't find that lawyers are advising clients not to use AI in medical settings, but we found presentation materials suggesting they are strongly warning clients about the liability risks of using AI in general. In my opinion, this could lead to overly conservative decision making -- not doing things that could really help patients.

How should health care providers be thinking about AI liability?

Clinicians and hospitals need to do a fine-grained assessment of risk versus benefits. If we don't have an understanding of these technologies and how they are used on patients, we don't really have the ability to influence them in a way that is patient-centered. If liability concerns are standing in the way of adoption of technology that could help patients, we need to understand whether that concern is really proportionate to the risk.

There's also a big question about who is liable for errors. For instance, the terms of use for ChatGPT say that the model should not be used to make decisions that could affect a person's safety and welfare, including medical decisions, and that OpenAI (the company that makes ChatGPT) disclaims liability for any injuries. This implies that developers are concerned about liability, and one natural way for them to address that concern is to shift the liability risk to others.

Historically, courts have not enforced disclaimers where it's a product maker asserting immunity for causing personal injury. There is still a question about whether AI software is a product at all. Traditionally, a product is defined as a physical thing, and software hasn't always been included. That's changing as it has become embedded in things like cars, and there's already been case law defining autonomous driving software as a product.

How should liability risk be assessed?

We created a framework that health care organizations and clinicians can use to assess liability risk. There are four main factors to consider: the likelihood of model errors, the likelihood that humans or another system will detect errors and prevent harm, the potential harm if errors are not caught, and the likelihood that injuries would result in compensation.
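
One way to see how the four factors interact is as a rough expected-risk calculation: errors that slip past human or system review, weighted by how severe the resulting harm could be and how likely an injury is to lead to a compensation claim. The sketch below is purely illustrative and is not part of the published framework; the factor names, example numbers, and the multiplicative scoring are assumptions chosen only for demonstration.

```python
# Illustrative only: a rough expected-liability score built from the four
# factors named above. The scales, example numbers, and the multiplicative
# combination are assumptions for demonstration, not the authors' method.
from dataclasses import dataclass


@dataclass
class AiToolRiskProfile:
    p_model_error: float     # likelihood the model produces an erroneous output
    p_error_caught: float    # likelihood a clinician or another system catches the error
    harm_if_uncaught: float  # potential harm if the error reaches the patient (assumed 0-10 scale)
    p_compensation: float    # likelihood a resulting injury leads to compensation

    def expected_liability_score(self) -> float:
        """Errors that slip past review, weighted by harm and claim likelihood."""
        return (self.p_model_error
                * (1.0 - self.p_error_caught)
                * self.harm_if_uncaught
                * self.p_compensation)


# Hypothetical comparison of two tools; all numbers are invented.
sepsis_alert = AiToolRiskProfile(p_model_error=0.10, p_error_caught=0.90,
                                 harm_if_uncaught=8.0, p_compensation=0.30)
note_drafting = AiToolRiskProfile(p_model_error=0.20, p_error_caught=0.98,
                                  harm_if_uncaught=2.0, p_compensation=0.05)

print(f"sepsis alert tool:  {sepsis_alert.expected_liability_score():.3f}")
print(f"note-drafting tool: {note_drafting.expected_liability_score():.3f}")
```

In practice, such inputs would come from model validation studies and an analysis of the clinical workflow rather than guesses, but the comparison shows why different tools can carry very different levels of risk.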

We recommend that health care providers use this framework to evaluate AI tools. It's important not to lump all of them together because some AI tools have a higher potential to hurt patients than others and pose different levels of liability risk. We also recommend health care providers push back on software liability disclaimers and bargain for terms of use that minimize the purchaser's liability risk.

Providers should require developers to supply the information necessary for risk assessment and monitoring, and they should seek indemnification clauses so that developers assume liability for errors in the model. If health care providers are developing their own tools in-house, they need to ensure that they have adequate liability insurance.

What are the main takeaways for health care providers?

Our message is that health care providers should be thoughtful and quite analytical about how much liability risk is likely to accompany particular AI tools and make decisions based on whether the risk outweighs the benefit. That's an area where Stanford Medicine is really a national leader: it has created a centralized review process to evaluate any tool proposed for use on patients.

At the same time, being overly cautious about liability risk doesn't lead to care that is patient-centered. So, one of my hopes is that this research will encourage hospitals and clinicians to think about risks beyond AI -- to ensure that everything they do is benefiting patients and not hurting them.
