
How to regulate AI? Bioethicist David Magnus on medicine’s critical moment

The applications for AI in medicine are being explored deeply at Stanford Medicine and elsewhere. Putting guardrails in place now is crucial.

Supercharged AI tools are knocking at biomedicine's door. Predictions that they'll solve our most pressing health care challenges compete with doomsday scenarios for daily headlines. The White House Office of Science and Technology Policy recently proposed a "Blueprint for an AI Bill of Rights" that lays out five principles for guiding the safe and ethical deployment of AI in the real world.

These principles apply to medical research and health care as well as to commercial and private products. They include the right to safe and effective AI systems, protection from algorithmic discrimination, protection from data privacy violations, the right to opt out of use, and the right to know when an AI is being used and how and why it might affect you.

Commendable as they are, these proposed "rights" lack federal enforceability, according to a new Science Translational Medicine paper. One of the paper's lead authors, David Magnus, the Thomas A. Raffin Professor of Medicine and Biomedical Ethics, said that, to achieve the aims of the AI Bill of Rights, multiple regulatory mechanisms across various industry sectors and agencies are urgently needed.

Magnus, director of the Stanford Center for Biomedical Ethics, played key roles in creating regulatory frameworks for human stem cell research and other emerging technologies. Now, he discusses how to navigate the AI landscape as it plays a growing role in health care and medicine.

How can AI help physicians, researchers, and patients, if deployed thoughtfully and cautiously?

AI could help us see dramatic improvements in efficiency and access to different aspects of health care. It could lead to improvements in research and expedite the development of new treatments and pharmaceuticals. We could see dramatic improvements in the implementation of genetic research and see an acceleration of precision medicine.

For example, by quickly analyzing huge amounts of genomic data, AI could help tailor medical treatments to an individual cancer patient's genetic makeup, leading to more effective treatments with fewer side effects. AI could also bring greater equity to the health care system by, say, identifying and eliminating biases in health care decision-making and by improving remote access to expertise, and thus early diagnosis, for underserved populations.

But we want to be careful about expectations. Big advances such as these may still be decades away.

What are the challenges?

AI can be biased. It has the potential to improve equity, but it also has the potential to reinforce and amplify existing inequality in access to health care. Biases already in our data -- for instance, against assigning donor organs to developmentally disabled patients, who in fact do very well after transplant -- could wind up getting perpetuated if an algorithm incorporated that biased data into its evaluation process.

There are also dual-use risks, when technology developed for a good purpose is commandeered to do something harmful. For example, scientists were caught off guard when some AI researchers, who were using AI models to identify new targets for drug development, realized they could easily turn their tools into a bioweapons design factory.

Dual use has already gotten a lot of attention in synthetic biology and gain-of-function research, but it has only recently emerged in the AI space, where I think it's potentially an even bigger problem.

You and your colleagues just published a paper in Science Translational Medicine about how best to approach mitigating the risks of AI with good policies and regulations. What are you proposing?  

We need a flexible regulatory system that maximizes the benefits and simultaneously mitigates the dangers. It's not obvious how that's going to work here, compared, say, with the EU, where governments actually can pass (and have already passed) regulatory legislation. Here, that's just not going to happen. Our federal government, for better or worse, was set up to make it very hard to pass federal legislation. So, our regulatory system is necessarily going to be dispersed.

While Food and Drug Administration regulations will be key, the agency has already drawn firm boundaries around what it regulates and what it doesn't. It regulates some AI health care applications, but not all. Similarly, while Institutional Review Boards are good at protecting human subjects, when it comes to dual-use applications they aren't even allowed to consider dire downstream ramifications in their review process. They may only consider how the research itself could harm a study's participants.

Instead, our paper focuses on less centralized levers that already exist and can be employed to both maximize the benefits and mitigate the risks. For example, funders can have a huge influence. The National Institutes of Health can play a role by attaching contingencies to funding for AI research; this could select against studies with dual-use potential, for example, or studies that do not go far enough to protect privacy. The National Science Advisory Board for Biosecurity already addresses dual-use concerns for synthetic biology and gain-of-function research. We could create a parallel body to the NSABB or extend its scope to include AI.

More than a decade ago we created Stem Cell Research Oversight committees at research institutions across the country to address ethical and safety concerns that didn't fall under the purview of traditional IRBs. We should replicate that model for AI in biomedicine.

On the clinical side, the Centers for Medicare and Medicaid Services could have a huge regulatory impact by imposing regulations on the use of AI among the practitioners and institutions it compensates for medical services to patients. The Joint Commission on Accreditation of Healthcare Organizations, which can withhold accreditation from institutions that don't meet its ethical and safety standards for AI, could also play an important role. Finally, there are powerful state agencies, like the California Department of Public Health, that can set ethical and safety standards for the use of AI in state medical practice.

It's a lot easier to pass state regulatory legislation, and legislative proposals are already underway. For example, legislation just proposed in California would subject large AI systems to transparency requirements and would establish liability for companies that fail to take sufficient precautions against harm done by their AI products. There's also something called the Uniform Law Commission, which could amplify the effects of good state legislation. Some states can essentially say, 'We have effective practices, and we think all the states ought to pass similar laws.'

It seems kludgy, but a real advantage of a decentralized approach is that it can be nimble, which will be key in such a quickly evolving field. For example, funders like the NIH can change their funding policies quickly in response to emerging challenges or threats, and that can shift norms, behaviors, and practices to address them.

How are researchers at Stanford Medicine approaching AI regulation?

We are experimenting with something called the Ethics and Society Review. All Stanford Institute for Human-Centered Artificial Intelligence (HAI) seed grants and Hoffman-Yee grants, which fund AI research, must include an ethics and society review statement that captures, for example, potential downstream issues.

As worried as I am about not having enough regulation, I'm equally concerned about disabling the technology with too much regulation. Getting the balance right is tricky, but necessary, and possible.

