
Is that blood test really necessary? AI could help decide

Researchers at Stanford have devised an algorithm that predicts how likely a repeated diagnostic test is to yield useful information.

Being thorough in medicine is a must -- but doctors concerned about over-testing are raising a new question: Is it possible to be too thorough?

Jonathan Chen, MD, PhD, assistant professor of medicine, says the answer is yes, particularly in the context of diagnostic blood testing.

Blood testing is a cornerstone of diagnostic medicine, but there's an increasing recognition that too much blood testing -- such as repeated tests -- yields diminishing returns. Not only do the results of many repeated blood tests remain unchanged, but administering the same test over and over can also have detrimental effects on patients.

"The financial downsides of unnecessary testing are often the most obvious, but there are a lot of other drawbacks, namely, the burden it can have on the patients themselves," said Jason Hom, MD, clinical assistant professor of medicine. "Patients in the hospital sometimes have to wake up at all hours of the morning to take these tests, making them delirious. And sometimes these tests are done so often the patients become anemic."

Now, by harnessing machine learning and a trove of de-identified patient data from Stanford, the University of California, San Francisco and the University of Michigan, Chen, Hom, and a team of researchers and physicians have taken an early step toward addressing that problem: creating an algorithm that can predict whether a given blood test will come back "normal."

A paper describing the algorithm appears in JAMA Network Open. Chen is the senior author of the paper and Song Xu, PhD, former postdoctoral researcher at Stanford, is the lead author.

Often, excess blood tests occur as a result of the just-to-be-sure argument -- as in, we'll run another test, just to be sure the result is what we think it is.

The need for reassurance is, in part, likely due to a lack of guidelines on what constitutes grounds for multiple blood tests.

"The general train of thought on re-ordering a lab test is that doctors probably shouldn't do it -- unless it's 'clinically appropriate,'" said Chen. "So what's a doctors supposed to do with that? It's not at all clear."

"So what we're trying to do with our algorithm is empower physicians with quantitative information so that they're not stuck guessing." Chen and his team emphasize, however, that the algorithm is not meant to make decisions for the doctor or patients -- it's a resource that provides evidence, which should be factored into each individual patient's case.

Simply put, the algorithm tells the doctor how likely it is that another test will produce a result that's different than the first one. "If you're not going to get new information from the test, then there's no point," said Chen.

Test runs of the algorithm have already started to reveal oft-repeated tests that doctors could start cutting back on immediately. Data used in the pilot study showed that some of these tests -- such as the blood test for hemoglobin A1c, which reflects average blood sugar over roughly the preceding three months -- are sometimes ordered so close together that it's physiologically impossible for the value to have changed.

"Those are the low-hanging fruit, the repeat tests we can start to weed out immediately," said Chen.

To train the algorithm, Chen and his team took de-identified patient data -- including vitals, medical conditions, symptoms, lab test results and more -- and used it to model how often a blood test reports something abnormal or unexpected, given a person's medical characteristics.
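In broad strokes, that is a supervised learning setup: tabular patient features in, a binary "normal" label out. The sketch below uses synthetic data and scikit-learn's RandomForestClassifier as stand-ins for the study's actual features and models.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the de-identified dataset: each row is one
    # ordered test, columns are patient features (vitals, diagnoses,
    # prior results, ...).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 20))
    # Tie the label loosely to the features so there is signal to learn;
    # y = 1 means the test came back "normal."
    y = (X[:, 0] + rng.normal(scale=2.0, size=5000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # The output is a probability that the repeat test will be normal --
    # i.e., unlikely to add new information.
    p_normal = model.predict_proba(X_test)[:, 1]
    print("AUROC:", round(roc_auc_score(y_test, p_normal), 2))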

They started by training the algorithm on data from Stanford, testing its ability to predict results for patients at Stanford, UCSF and the University of Michigan. To further verify the algorithm's accuracy, and to show that it could be applied at other institutions too, they also swapped the training data, separately training the algorithm on data from UCSF and then from the University of Michigan.

Although accuracy varied somewhat from site to site, all three versions of the algorithm, regardless of where the training data came from, made accurate predictions.
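A rough sketch of that swapped-training protocol, again on synthetic data: fit a model on each institution's records in turn, then score it against all three sites. LogisticRegression here is a placeholder, not the study's model, and the printed numbers are synthetic, not the paper's.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)

    def synthetic_site(n_rows):
        """Stand-in for one institution's de-identified (features, labels) data."""
        X = rng.normal(size=(n_rows, 20))
        y = (X[:, 0] + rng.normal(scale=2.0, size=n_rows) > 0).astype(int)
        return X, y

    sites = {name: synthetic_site(3000)
             for name in ("Stanford", "UCSF", "Michigan")}

    # Train on each site in turn, then evaluate on all three.
    for train_site, (X_tr, y_tr) in sites.items():
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        for eval_site, (X_ev, y_ev) in sites.items():
            p = model.predict_proba(X_ev)[:, 1]
            print(f"train={train_site:<9} eval={eval_site:<9} "
                  f"AUROC={roc_auc_score(y_ev, p):.2f}")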

"This is a good first step to show that it's indeed feasible to use the data in this way to help reduce unnecessary lab testing," said Chen. "But ultimately, our idea is to have institutions use our method and technology but to develop their own algorithms based on their own data to generate the highest level of accuracy possible."

