As a surgical resident at Stanford, Jeff Jopling, MD, assumed he'd get plenty of pointers from his senior colleagues in the operating room about how well he was performing during a procedure. That way, he could make adjustments and improve his hands-on skills. But he was surprised at how little input he received.
“Even when I do the 1,000 operations for my training, I get very little feedback on most of those surgeries,” Jopling said. “I was surprised by that as a trainee. I thought it would be like a sport or music, where you have a coach saying, ‘Do this. Don’t do this.’ Exceptional teachers provide that, but not everyone does. Not everyone can explain what you are doing well or not doing well.”
That made him wonder: Might there be another way to provide an objective evaluation of a surgeon’s manual skills?
He and his colleagues found the answer in a new project that uses artificial intelligence to help assess a surgeon's technique. In the project, which I wrote about in the new issue of Stanford Medicine magazine, the researchers were able to program a computer to "watch" videos of a gallbladder removal and follow the movement of the surgeon's hands. The scientists also were able to track which surgical tools were used over time, as well as when and how far they moved within the surgical field.
“With that, we were able to gain a sense of a surgeon’s performance overall,” said Amy Jin, 18, the first author on a paper describing the project. Jin, who joined the Stanford project as a high school student, helped refine the algorithm that enabled the computer to recognize and track specific surgical tools.
It was a stunning accomplishment, recognized with the top research prize at a major international symposium on artificial intelligence in December 2017. Jin, attending her first-ever scientific conference, presented the work to a crowd of computer scientists and medical professionals.
Arnold Milstein, MD, PhD, director of Stanford’s Clinical Excellence Research Center, said the technology could be useful not only for trainees but also for experienced surgeons.
“This could make a big difference when manual skills matter,” said Milstein, a co-author on the paper. “It provides a path for tailoring the duration of surgical training to how quickly residents learn. And it opens the way to a more objective approach to periodically certifying a surgeon’s technical skill or alerting a surgeon when he or she needs a restorative break during a long procedure.”
The researchers are now testing the technology on a much larger scale, hoping to evaluate and refine it using as many as 1,000 videos of different kinds of surgeries.
The work is a collaboration between Milstein’s group and researchers at the Stanford Artificial Intelligence Lab, including Serena Yeung, PhD, who was Jin’s mentor during the project.
Photo by Timothy Archibald