A piece published today in the New York Times examines the importance of replicating the sensitivity of human touch in designing the next generation of robots. Noting that the Stanford Artificial Intelligence Laboratory designed the first robotic arm in the 1960s, reporter John Markoff offers a look at ongoing research around campus, and elsewhere, involving robotics:
Consider Dr. Nikolas Blevins, a head and neck surgeon at Stanford Health Care who routinely performs ear operations requiring that he shave away bone deftly enough to leave an inner surface as thin as the membrane in an eggshell.
Dr. Blevins is collaborating with the roboticists J. Kenneth Salisbury and Sonny Chan on designing software that will make it possible to rehearse these operations before performing them. The program blends X-ray and magnetic resonance imaging data to create a vivid three-dimensional model of the inner ear, allowing the surgeon to practice drilling away bone, to take a visual tour of the patient’s skull and to virtually “feel” subtle differences in cartilage, bone and soft tissue. Yet no matter how thorough or refined, the software provides only the roughest approximation of Dr. Blevins’s sensitive touch.
“Being able to do virtual surgery, you really need to have haptics,” he said, referring to the technology that makes it possible to mimic the sensation of touch in a computer simulation.
Markoff goes on to discuss advances in haptics, “a science that is playing an increasing role in connecting the computing world to humans.”
Previously: Stanford surgeon uses robot to increase precision, reduce complications of head and neck procedures; CyberKnife: From promising technique to proven tumor treatment; and Stanford researchers develop flexible electronic skin