
A look at intelligent listening technologies from Stanford Medicine

Researchers are using AI listening technologies to improve mental health care, diagnose autism and discover adverse drug reactions.

Researchers here are using intelligent listening technologies, built on natural language processing, machine learning and data mining, to deliver better health care. In my story "All Ears," in the latest issue of Stanford Medicine magazine, I provide an overview of three of these breaking-the-sound-barrier projects.

The first is a conversational artificial intelligence agent, similar to Siri and Alexa, that automates some aspects of mental-health assessment and treatment through a website. Stanford researchers are currently laying the scientific foundation for online services that are evidence-based, private and designed with underserved communities in mind.

For another project, researchers have developed a simple set of rules for diagnosing autism based on speech patterns and other behaviors that can be observed in a short home video. By applying these rules, three untrained observers have been able to diagnose the condition in young children with 90 percent accuracy.

Last but not least, a Stanford dermatologist and a bioinformatics professor have figured out how to analyze mentions of skin problems among 8 million online discussions posted by people taking specific anticancer drugs. Using deep-learning software algorithms, the researchers identified a previously undetected adverse drug effect, successfully demonstrating a way to improve health outcomes and reduce the societal costs of drug side effects.

The scientific groundwork laid by these studies can be applied to many other conditions, providing intelligent ways to reduce medical costs and make quality care accessible to more people.

Illustration by David Plunkert
