What better way to highlight how cochlear implants work than by sharing the story of 1-year-old Lucile Ross, an utterly adorable participant in Stanford’s growing new “teletherapy” program called BabyTalk?
In an Inside Stanford Medicine story published today, I write about the severely hearing-impaired Lucile and the program that helps her and 17 other toddlers learn to use their surgically implanted cochlear implants to listen and speak. I describe what happened one day in June, about a month after her implant was first activated, when Lucile was busily eating Cheerios and suddenly grabbed her ear in astonishment. She had just heard something, prompting her hearing therapist to comment, “That was beautiful.”
My story goes on to discuss the technology behind the implants, the kinds of “sounds” that the tiny patients first hear, and how the children begin to interpret these sounds and learn how to speak:
A cochlear implant is an electronic device. One part includes electrodes that have been surgically implanted in the inner ear; another part, which sits outside the ear, includes a microphone. It can help the profoundly deaf and severely hard of hearing learn to listen and speak, and is particularly successful if implanted prior to the age of 3. The first three years of a child’s life are the most critical period for the development of speech and language.
The device works by converting sounds picked up by the microphone, which sits behind the ear, into electrical signals delivered to the inner electrodes, which stimulate the auditory nerve and send sound information to the brain. The device doesn’t restore normal hearing; new recipients have described what they “hear” as sounding robotic or like ducks quacking, or just plain weird. Instead, it can give a deaf person a useful representation of sounds in the environment and help him or her to understand speech. But in order to do that, training is needed.
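The story doesn’t detail the signal processing inside the device, but the core idea behind most cochlear implant processors is splitting sound into frequency bands, with each band driving a different electrode along the cochlea. The sketch below is purely illustrative (real processors use far more sophisticated filter banks and compression strategies); it uses a naive DFT to show how a pure tone would concentrate its energy in one band, i.e., one hypothetical “electrode channel.”

```python
import math

def band_energies(samples, sample_rate, bands):
    """Naive DFT: total spectral energy of the signal in each
    (low_hz, high_hz) band. One band stands in for one electrode."""
    n = len(samples)
    energies = []
    for low, high in bands:
        total = 0.0
        for k in range(n // 2):
            freq = k * sample_rate / n
            if low <= freq < high:
                # DFT bin k, computed directly (fine for a tiny demo)
                re = sum(samples[t] * math.cos(2 * math.pi * k * t / n)
                         for t in range(n))
                im = sum(samples[t] * math.sin(2 * math.pi * k * t / n)
                         for t in range(n))
                total += re * re + im * im
        energies.append(total)
    return energies

# A 500 Hz tone should "light up" the channel covering 250-750 Hz.
rate = 8000
tone = [math.sin(2 * math.pi * 500 * t / rate) for t in range(256)]
bands = [(0, 250), (250, 750), (750, 2000)]  # toy electrode bands
energies = band_energies(tone, rate, bands)
```

Here the middle band dominates, mirroring how a single electrode region would be stimulated most strongly by that pitch; the band boundaries and sample rate are arbitrary choices for the demo, not values from any actual device.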
BabyTalk delivers that training – via the iPad – to geographically isolated patients and their families. For my story, I joined Lucile’s teacher, Sharon Nutini, at the Jean Weingarten Peninsula Oral School for the Deaf in Redwood City, Calif., Stanford’s partner in the BabyTalk program, for one of the virtual lessons with Lucile and her dad, Lyle, at home in Marin County. I had to agree with what Lucile’s mother had earlier told me over the phone: Watching Lucile learn to speak is an amazing experience. More from the piece:
“She’s giving us signs that she understands us, that she’s learning language,” Lizzie Ross said. “We think that she’s going to catch up quickly. Sharon gives us exercises that we do during the week. We sing songs that elicit certain language, we work on the recognition of animal sounds — the dog barking. There’s a lot of repetition, a lot of fun play stuff with her own toys — just what she would normally do on a daily basis.
“We knew it would be a hurdle. But for us, it’s been worth every minute of it. We were lucky that the opportunity was out there for her to hear and develop speech at a young age. It’s been a pretty amazing experience.”
Previously: Using the iPad to connect ill newborns, parents; “What’s that?” Stanford researchers identify cells important to hearing loss; Cochlear implants could help developmentally delayed infants, says Stanford/Packard study; and In people born deaf, auditory cortex takes on touch and vision, study finds
Photo of Lucile and her father, Lyle, by Norbert von der Groeben