People have been imagining what would happen if we stuck computers in our brains for a surprisingly long time — since at least 1879, in fact, when Edward Page Mitchell first published “The Ablest Man in the World,” in which a man becomes a political genius by dint of a clockwork-based, intelligence-enhancing machine implanted under his skull.
As wild as Mitchell’s vision is, what’s perhaps most striking in retrospect is what it leaves out — namely, any mention of how a machine would communicate with the brain. Today, brain-computer or brain-machine interfaces are a reality, and, as I describe in a new feature story, making them a reality depended intimately on getting better at interpreting the language of the brain.
Of course, not everyone agrees on how much of the language of the brain one needs to understand, or whether it’s even feasible to do more than pick out a few pieces here and there. In one of the last interviews I did before the story was published, I was walking with Paul Nuyujukian, MD, PhD, from his office to his lab and talking about the language metaphor, when he stopped. He wanted to emphasize that, to develop prosthetic devices and, he hopes, to treat neurological disease someday, he may not need to learn much of the brain’s local dialect, so to speak; he can make progress without completely understanding everything that happens in one region of the brain.
Nuyujukian’s research to date has focused on extracting signals from the brain that people could then use to control mouse pointers on a computer screen:
[a] task, as Nuyujukian put it, [that] was a bit like listening to a hundred people speaking a hundred different languages all at once and then trying to find something, anything, in the resulting din one could correlate with a person’s intentions.
Nuyujukian’s point isn’t that there’s no value in better understanding the language of the brain — there is, he says, but for now, teasing out one little sliver may be enough. That may not be the case, however, for all prosthetics or all diseases.
[O]ther tasks will require greater fluency, at least according to E.J. Chichilnisky [PhD], a professor of neurosurgery and of ophthalmology, who thinks speaking the brain’s language will be essential when it comes to helping the blind to see. Chichilnisky, the John R. Adler Professor, co-leads the NeuroTechnology Initiative, funded by the Stanford Neuroscience Institute, and he and his lab are working on sophisticated technologies to restore sight to people with severely damaged retinas — a task he said will require listening closely to what individual neurons have to say, and then being able to speak to each neuron in its own language.
There’s a lot more to the brain-machine interface story than I had room for in the feature — for one thing, there wasn’t space to tell you about the exciting work underway on new physical interfaces with the brain — but I want to draw attention to two other pieces that you shouldn’t miss.
My colleagues and I wondered whether people around Stanford University actually knew what a brain-machine interface was — so I grabbed my recorder and my microphone and walked around campus for a while asking people about it. I got more than I expected, including some important ideas about the broader social issues that brain-machine interfaces raise.
Inspired in part by those conversations, I wrote a companion piece on the ethics of developing brain-machine interfaces. As you might expect, there are some serious concerns about where the field is headed, but also some real enthusiasm for the possibilities the future will bring.
Previously: The story behind the development of a brain-computer interface and Retina fixes: Two Stanford scientists are developing devices to restore vision
Photo by Guo Mong