
What were you just looking at? Oh, wait, never mind – your brain’s signaling pattern just told me

I've blogged previously (here, here and here) about scientific developments that could be construed, to some degree, as advancing the art of mind-reading.

And now, brain scientists have devised an algorithm that spontaneously decodes human conscious thought at the speed of experience.

Well, let me qualify that a bit: In an experimental study published in PLOS Computational Biology, an algorithm assessing real-time streams of brain-activity data was able to tell with a very high rate of accuracy whether, less than half a second earlier, a person had been looking at an image of a house, an image of a face or neither.

Stanford neurosurgical resident Kai Miller, MD, PhD, along with colleagues at Stanford, the University of Washington and the Wadsworth Institute in Albany, NY, got these results by working with seven volunteer patients who had recurring epileptic seizures. These volunteers' brain surfaces had already been temporarily (and, let us emphasize, painlessly) exposed, and electrode grids and strips had been placed over various areas of those exposed surfaces. This was part of an exacting medical procedure performed so that their cerebral activity could be meticulously monitored in an effort to locate the seizures' precise points of origin within each patient's brain.

In the study, the volunteers were shown images (flashed on a monitor stationed near their bedside) of houses, faces or nothing at all. From all those electrodes emanated two separate streams of data - one recording synchronized brain-cell activity, and another recording statistically random brain-cell activity - which the algorithm, designed by the researchers, combined and parsed.

The result: At any given moment, the algorithm could determine whether the subject had just been viewing a face, a house or nothing at all. Specifically, the researchers were able to ascertain whether a "house" or "face" image, or no image at all, had been presented to an experimental subject roughly 400 milliseconds earlier (that's about how long it takes the brain to process the image), give or take 20 milliseconds. The algorithm correctly identified 96 percent of the images shown in the experiment; only about one call in 25 was wrong.
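For readers who like to see the shape of such a pipeline, here is a minimal, hypothetical sketch in Python of how a decoder along these lines could be structured. It is not the authors' actual code: the electrode count, the synthetic features standing in for the "synchronized" and asynchronous data streams, and the choice of a linear discriminant classifier are all illustrative assumptions.

```python
# A minimal, hypothetical sketch (not the study's actual method): two feature
# streams per electrode -- stand-ins for the "synchronized" brain-cell activity
# and the second, noisier stream described above -- are combined into one feature
# vector per ~400 ms window and fed to a simple classifier that labels the window
# "none", "house" or "face".

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

N_ELECTRODES = 40          # illustrative electrode count, not from the study
N_TRIALS_PER_CLASS = 100   # synthetic trials per class
CLASSES = ["none", "house", "face"]

def synthetic_trial(label: int) -> np.ndarray:
    """Fabricate one trial's features: per electrode, one value from each of the
    two data streams, with a class-dependent shift so the classes are separable."""
    stream_a = rng.normal(loc=0.5 * label, scale=1.0, size=N_ELECTRODES)
    stream_b = rng.normal(loc=0.3 * label, scale=1.0, size=N_ELECTRODES)
    # Combine the two streams into a single feature vector, mirroring the way
    # the blog describes the algorithm combining both kinds of brain data.
    return np.concatenate([stream_a, stream_b])

# Build a synthetic dataset and matching labels.
X = np.array([synthetic_trial(label)
              for label in range(len(CLASSES))
              for _ in range(N_TRIALS_PER_CLASS)])
y = np.repeat(np.arange(len(CLASSES)), N_TRIALS_PER_CLASS)

# Train on most trials; hold some out to mimic decoding new moments of data.
train = rng.random(len(y)) < 0.8
clf = LinearDiscriminantAnalysis().fit(X[train], y[train])
print(f"Held-out accuracy on synthetic data: {clf.score(X[~train], y[~train]):.0%}")

# In a real-time setting, each incoming ~400 ms window of electrode data would be
# turned into the same kind of feature vector and passed to clf.predict().
new_window = synthetic_trial(label=2)  # pretend the patient just saw a face
print("Decoded label:", CLASSES[clf.predict(new_window[None, :])[0]])
```

In the actual experiment, of course, the features came from live electrocorticographic recordings rather than synthetic numbers, and the decoding had to keep pace with a continuous, real-time stream of brain data.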

"Although this particular experiment involved only a limited set of image types, we hope the technique will someday contribute to the care of patents who've suffered neurological imagery," Miller told me.

Admittedly, that kind of guesswork gets tougher as you add more viewing possibilities - for instance, "tool" or "animal" images. So this is still what scientists call an "early days" finding: We're not exactly at the point where, come the day after tomorrow, you're walking down the street, you randomly daydream about a fish for an eighth of a second, and suddenly a giant billboard in front of you starts flashing an ad for smoked salmon.

Not yet.

Previously: Mind-reading in real life: Study shows it can be done (but they'll have to catch you first), A one-minute mind-reading machine? Brain-scan results distinguish mental states and From phrenology to neuroimaging: New finding bolsters theory about how brain operates
Photo by Kai Miller, Stanford University
