When we navigate, our brains blend cues from multiple senses to estimate our speed and position in space. How we weigh these cues depends on where we are and how fast we seem to be moving.
For example, imagine yourself making your way through a featureless desert versus along an urban street. In the desert, you might rely more on your physical motion (the number of steps you've taken), whereas in the city you might rely more on visual features to determine your location in space and rate of motion. Even within a single environment, the brain can shift its attention back and forth between these two different types of navigational cues.
If you were walking through New York City at night, you'd probably find yourself using visual features like street signs or buildings to determine how fast you're traveling, as well as your direction and position in space. But you'd probably never rely on visual features like the moon, which moves so slightly relative to your movement down the street that, for all practical purposes, it conveys no information about how fast you're traveling, the direction you are headed, or your current position.
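To put rough numbers on that intuition: a landmark's apparent drift across your visual field shrinks with its distance, so the moon, nearly 400,000 kilometers away, barely moves at all as you walk. The sketch below is purely illustrative; the walking speed and distances are assumed round numbers, not figures from the study.

```python
# Apparent angular velocity of a landmark as you walk past it: omega ~ v / d
# (small-angle approximation; all numbers are rough illustrations).
walking_speed = 1.5          # m/s, a typical walking pace
street_sign_distance = 10.0  # m, a nearby urban landmark
moon_distance = 3.84e8       # m, average Earth-Moon distance

omega_sign = walking_speed / street_sign_distance  # ~0.15 rad/s: visible drift
omega_moon = walking_speed / moon_distance         # ~4e-9 rad/s: effectively frozen

print(f"street sign: {omega_sign:.2e} rad/s, moon: {omega_moon:.2e} rad/s")
```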
While we know our brain can dynamically switch between the cues it uses to estimate our position in space, the underlying principles, circuits and algorithms by which it does so have remained unknown. To answer these questions, Stanford neuroscientist Lisa Giocomo, PhD, and her colleagues examined the navigational behavior and brain-activity patterns of mice traveling through a virtual reality environment.
In a study published in Nature Neuroscience, they found that as the speed of the visual scene in which a mouse was immersed increased, the mouse's behavior and neural activity started to "care" more about visual cues than locomotion cues. The researchers then built a computational model that mathematically defines the point at which the mouse's behavior and brain switch from relying on locomotion cues to relying on visual cues.
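The paper's actual model isn't reproduced here, but a simple weighted-average cue-combination scheme gives the flavor: the estimate blends the two cues, and letting the visual weight grow with scene speed produces exactly this kind of crossover. Everything in the sketch below (the sigmoid form, the parameters `k` and `v_switch`, and all numbers) is an illustrative assumption, not the authors' fitted model.

```python
import numpy as np

def combined_velocity_estimate(v_locomotion, v_visual, visual_speed,
                               k=1.0, v_switch=10.0):
    """Blend locomotion and visual velocity cues by a speed-dependent weight.

    Hypothetical sketch: the weight on the visual cue rises sigmoidally
    with the speed of the visual scene, so slow scenes are dominated by
    locomotion cues and fast scenes by vision. v_switch marks the
    crossover point where the visual weight reaches 0.5, i.e. the speed
    at which the estimate "switches" which cue it relies on.
    """
    w_visual = 1.0 / (1.0 + np.exp(-k * (visual_speed - v_switch)))
    return w_visual * v_visual + (1.0 - w_visual) * v_locomotion

# Example: at slow scene speeds the estimate tracks the locomotion cue;
# at fast scene speeds it tracks the visual cue.
for scene_speed in (2.0, 10.0, 25.0):
    est = combined_velocity_estimate(v_locomotion=5.0, v_visual=8.0,
                                     visual_speed=scene_speed)
    print(f"scene speed {scene_speed:5.1f} -> velocity estimate {est:.2f}")
```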
"We've shown that mice actually estimate their velocity based on locomotion or visual cues depending on whether visual information moves slow or fast," says Giacomo. "This provides a new framework in which to understand how complex cues are integrated in the brain to form a unified estimate of our position in space."
Plus, it gives us something to think about during that split second when we're shifting from warp-speed to a sudden screeching stop while navigating among four-wheeled objects in our never-ending battle to avoid getting squashed in one of the recurring traffic jams that spice up our commute home from work.
Photo by nameng