Published by
Stanford Medicine

Behavioral Science, Evolution, Imaging, Neuroscience, Research, Stanford News, Surgery

In a human brain, knowing a face and naming it are separate worries

Viewed from the outside, the brain’s two hemispheres look like mirror images of one another. But they’re not. Take two structures, Wernicke’s area and Broca’s area, that are essential to language processing in the human brain: both hemispheres house them, yet only the left-hemisphere versions do the language work (at least in the great majority of right-handers’ brains; with lefties it’s a toss-up).

Now it looks as though that right-left division of labor in our brains applies to face perception, too.

A couple of years ago I wrote and blogged about a startling study by Stanford neuroscientists Josef Parvizi, MD, PhD, and Kalanit Grill-Spector, PhD. The researchers recorded brain activity in epileptic patients whose seizures were unresponsive to drug therapy and who had therefore undergone a procedure in which a small section of the skull was removed and plastic packets containing electrodes were placed on the surface of the exposed brain. This was done so that, when seizures inevitably occurred, their exact point of origin could be identified. While the patients waited for this to happen, they consented to let the scientists perform an experiment.

In that experiment, selective electrical stimulation of another structure in the human brain, the fusiform gyrus, instantly distorted an experimental subject’s perception of Parvizi’s face – so much so, in fact, that the subject exclaimed, “You just turned into somebody else. Your face metamorphosed!”

Like Wernicke’s and Broca’s areas, the fusiform gyrus is found on each side of the brain. In animal species with brains fairly similar to our own, such as monkeys, stimulation of either the left or the right fusiform gyrus appears to induce distorted face perception.

Yet in a new study of ten such patients, conducted by Parvizi and colleagues and published in the Journal of Neuroscience, face distortion occurred only when the right fusiform gyrus was stimulated. Other behavioral studies, as well as clinical reports on patients suffering brain damage, have shown a relative right-brain advantage in face recognition and a predominance of right-side brain lesions in patients with prosopagnosia, or face blindness.

Apparently, the left fusiform gyrus’s job description has changed over the course of our species’ evolution. Humans’ acquisition of language over evolutionary time, the Stanford investigators note, required redirecting some brain regions’ roles toward speech processing. It seems one piece of that co-opted real estate was the left fusiform gyrus. The scientists suggest (and other studies hint) that as language processing lateralized to the brain’s left hemisphere, face-recognition sites in that hemisphere were reassigned to new, language-related functions that nonetheless retain a face-processing connection – retrieving the name of a person whose face you’re looking at, for example – while visual perception of that face was left to the right hemisphere.

My own right fusiform gyrus has been doing a bang-up job all my life and continues to do so. I wish I could say the same for my left side.

Previously: Metamorphosis: At the push of a button, a familiar face becomes a strange one, Mind-reading in real life: Study shows it can be done (but they’ll have to catch you first), We’ve got your number: Exact spot in brain where numeral recognition takes place revealed and Why memory and math don’t mix: They require opposing states of the same brain circuitry
Photo by AlienGraffiti

Applied Biotechnology, In the News, Infectious Disease, Microbiology, Public Safety

How-to manual for making bioweapons found on captured Islamic State computer

Last week I came across an article, in the usually somewhat staid magazine Foreign Policy, with this subhead:

Buried in a Dell computer captured in Syria are lessons for making bubonic plague bombs and missives on using weapons of mass destruction.

That got my attention. Just months ago, I’d written my own article on bioterrorism for our newspaper, Inside Stanford Medicine. So I was aware that, packaged properly, contagious one-celled pathogens can wipe out as many people as a hydrogen bomb, or more. Not only are bioweapons inexpensive (they’ve been dubbed “the poor man’s nuke”), but the raw materials that go into them – unlike those used for creating nuclear weapons – are all around us. That very ubiquity, were a bioweapon to be deployed, could make fingering the perp tough.

The focal personality in my ISM article, Stanford emergency-medicine doctor and bioterrorism expert Milana Trounce, MD, had already convinced me that producing bioweapons on the cheap – while certainly no slam-dunk – was also not farfetched. “What used to require hundreds of scientists and big labs can now be accomplished in a garage with a few experts and a relatively small amount of funding, using the know-how freely available on the internet,” she’d said.

This passage in the Foreign Policy article rendered that statement scarily apropos:

The information on the laptop makes clear that its owner is a Tunisian national named Muhammed S. who joined ISIS [which now calls itself "Islamic State"] in Syria and who studied chemistry and physics at two universities in Tunisia’s northeast. Even more disturbing is how he planned to use that education: The ISIS laptop contains a 19-page document in Arabic on how to develop biological weapons and how to weaponize the bubonic plague from infected animals.

I sent Trounce a link to the Foreign Policy article. “There’s a big difference between simply having an infectious disease agent and weaponizing it,” she responded in an email. “However, it wouldn’t be particularly difficult to get experts to help with the weaponization process. The terrorist has picked a good infectious agent for creating a bioweapon. Plague is designated as a Category A agent along with anthrax, smallpox, tularemia, botulinum, and viral hemorrhagic fevers. The agents on the Category A list pose the highest risk to national security, because they: 1) can be easily disseminated or transmitted from person to person; 2) result in high mortality rates and have the potential for major public-health impact; 3) might cause public panic and social disruption; and 4) require special action for public-health preparedness.”

Islamic State’s interest in weaponizing bubonic plague should be taken seriously. Here’s one reason why (from my ISM article):

In 1347, the Tatars catapulted the bodies of bubonic-plague victims over the defensive walls of the Crimean Black Sea port city now called Feodosia, then a gateway to the Silk Road trade route. That effort apparently succeeded a bit too well. Some of the city’s residents escaped in sailing ships that, alas, were infested with rats. The rats carried fleas. The fleas carried Yersinia pestis, the bacterial pathogen responsible for bubonic plague. The escapees docked in various Italian ports, from which the disease spread northward over the next three years. Thus ensued the Black Death, a scourge that wiped out nearly a third of western Europe’s population.

Previously: Microbial mushroom cloud: How real is the threat of bioterrorism? (Very) and Stanford bioterrorism expert comments on new review of anthrax case
Photo by Les Haines

Behavioral Science, Chronic Disease, Mental Health, Neuroscience, Research, Stanford News

Can Alzheimer’s damage to the brain be repaired?

In my recent Stanford Medicine article about Alzheimer’s research, “Rethinking Alzheimer’s,” I chronicled a variety of new approaches by Stanford scientists to nipping Alzheimer’s in the bud by discovering what’s gone wrong at the molecular level long before the disorder’s more obvious symptoms emerge.

But Stanford neuroscientist Frank Longo, MD, PhD, a practicing clinician as well as a researcher, has another concern. In my article, I quoted him as saying:

Even if we could stop new Alzheimer’s cases in their tracks, there will always be patients walking in who already have severe symptoms. And I don’t think they should be forgotten.

A study by Longo and his colleagues, which just went into print in the Journal of Alzheimer’s Disease, addresses this concern. Longo has pioneered the development of small-molecule drugs that might be able to restore nerve cells frayed by conditions such as Alzheimer’s.

Nerve cells in distress can often be saved from going down the tubes if they get the right medicine. Fortunately, the brain (like many other organs in the body) makes a number of its own medicines, including ones called growth factors. Unfortunately, these growth factors are so huge that they won’t easily cross the blood-brain barrier. So, the medical/scientific establishment can’t simply synthesize them, stick them into an artery in a patient’s arm and let them migrate to the site of brain injury or degeneration and repair the damage. Plus, growth factors can affect damaged nerve cells in multiple ways, and not always benign ones.

The Longo group’s study showed that – in mice, at least – a growth-factor-mimicking small-molecule drug (for now known only by the unromantic alphanumeric LM11A-31) could counteract a number of key Alzheimer’s degenerative mechanisms, notably the loss of the all-important contacts (called synapses) via which nerve cells transmit signals to one another.

Synapses are the solder joints that wire together the brain’s nerve circuitry. In response to our experience, synapses are constantly springing forth, enlarging and strengthening, diminishing and weakening, and disappearing. They are crucial to memory, thought, learning and daydreaming, not to mention emotion and, for that matter, motion. So their massive loss – a defining feature of Alzheimer’s disease – is devastating.

In addition to repairing nerve cells, the compound also appeared to exert a calming effect on angry astrocytes and microglia, two other kinds of brain cells that, when angered, can produce inflammation and tissue damage in that organ. Perhaps most promising of all, LM11A-31 appeared to help the mice remember where things are and what nasty things to avoid.

Previously: Stanford’s brightest lights reveal new insights into early underpinnings of Alzheimer’s, Stanford neuroscientist discusses the coming dementia epidemic and Drug found effective in two mouse models of Huntington’s disease
Photo by Bruce Turner

Aging, Autoimmune Disease, Immunology, Infectious Disease, Research, Stanford News

Our aging immune systems are still in business, but increasingly thrown out of balance

Stanford immunologist Jorg Goronzy, MD, told me a few years ago that a person’s immune response declines slowly but surely starting at around age 40. “While 90 percent of young adults respond to most vaccines, after age 60 that response rate is down to around 40-45 percent,” he said. “With some vaccines, it’s as low as 20 percent.”

A shaky vaccine response isn’t the only immune-system slip-up. With advancing age, we grow increasingly vulnerable to infection (whether or not we’ve been vaccinated), autoimmune disease (an immune attack on our own tissues) and cancer (when a once well-behaved cell metamorphoses into a ceaselessly dividing one).

A new study, led by Goronzy and published in Proceedings of the National Academy of Sciences, suggests why that may come about. The culprit he and his colleagues have fingered turns out not to be the most likely suspect: the thymus.

This all-important organ’s job is to nurture an army of specialized immune cells called T cells. (The “T” is for “thymus.”) T cells are capable of recognizing and mounting an immune response to an unbelievably large number of different molecular shapes, including ones found only on invading pathogens or on our own cells when they morph into incipient tumor cells.

Exactly which feature a given T cell recognizes depends on the structure of a receptor molecule carried in abundance on that T cell’s surface. Although each T cell sports just one receptor type, in the aggregate the number of different shapes T cells recognize is gigantic, due to a high rate of reshuffling and mutation in the genes dictating their receptors’ makeup. (Stanford immunologist Mark Davis, PhD, perhaps more than any other single individual, figured out in the early 1980s how this all works.)

T cells don’t live forever, and their generation from scratch depends completely on the thymus. Yet by our early teens the organ, situated in front of the lungs at the midpoint of the chest, starts shriveling up and being replaced by (sigh – you knew this was coming) fat tissue.

After the thymus melts away, new T cells come into being only when already-existing ones undergo cell division – for example, to compensate for the attrition of their neighbors in one or another immune-system dormitory (such as the bone marrow, the spleen or a lymph node).

It’s been thought that the immune system’s capacity to recognize and mount a response to pathogens (or incipient tumors) fades because age-related T-cell loss brings a corresponding erosion of diversity: We simply run out of T cells with the appropriate receptors.

The new study found otherwise. “Our study shows that the diversity of the human T-cell receptor repertoire is much higher than previously assumed, somewhere in the range of one billion different receptor types,” Goronzy says. “Any age-associated loss in diversity is trivial.” But the study also showed an increasing imbalance, with some subgroups of T cells (characterized by genetically identical receptors) hogging the limelight and other subgroups becoming vanishingly scarce.

The good news is that the players in an immune response are all still there, even in old age. How to restore that lost balance is the question.

Previously: How to amp up an aging immune response, Age-related drop in immune responsiveness may be reversible and Deja vu: Adults’ immune systems “remember” microscopic monsters they’ve never seen before
Photo by Lars Plougmann

Autoimmune Disease, Evolution, Immunology, Microbiology, Nutrition, Public Health, Stanford News

Civilization and its dietary (dis)contents: Do modern diets starve our gut-microbial community?

Our genes have evolved a bit over the last 50,000 years of human evolution, but our diets have evolved a lot. That’s because civilization has transitioned from a hunter-gatherer lifestyle to an agrarian and, more recently and incompletely, to an industrialized one. These days, many of us are living in an information-intensive, symbol-analyzing, button-pushing, fast-food-munching society. This transformation has been accompanied by consequential twists and turns regarding what we eat, and how and when we eat it.

Toss in antibiotics, sedentary lifestyles, and massive improvements in public sanitation and personal hygiene, and now you’re talking about serious shake-ups in how many and which microbes we get exposed to – and how many of which ones wind up inhabiting our gut.

In a review published in Cell Metabolism, Stanford married-microbiologist couple Justin Sonnenburg, PhD, and Erica Sonnenburg, PhD, warn that modern civilization and its dietary contents may be putting our microbial gut communities, and our health, at risk.

[S]tudies in recent years have implicated [dysfunctional gut-bug communities] in a growing list of Western diseases, such as metabolic syndrome, inflammatory bowel disease, and cancer. … The major dietary shifts occurring between the hunter-gatherer lifestyle, early Neolithic farming, and more recently during the Industrial Revolution are reflected in changes in microbial membership within dental tartar of European skeletons throughout these periods. … Traditional societies typically have much lower rates of Western diseases.

Every healthy human harbors an interactive internal ecosystem consisting of something like 1,000 species of intestinal microbes. As individuals, these resident Lilliputians may be tiny, but what they lack in size they make up in number. Down in the lower part of your large intestine dwell tens of trillions of single-celled creatures – a good 10 of them for every one of yours. If you could put them all on a scale, they would cumulatively weigh about four pounds. (Your brain weighs three.)

Together they do great things. As I put it in a Stanford Medicine article a few years back, “Caution: Do Not Debug”:

The communities of micro-organisms lining or swimming around in our body cavities … work hard for their living. They synthesize biomolecules that manipulate us in ways that are helpful to both them and us. They produce vitamins, repel pathogens, trigger key aspects of our physiological development, educate our immune system, help us digest our food and for the most part get along so well with us and with one other that we forget they’re there.

But when our internal microbes don’t get enough of the right complex carbohydrates (ones we can’t digest and so pass along to our neighbors downstairs), they may be forced to subsist on the fleece of long carbohydrate chains (some call it “mucus”) lining and guarding the intestinal wall. Weakening that barrier could encourage inflammation.

The Sonnenburgs note that certain types of fatty substances are overwhelmingly the product of carbohydrate fermentation by gut microbes. These substances have been shown to exert numerous anti-inflammatory effects in the body, possibly protecting against asthma and eczema: two allergic conditions whose incidence has soared in developed countries and seems oddly correlated with the degree to which the environment a child grows up in is spotlessly hygienic.

Previously: Joyride: Brief post-antibiotic sugar spike gives pathogens a lift, The future of probiotics and Researchers manipulate microbes in the gut
Photo by geraldbrazell

Bioengineering, Cardiovascular Medicine, Neuroscience, Research, Stanford News, Stroke

Targeted stimulation of specific brain cells boosts stroke recovery in mice

There are 525,949 minutes in a year. And every year, there are about 800,000 strokes in the United States – so, one stroke every 40 seconds. Aside from the infusion, within three or four hours of the stroke, of a costly biological substance called tissue plasminogen activator (whose benefit is less-than-perfectly established), no drugs have been shown to be effective in treating America’s largest single cause of neurologic disability and the world’s second-leading cause of death. (Even the workhorse post-stroke treatment, physical therapy, is far from a panacea.)
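The rate arithmetic above is easy to verify. Here’s a quick back-of-envelope sketch, taking as given the roughly 800,000-strokes-per-year figure the post cites:

```python
# Sanity-check the stroke-rate arithmetic.
# Assumption: ~800,000 U.S. strokes per year, as cited in the post.
MINUTES_PER_YEAR = 365.2425 * 24 * 60   # mean Gregorian year, ~525,949 minutes
SECONDS_PER_YEAR = MINUTES_PER_YEAR * 60
STROKES_PER_YEAR = 800_000

print(round(MINUTES_PER_YEAR))                     # 525949
print(round(SECONDS_PER_YEAR / STROKES_PER_YEAR))  # 39 -> "about one every 40 seconds"
```

So one stroke roughly every 39 seconds, which the post sensibly rounds to 40.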

But a new study, led by Stanford neurosurgery pioneer Gary Steinberg and published in Proceedings of the National Academy of Sciences, may presage a better way to boost stroke recovery. In the study, Steinberg and his colleagues used a cutting-edge technology to directly stimulate movement-associated areas of the brains of mice that had suffered strokes.

Known as optogenetics – whose champion, Stanford psychiatrist and bioengineer Karl Deisseroth, co-authored the study – the light-driven method lets investigators pinpoint a specific set of nerve cells and stimulate only those cells. In contrast, the electrode-based brain stimulation devices now increasingly used for relieving symptoms of Parkinson’s disease, epilepsy and chronic pain also stimulate the cells’ near neighbors.

“We wanted to find out whether activating these nerve cells alone can contribute to recovery,” Steinberg told me.

As I wrote in a news release about the study:

By several behavioral … and biochemical measures, the answer two weeks later was a strong yes. On one test of motor coordination, balance and muscular strength, the mice had to walk the length of a horizontal beam rotating on its axis, like a rotisserie spit. Stroke-impaired mice [in which the relevant brain region] was optogenetically stimulated did significantly better in how far they could walk along the beam without falling off and in the speed of their transit, compared with their unstimulated counterparts. The same treatment, applied to mice that had not suffered a stroke but whose brains had been … stimulated just as stroke-affected mice’s brains were, had no effect on either the distance they travelled along the rotating beam before falling off or how fast they walked. This suggests it was stimulation-induced repair of stroke damage, not the stimulation itself, yielding the improved motor ability.

Moreover, levels of some important natural substances called growth factors increased in a number of brain areas in optogenetically stimulated, but not unstimulated, post-stroke mice. These factors are key to a number of nerve-cell repair processes. Interestingly, some of the increases occurred not only where stimulation took place but in equivalent areas on the opposite side of the brain, consistent with the idea that when we lose function on one side of the brain, the unaffected hemisphere can step in to help restore some of that lost function.

Translating these findings into human trials will mean not just brain surgery, but also gene therapy in order to introduce a critical light-sensitive protein into the targeted brain cells. Steinberg notes, though, that trials of gene therapy for other neurological disorders have already been conducted.

Previously: Brain sponge: Stroke treatment may extend time to prevent brain damage, BE FAST: Learn to recognize the signs of stroke and Light-switch seizure control? In a bright new study, researchers show how
Photo by Shutterstock.com

Aging, Genetics, Imaging, Immunology, Mental Health, Neuroscience, Research, Women's Health

Stanford’s brightest lights reveal new insights into early underpinnings of Alzheimer’s

Alzheimer’s disease, whose course ends inexorably in the destruction of memory and reason, is in many respects America’s most debilitating disease. As I wrote in my article, “Rethinking Alzheimer’s,” just published in our flagship magazine Stanford Medicine:

Barring substantial progress in curing or preventing it, Alzheimer’s will affect 16 million U.S. residents by 2050, according to the Alzheimer’s Association. The group also reports that the disease is now the nation’s most expensive, costing over $200 billion a year. Recent analyses suggest it may be as great a killer as cancer or heart disease.

Alarming as this may be, it isn’t the only news about Alzheimer’s. Some of the news is good.

Serendipity and solid science are prying open the door to a new outlook on what is arguably the primary scourge of old age in the developed world. Researchers have been taking a new tack – actually, more like six or seven new tacks – resulting in surprising discoveries and potentially leading to novel diagnostic and therapeutic approaches.

As my article noted, several Stanford investigators have taken significant steps toward unraveling the tangle of molecular and biochemical threads that underpin Alzheimer’s disease. The challenge: weaving those diverse strands into the coherent fabric we call understanding.

In a sidebar, “Sex and the Single Gene,” I described some new work showing differential effects of a well-known Alzheimer’s-predisposing gene on men versus women – and findings about the possibly divergent impacts of different estrogen-replacement formulations on the likelihood of developing dementia.

Coming at it from so many angles, and at such high power, is bound to score a direct hit on this menace eventually. Until then, the word is to stay active, sleep enough and see a lot of your friends.

Previously: The reefer connection: Brain’s “internal marijuana” signaling implicated in very earliest stages of Alzheimer’s pathology, The rechargeable brain: Blood plasma from young mice improves old mice’s memory and learning, Protein known for initiating immune response may set up our brains for neurodegenerative disease, Estradiol – but not Premarin – prevents neurodegeneration in women at heightened dementia risk and Having a copy of ApoE4 gene variant doubles Alzheimer’s risk for women, but not for men
Illustration by Gérard DuBois

Behavioral Science, Chronic Disease, Neuroscience, Pain, Research, Stanford News

Obscure brain chemical indicted in chronic-pain-induced “Why bother?” syndrome

Chronic pain, meaning pain that persists for months and months or even longer (sometimes continuing well past the time when the pain-causing injury has healed), is among the most prevalent of all medical afflictions in the developed world. Estimates of the number of people with this condition in the United States alone range from 70 million to 116 million adults – in other words, as much as half the country’s adult population!

No picnic in and of itself, chronic pain piles insult on injury. It differs from a short-term episode of pain not only in its duration, but also in triggering in sufferers a kind of psychic exhaustion best described by the rhetorical question, “Why bother?”

In a new study in Science, a team led by Stanford neuroscientist Rob Malenka, MD, PhD, has identified a particular nerve-cell circuit in the brain that may explain this loss of motivation that chronic pain all too often induces. Using lab mice as test subjects, they showed that mice enduring unremitting pain lost their willingness to perform work in pursuit of normally desirable goals, just as people in chronic pain frequently do.

It wasn’t that these animals weren’t perfectly capable of carrying out the tasks they’d been trained to do, the researchers showed. Nor was it that they’d lost their taste for the food pellets with which they were rewarded for successful performance – if you just gave them the food, they ate every bit as much as normal mice did. They just weren’t willing to work very hard to get it. Their murine morale was shot.

Chalk it up to the action of a mysterious substance used in the brain for god-knows-what. In our release describing the study, I explained:

Galanin is a short signaling-protein snippet secreted by certain cells in various places in the brain. While its presence in the brain has been known for a good 30 years or so, galanin’s role is not well-defined and probably differs widely in different brain structures. There have been hints, though, that galanin activity might play a role in pain. For example, it’s been previously shown in animal models that galanin levels in the brain increase with the persistence of pain.

In a surprising and promising development, the team also found that when they blocked galanin’s action in a particular brain circuit, the mice, while still in as much pain as before, were once again willing to work hard for their supper.

Surprising, because galanin is a mighty obscure brain chemical, and because its role in destroying motivation turns out to be so intimate and specific. Promising, because the discovery suggests that a drug that can inhibit galanin’s activity in just the implicated brain circuit, without messing up whatever this mystery molecule’s more upbeat functions in the brain might be, could someday succeed in bringing back that drive to accomplish things that people in chronic pain all too often lose.

Previously: “Love hormone” may mediate wider range of relationships than previously thought, Revealed: the brain’s molecular mechanism behind why we get the blues, Better than the real thing: How drugs hot-wire our brain’s reward circuitry and Stanford researchers address the complexity of chronic pain
Photo by Doug Waldron

Behavioral Science, Bioengineering, Neuroscience, Research, Stanford News, Technology

Party animal: Scientists nail “social circuit” in rodent brain (and probably ours, too)

Stimulating a single nerve-cell circuit among millions in the brain instantly increases a mouse’s appetite for getting to know a strange mouse, while inhibiting it shuts down the same mouse’s drive to socialize with the stranger.

Stanford brain scientist and technology whiz Karl Deisseroth, MD, PhD, is already renowned for his role in developing optogenetics, a technology that allows researchers to turn on and turn off nerve-cell activity deep within the brain of a living, freely roving animal so they can see the effects of that switching in real time. He also pioneered CLARITY, a method of rendering the brain – at least if it’s the size of a mouse’s – both transparent and porous so its anatomy can be charted, even down to the molecular level, in ways previously deemed unimaginable.

Now, in another feat of methodological derring-do detailed in a new study in Cell, Deisseroth and his teammates incorporated a suite of advanced lab technologies, including optogenetics as well as a couple of new tricks, to pinpoint a particular assembly of nerve cells projecting from one part of the mouse brain to another. Human brains obviously differ in some ways from mouse brains, but they share the connections Deisseroth’s group implicated in mice’s tendency to seek or avoid social contact. So it’s a good bet the finding applies to us, too.

Yes, we’d all like to be able to flip a switch and turn on our own “party animal” social circuitry from time to time. But the potential long-term applications of advances like this one are far from frivolous. The new findings may throw light on psychiatric disorders marked by impaired social interaction, such as autism, social anxiety, schizophrenia and depression. From my release on this study:

“Every behavior presumably arises from a pattern of activity in the brain, and every behavioral malfunction arises from malfunctioning circuitry,” said Deisseroth, who is also co-director of Stanford’s Cracking the Neural Code Program. “The ability, for the first time, to pinpoint a particular nerve-cell projection involved in the social behavior of a living, moving animal will greatly enhance our ability to understand how social behavior operates, and how it can go wrong.”

Previously: Lightning strikes twice: Optogenetics pioneer Karl Deisseroth’s newest technique renders tissues transparent, yet structurally intact, Researchers induce social deficits associated with autism, schizophrenia in mice, Anti-anxiety circuit found in unlikely brain region and Using light to get muscles moving
Photo by Gamerscore blog

Neuroscience, Research, Stanford News

The reefer connection: Brain’s “internal marijuana” signaling system implicated in very early stages of Alzheimer’s pathology

It’s axiomatic that every psychoactive drug works by mimicking some naturally occurring, evolutionarily adaptive, brain-produced substance. Cocaine and amphetamines mimic some aspects of a signaling chemical in the brain called dopamine. Heroin, morphine and codeine all mimic neuropeptides called endorphins.

Tetrahydrocannabinol, the active component in marijuana and hashish, is likewise a doppelganger for a set of molecules in the brain called endocannabinoids. The latter evolved not to get us high but to perform numerous important signaling functions, known and unknown. One of those is, as Stanford neuroscientist Dan Madison, PhD, puts it, to “open up the learning gate.”

In a key mammalian brain structure called the hippocampus, which serves as (among other things) a combination GPS system and memory-filing assistant, endocannabinoids act as signal boosters for a key nerve tract – akin to transformers spaced along a high-voltage electrical transmission cable.

But the endocannabinoid system is highly selective in regard to which signals it boosts. Its overall effect in the hippocampus is to separate the wheat from the chaff (or in this case, would it be appropriate to say “the leaves from the seeds and stems”?). This ensures that real information (e.g., “that looks like some food!” or “I remember being here before”) gets passed down the line to the next relay station in the brain’s information-processing assembly line.

A new study in Neuron by Madison and his colleagues shows a likely link between the brain’s endocannabinoid system and a substance long suspected of playing a major, if mysterious, role in initiating Alzheimer’s disease. As I wrote in a release accompanying the study’s publication:

A-beta — strongly suspected to play a key role in Alzheimer’s because it’s the chief constituent of the hallmark clumps dotting the brains of people with Alzheimer’s — may, in the disease’s earliest stages, impair learning and memory by blocking the natural, beneficial action of endocannabinoids in the brain.

This interference with the “learning gate” occurs when A-beta is traveling in tiny, soluble clusters of just a few molecules, long before it aggregates into those textbook clumps. So does it follow that we should all start smoking pot to prevent Alzheimer’s disease?

Hardly. Again, from my release:

Madison said it would be wildly off the mark to assume that, just because A-beta interferes with a valuable neurophysiological process mediated by endocannabinoids, smoking pot would be a great way to counter or prevent A-beta’s nefarious effects on memory and learning ability… “Endocannabinoids in the brain are very transient and act only when important inputs come in,” said Madison … “Exposure to marijuana over minutes or hours is different: more like enhancing everything indiscriminately, so you lose the filtering effect. It’s like listening to five radio stations at once.”

It may even be that A-beta (ubiquitously produced by all the body’s cells), in the right amounts at the right times, is itself performing a crucial if still obscure service: fine-tuning a process that fine-tunes another process that tweaks the circuitry of learning and remembering.

Previously: The brain makes its own Valium: Built-in seizure brake?, How villainous substance starts wrecking synapses long before clumping into Alzheimer’s plaques and Black hat in Alzheimer’s, white hat in multiple sclerosis?
Photo by Phing
