Behavioral Science, Bioengineering, Neuroscience, Research, Stanford News, Technology

Party animal: Scientists nail “social circuit” in rodent brain (and probably ours, too)

Stimulating a single nerve-cell circuit among millions in the brain instantly increases a mouse’s appetite for getting to know a strange mouse, while inhibiting it shuts down the same mouse’s drive to socialize with the stranger.

Stanford brain scientist and technology whiz Karl Deisseroth, MD, PhD, is already renowned for his role in developing optogenetics, a technology that allows researchers to turn on and turn off nerve-cell activity deep within the brain of a living, freely roving animal so they can see the effects of that switching in real time. He also pioneered CLARITY, a method of rendering the brain – at least if it’s the size of a mouse’s – both transparent and porous so its anatomy can be charted, even down to the molecular level, in ways previously deemed unimaginable.

Now, in another feat of methodological derring-do detailed in a new study in Cell, Deisseroth and his teammates combined a suite of advanced lab technologies, including optogenetics and a couple of new tricks, to pinpoint a particular assembly of nerve cells projecting from one part of the mouse brain to another. Human brains obviously differ in some ways from mouse brains, but they contain the same connections Deisseroth’s group implicated in mice’s tendency to seek or avoid social contact. So it’s a good bet this applies to us, too.

Yes, we’d all like to be able to flip a switch and turn on our own “party animal” social circuitry from time to time. But the potential long-term applications of advances like this one are far from frivolous. The new findings may throw light on psychiatric disorders marked by impaired social interaction, such as autism, social anxiety, schizophrenia and depression. From my release on this study:

“Every behavior presumably arises from a pattern of activity in the brain, and every behavioral malfunction arises from malfunctioning circuitry,” said Deisseroth, who is also co-director of Stanford’s Cracking the Neural Code Program. “The ability, for the first time, to pinpoint a particular nerve-cell projection involved in the social behavior of a living, moving animal will greatly enhance our ability to understand how social behavior operates, and how it can go wrong.”

Previously: Lightning strikes twice: Optogenetics pioneer Karl Deisseroth’s newest technique renders tissues transparent, yet structurally intact, Researchers induce social deficits associated with autism, schizophrenia in mice, Anti-anxiety circuit found in unlikely brain region and Using light to get muscles moving
Photo by Gamerscore blog

Neuroscience, Research, Stanford News

The reefer connection: Brain’s “internal marijuana” signaling system implicated in very early stages of Alzheimer’s pathology

It’s axiomatic that every psychoactive drug works by mimicking some naturally occurring, evolutionarily adaptive, brain-produced substance. Cocaine and amphetamines mimic some aspects of a signaling chemical in the brain called dopamine. Heroin, morphine, and codeine all mimic neuropeptides called endorphins.

Tetrahydrocannabinol, the active component in marijuana and hashish, is likewise a doppelganger for a set of molecules in the brain called endocannabinoids. The latter evolved not to get us high but to perform numerous important signaling functions, known and unknown. One of those is, as Stanford neuroscientist Dan Madison, PhD, puts it, to “open up the learning gate.”

In a key mammalian brain structure called the hippocampus, which serves as (among other things) a combination GPS system and memory-filing assistant, endocannabinoids act as signal boosters for a key nerve tract – akin to transformers spaced along a high-voltage electrical transmission cable.

But the endocannabinoid system is highly selective in regard to which signals it boosts. Its overall effect in the hippocampus is to separate the wheat from the chaff (or in this case, would it be appropriate to say “the leaves from the seeds and stems”?). This ensures that real information (e.g., “that looks like some food!” or “I remember being here before”) gets passed down the line to the next relay station in the brain’s information-processing assembly line.

A new study in Neuron by Madison and his colleagues shows a likely link between the brain’s endocannabinoid system and a substance long suspected of playing a major, if mysterious, role in initiating Alzheimer’s disease. As I wrote in a release accompanying the study’s publication:

A-beta — strongly suspected to play a key role in Alzheimer’s because it’s the chief constituent of the hallmark clumps dotting the brains of people with Alzheimer’s — may, in the disease’s earliest stages, impair learning and memory by blocking the natural, beneficial action of endocannabinoids in the brain.

This interference with the “learning gate” occurs when A-beta is traveling in tiny, soluble clusters of just a few molecules, long before it aggregates into those textbook clumps. So does it follow that we should all start smoking pot to prevent Alzheimer’s disease?

Hardly. Again, from my release:

Madison said it would be wildly off the mark to assume that, just because A-beta interferes with a valuable neurophysiological process mediated by endocannabinoids, smoking pot would be a great way to counter or prevent A-beta’s nefarious effects on memory and learning ability… “Endocannabinoids in the brain are very transient and act only when important inputs come in,” said Madison … “Exposure to marijuana over minutes or hours is different: more like enhancing everything indiscriminately, so you lose the filtering effect. It’s like listening to five radio stations at once.”

It may even be that A-beta (ubiquitously produced by all the body’s cells), in the right amounts at the right times, is itself performing a crucial if still obscure service: fine-tuning a process that fine-tunes another process that tweaks the circuitry of learning and remembering.

Previously: The brain makes its own Valium: Built-in seizure brake?, How villainous substance starts wrecking synapses long before clumping into Alzheimer’s plaques and Black hat in Alzheimer’s, white hat in multiple sclerosis?
Photo by Phing

Health Policy, Infectious Disease, Microbiology, Public Health, Stanford News

Microbial mushroom cloud: How real is the threat of bioterrorism? (Very)

Milana Trounce, MD, teaches a class on the risks of bioterror at the Stanford School of Medicine. (Norbert von der Groeben/Stanford School of Medicine)

“What if nuclear bombs could reproduce? Get your hands on one today, and in a week’s time you’ve got a few dozen.”

That’s the lead sentence of a feature article I just wrote for Inside Stanford Medicine. The answer is, bombs can’t reproduce. But something just as potentially deadly – and a whole lot easier to come by – can, and does.

What I learned in the course of writing the feature, titled “How contagious pathogens could lead to nuke-level casualties” (I encourage you to take a whack at it), was bracing. Stanford surgeon Milana Trounce, MD, who specializes in emergency medicine, has been teaching a course that pulls together students, faculty and outside experts from government, industry and academia. Her goal is to raise awareness and inspire collaborations on the thorny multidisciplinary problems posed by the very real prospect that somebody, somewhere, could very easily be producing enough killer germs to wipe out huge numbers of people – numbers every bit as large as those we’ve come to fear in the event of a nuclear attack.

Among those I quote in the article are infectious-disease expert David Relman, MD, and biologist/applied physicist Steven Block, PhD, both of whom have sat in on enough closed-door meetings to know that bioterrorism is something we need to take seriously.

Not only do nukes not reproduce. They don’t leap from stranger to stranger, or lurk motionless in midair or on fingertips. Nor can they be fished from soil and streams or cheaply conjured up in a clandestine lab in someone’s basement or backyard. One teaspoon of the toxin produced by the naturally occurring bacterial pathogen Clostridium botulinum is enough to kill several hundred thousand people. That’s particularly scary when you consider that this toxin – better known by the nickname “Botox” – is already produced commercially for sale to physicians who inject it into their patients’ eyebrows.

As retired Rear Adm. Ken Bernard, MD, a former special assistant on biosecurity matters to Presidents Bill Clinton and George W. Bush and a guest speaker for Trounce’s Stanford course, put it: “Who can be sure there’s no off-site, illegal production? Suppose a stranger were to say, ‘I want 5 grams — here’s $500,000’?”

That’s five grams, as in one teaspoon. As I just mentioned, we’re talking hundreds of thousands of people killed, if this spoonful were to, say, find its way into just the right point in the milk supply chain (the point where loads of milk from numerous scattered farms get stored in huge holding tanks before being parsed out to myriad delivery trucks). That’s pretty stiff competition for a hydrogen bomb. For striking terror into our hearts, the only thing bioweapons lack is branding – nothing tops that mushroom-cloud logo.
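Rather than asserting any toxicity figure of my own, here’s a quick back-of-envelope inversion of the article’s own numbers in Python, checking what per-person dose the claim implies. The 300,000 figure is just a rough midpoint I’ve picked for “several hundred thousand”:

```python
# Working backward from the article's own claim, rather than asserting
# any toxicity constant: if one teaspoon (~5 g) could kill several
# hundred thousand people, what per-person dose does that imply?
GRAMS_AVAILABLE = 5.0             # "five grams, as in one teaspoon"
DEATHS_CLAIMED = 300_000          # "several hundred thousand" (rough midpoint)

implied_dose_g = GRAMS_AVAILABLE / DEATHS_CLAIMED
print(f"Implied dose per fatality: {implied_dose_g * 1e6:.0f} micrograms")  # ~17 µg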

Previously: Stanford bioterrorism expert comments on new review of anthrax case and Show explores scientific questions surrounding 2001 anthrax attacks
Photo of Milana Trounce by Norbert von der Groeben

Fertility, Health and Fitness, Men's Health, Public Health, Research, Stanford News

Poor semen quality linked to heightened mortality rate in men

Men with multiple defects in their semen appear to be at increased risk of dying sooner than men with normal semen, according to a study of some 12,000 men who were evaluated at two different centers specializing in male-infertility problems.

In that study, led by Michael Eisenberg, MD, PhD, Stanford’s director of male reproductive medicine and surgery, men with more than one such defect – reduced total semen volume, low sperm counts or motility, or aberrant sperm shape – were more than twice as likely to die over a seven-and-a-half-year follow-up period as men found to be free of such issues.

Given that one in seven couples in developed countries encounter fertility problems at some point, Eisenberg told me, a two-fold increase in mortality rates qualifies as a serious health issue. As he told me for an explanatory release I wrote about the study:

“Smoking and diabetes — either of which doubles mortality risk — both get a lot of attention… But here we’re seeing the same doubled risk with male infertility, which is relatively understudied.”

Moreover, the difference was statistically significant even though relatively few men died, owing primarily to the men’s relative youth (typically between 30 and 40 years old) when first evaluated. And the difference persisted despite the researchers’ efforts to control for differences in health status and age between the two groups.
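The post doesn’t spell out the study’s statistical machinery, but adjusting a survival comparison for covariates like age and health status is commonly done with a Cox proportional-hazards model. Here is a minimal, self-contained sketch on synthetic data using the lifelines library; every column name and number is hypothetical, with a doubled hazard deliberately built in for the defect group:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000

defects = rng.integers(0, 2, n)   # 1 = multiple semen defects (hypothetical flag)
age = rng.uniform(30, 40, n)      # typical age range at first evaluation

# Simulate time to death with a doubled hazard for the defect group and a
# mild age effect, then censor at the 7.5-year follow-up window.
hazard = 0.005 * np.exp(np.log(2) * defects + 0.03 * (age - 35))
time_to_death = rng.exponential(1 / hazard)
df = pd.DataFrame({
    "followup_years": np.minimum(time_to_death, 7.5),
    "died": (time_to_death <= 7.5).astype(int),
    "multiple_defects": defects,
    "age": age,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")
cph.print_summary()  # expect a hazard ratio near 2 for multiple_defects, age-adjusted
```

With the doubled hazard built into the simulation, the fitted hazard ratio for the defect flag should land near 2 even after the age adjustment – mirroring the shape, though not the substance, of the study’s finding.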

Eisenberg has previously found that childless men are at heightened risk of death from cardiovascular disease and that men with low sperm production face increased cancer risk.

Previously:  Men with kids are at lower risk of dying from cardiovascular disease than their childless counterparts and Low sperm count can mean increased cancer risk
Photo by Grace Hebert

Aging, Mental Health, Neuroscience, Research, Stanford News, Stem Cells

The rechargeable brain: Blood plasma from young mice improves old mice’s memory and learning

“Maybe Ponce de Leon should have considered becoming a vampire,” I noted here a few years ago. In a related Stanford Medicine article, I elaborated on that point (i.e., Dracula may have been on to something):

Count Dracula may have been bloodthirsty, but nobody ever called him stupid. If that practitioner of what you could call “the Transylvanian transfusion” knew then what we know now, it’s a good bet he was keeping his wits as sharp as his teeth by restricting his treats to victims under the age of 30.

I was referring then to an amazing discovery by Stanford brain-degeneration expert Tony Wyss-Coray, PhD, and his then-graduate student Saul Villeda, PhD, who now has his own lab at the University of California-San Francisco. They’d found that something in an old mouse’s blood could somehow exert an aging effect on the capabilities of a young mouse’s brain, and you know that ain’t good. They’d even pinpointed one specific substance (eotaxin) behind this effect, implying that inhibiting this naturally produced and sometimes very useful chemical’s nefarious action – or, if you’re a vampire, laying off the old juice and getting your kicks from preteens when available – might be beneficial to aging brains.

But I was premature. While the dynamic duo had shown that old blood is bad for young brains and had also demonstrated that old mice’s brains produce more new nerve cells (presumably a good thing) once they’ve had continuous exposure to young mice’s blood, the researchers hadn’t yet definitively proven that the latter translated into improved intellectual performance.

This time out they’ve gone and done just that, in a study (subscription required) published online yesterday in Nature Medicine. First they conducted tricky, sophisticated experiments to show that when the old mice were continuously getting blood from young mice, an all-important region in a mouse’s brain (and yours) called the hippocampus perks up biochemically, anatomically and physiologically: It looks and acts more like a younger mouse’s hippocampus. That’s big, because the hippocampus is not only absolutely essential to the formation of new memories but also the first brain region to go when the early stirrings of impending dementia such as Alzheimer’s start subtly eroding brain function, long before outwardly observable symptoms appear.

Critically, when Wyss-Coray, Villeda and their comrades then administered a mousey IQ test (a standard battery of experiments measuring mice’s ability to learn and remember) to old mice who’d been injected with plasma (the cell-free part of blood) from healthy young mice, the little codgers far outperformed their peers who got crummy old-mouse plasma instead.

Slam dunk.

“This could have been done 20 years ago,” Wyss-Coray told me when I was assembling my release on this study. “You don’t need to know anything about how the brain works. You just give an old mouse young blood and see if the animal is smarter than before. It’s just that nobody did it.”

Previously: When brain’s trash collectors fall down on the job, neurodegeneration risk picks up, Brain police: Stem cells’ fecund daughters also boss other cells around, Old blood + young brain = old brain and Might immune response to viral infections slow birth of new nerve cells in brain?
Photo by Takashi Hososhima

Cancer, Genetics, Public Health, Research, Stanford News, Technology

Odd couples: Resemblances at molecular level connect diseases to unexpected, predictive traits

Stanford big-data king Atul Butte, MD, PhD, has made a career out of mining publicly available databases to unearth novel and frequently surprising relationships between, for example, diseases and drugs, nature and nurture, and pain and sexual status.

In his latest Big Dig, Butte (along with his colleagues) has combed through mountains of electronically available data to identify molecular idiosyncrasies linking specific diseases to easily observed traits that at first glance wouldn’t be thought to have any such connection. The results, written up in a study published in Science Translational Medicine, may allow relatively non-invasive predictions of impending disorders.

For example, who would think that magnesium levels in the blood might be an early-warning marker for gastric cancer? Or that platelet counts in a blood sample would predict a coming diagnosis of alcohol dependency? Or that a high PSA reading, typically associated with potential prostate cancer, would turn out to be predictive of lung cancer? Or that a high red-blood-cell count might presage the development of acute lymphoblastic leukemia?

Answer: No one. That’s the beauty of Big Data. You find out stuff you were never specifically looking for in the first place. It just pops out at you in the form of a high, if initially inexplicable, statistical correlation.

But by cross-referencing voluminous genetic data implicating particular gene variants in particular diseases with equally voluminous data associating the same gene variants with other, easily measured traits typically considered harmless, Butte and his associates were able to pick out a number of such connections, which they then explored further by accessing anonymized electronic medical records from Stanford Hospital and Clinics, Columbia University, and Mount Sinai School of Medicine. “We indeed found that some of these interesting genetic-based predictions actually held up,” Butte told me.
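The team’s actual pipeline isn’t shown in the post, but the core cross-referencing step amounts to a join between two association tables. Here’s a minimal sketch with made-up variant IDs, reusing the trait-disease pairings mentioned above:

```python
import pandas as pd

# Hypothetical association tables; real inputs would be large GWAS-style
# datasets linking gene variants to diseases and to measurable traits.
variant_disease = pd.DataFrame({
    "variant": ["rs0001", "rs0002", "rs0003"],
    "disease": ["gastric cancer", "alcohol dependence", "lung cancer"],
})
variant_trait = pd.DataFrame({
    "variant": ["rs0001", "rs0002", "rs0003"],
    "trait": ["blood magnesium", "platelet count", "PSA level"],
})

# Any trait and disease tied to the same variant becomes a candidate
# "easily measured trait predicts disease" pair, to be validated against
# electronic medical records afterward.
candidates = variant_disease.merge(variant_trait, on="variant")
print(candidates[["trait", "disease"]])
```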

Because checking blood levels of one or another substance is far simpler and less invasive than doing a biopsy, and because altered levels of the substance may appear well before observable disease symptoms, this approach may lead to early, more inclusive and less expensive diagnostic procedures.

Butte is one of the speakers at Stanford’s upcoming Big Data in Biomedicine conference. Registration for the May 21-24 event is open on the conference website.

Previously: Nature/nurture study of type 2 diabetes risk unearths carrots as potential risk reducers, Mining medical discoveries from a mountain of ones and zeroes, Newly identified type-2 diabetes gene’s odds of being a false finding equal one in 1 followed by 19 zeroes, Women report feeling more pain than men, huge EMR analysis shows and Cheap Data! Stanford scientists’ “opposites attract” algorithm plunders public databases, scores surprising drug-disease hook-ups
Photo by cursedthing

Applied Biotechnology, Clinical Trials, FDA, Public Health, Research, Stanford News

The best toxicology lab: a mouse with a human liver

A few years ago, Stanford pharmacogenomic expert Gary Peltz, MD, PhD, collaborating with researchers in Japan, developed a line of bioengineered mice whose livers were largely replaced with human liver cells that recapitulate the architecture and function of a human liver. Now, in a recent study published in PLoS Medicine, Peltz’s team has shown that routine use of this altered lab mouse in standard toxicology tests preceding clinical trials would save human lives.

Among the liver’s numerous other job responsibilities, one of the most important is chemically modifying drugs in various ways to make them easier for the body to get rid of. But some of those chemical products, or metabolites, can themselves be quite toxic if they reach high levels before they’ve been excreted.

The Food and Drug Administration requires that prior to human testing, a drug’s toxicological potential be assessed in at least two mammalian species. But we humans metabolize things differently from other mammals, because our livers are different. That can make for nasty surprises. All too often, drugs showing tremendous promise in preclinical animal assessments fail in human trials due to unforeseen liver toxicity, said Peltz, a former pharmaceutical executive who is intimately familiar with established preclinical testing procedures in the industry.

That’s what happened in 1993 when, after a short safety trial of a drug called FIAU concluded without incident, the compound was advanced to a phase-2 clinical trial as a treatment for hepatitis B. FIAU belongs to a class of drugs that can interfere with viral replication, so it was considered a great candidate for treating viral diseases such as hepatitis B.

As I wrote in my release about the new study:

“FIAU was supposed to be a revolutionary drug,” Peltz said. “It looked very promising in preclinical tests. In phase 1, when the drug was administered to subjects for a short period of time, the human subjects seemed to do fairly well.” But the phase-2 trial was stopped after 13 weeks, when it became clear that FIAU was destroying patients’ livers.

In fact, nearly half the patients treated with FIAU in that trial died from complications of liver damage. Yet before advancing to clinical trials, FIAU had been tested for as long as six months in mice, rats, dogs and monkeys without any trace of toxicity. An investigation conducted by the National Academy of Sciences later determined that the drug had shown no signs of being dangerous during those rigorous preclinical toxicology tests.

In Peltz’s new study, though, FIAU caused unmistakable early signs of severe liver toxicity in the bioengineered mice with human livers – an observation that would have served as a bright red stop signal, preventing the drug from ever being administered to people.

Bonus item: Using bioengineered mice with human livers instead of mice with murine ones would no doubt give a second chance to some drugs that never reached the human-testing stage only because they caused liver toxicity in ordinary mice.

Previously: Fortune teller: Mice with ‘humanized’ livers predict HCV drug candidate’s behavior in humans, Alchemy: From liposuction fluid to new liver cells and Immunology escapes from the mouse trap
Photo by erjkprunczyk

Aging, Genetics, Men's Health, Neuroscience, Research, Stanford News, Women's Health

Having a copy of ApoE4 gene variant doubles Alzheimer’s risk for women but not for men

Since the early 1990s, when Duke University neurologist Allen Roses, MD, first broke the news, it’s been known that a person carrying the gene variant known as ApoE4 is at elevated risk of getting Alzheimer’s disease. To this day ApoE4 is the strongest known single genetic risk factor for Alzheimer’s, a progressive neurological syndrome that robs its victims of their memory and reasoning ability.

But only now is it looking certain that the increased Alzheimer’s risk ApoE4 confers is largely restricted to women. Men’s fates don’t seem to be altered nearly as much by the genetic bad penny that is ApoE4, according to a new Annals of Neurology study led by Mike Greicius, MD, medical director of the Stanford Center for Memory Disorders.

Accessing two huge publicly available national databases, Greicius and his colleagues were able to amass medical records for some 8,000 people and show that initially healthy ApoE4-positive women were twice as likely to develop Alzheimer’s as their ApoE4-negative counterparts, while ApoE4-positive men’s risk for the syndrome was barely higher than that for ApoE4-negative men.

What the heck is ApoE4 for, anyway? In my release on the new study, I wrote:

The ApoE gene is a recipe for a protein important for shuttling fatty substances throughout the body. This is particularly important in the central nervous system, as brain function depends on rapid rearrangement of such fatty substances along and among nerve cell membranes. The ApoE gene comes in three varieties — ApoE2, ApoE3 and ApoE4 — depending on inherited variations in the gene’s sequence. As a result, the protein that the gene specifies also comes in three versions, whose structures and fatty-substance-shuttling performance differ. Most people carry two copies of the ApoE3 gene variant (one from each parent). But about one in five people carries at least one copy of ApoE4, and a small percentage have two ApoE4 copies. Numerous studies … have confirmed that ApoE4 is a key risk factor for Alzheimer’s disease, with a single copy of ApoE4 increasing that risk twofold to fourfold. Carrying two copies confers 10 times the risk of Alzheimer’s.

Early hints in the medical literature that the ApoE4 variant exerted differential effects on women’s versus men’s brains were largely ignored until now, says Greicius. He says that’s because most of the seminal ApoE4/Alzheimer’s genetics research was conducted as case-control studies: The ApoE4 gene version’s frequency in people with Alzheimer’s was compared with its frequency in people without the disease. (About half of those with Alzheimer’s, but only about 15 percent of those without it, are positive for ApoE4.)

But that method has limitations, says Greicius: “About 10-15 percent of ‘normal’ 70-year-olds will develop Alzheimer’s if you wait five or ten years.” Their lurking in the “normal” group dilutes the results. Moreover, Greicius says, “these kinds of genetic studies are looking for needles in a haystack, so they require large numbers of subjects – thousands – to achieve statistical significance. If you want to further examine male/female differences, you have to double the sample size.” That’s costly.
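To see why subtle genetic effects demand thousands of subjects, and why sex-stratified analyses double the bill, here’s a minimal sketch using the standard normal-approximation sample-size formula for comparing two proportions. The alpha and power values are conventional defaults, not numbers from the study, and the 17-percent figure is a hypothetical small effect:

```python
from math import ceil

def n_per_group(p1, p2, z_alpha=1.96, z_power=0.84):
    """Subjects per group to distinguish carrier rates p1 vs. p2
    at two-sided alpha = 0.05 with 80% power (normal approximation)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

# A whopper of an effect like ApoE4 (~50% of cases vs. ~15% of controls)
# shows up with only a few dozen subjects per group...
print(n_per_group(0.50, 0.15))   # -> 25
# ...but a hypothetical subtle variant (17% vs. 15%) needs thousands per
# group, and splitting the analysis by sex doubles the total sample.
print(n_per_group(0.17, 0.15))   # -> 5265
```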

And that’s why the large government- and industry-supported repositories Greicius and his team turned to are such a great idea.

Previously: Estradiol – but not Premarin – prevents neurodegeneration in women at heightened dementia risk, Common genetic Alzheimer’s risk factor disrupts healthy older women’s brain function, but not men’s, Hormone therapy halts accelerated biological aging seen in women with Alzheimer’s genetic risk factor and A one-minute mind-reading machine? Brain-scan results distinguish mental states
Photo by Sean Michael Ragan

Aging, Genetics, Neuroscience, Research, Sleep, Stanford News

Restless legs syndrome, most common in old age, appears to be programmed in the womb

While the sleep disorder called “restless legs syndrome” is more typical of older than younger people, it looks as though it’s programmed in the womb. And a group led by Stanford neurologist Juliane Winkelmann, MD, has pinpointed for the first time the anatomical region in the brain where the programming takes place.

Restless legs syndrome, or RLS, is just what it sounds like: a pattern of unpleasant sensations in the legs and the urge to move them. It has been described as a feeling similar to the urge to yawn, except that it’s situated in the legs or arms instead of the upper torso and head.

Estimates vary, but something on the order of one in ten Americans has RLS. Women are twice as likely as men, and older people more likely than young people, to have it. This urge to move around comes in the evening or nighttime, and can be relieved only by – wait for it – moving around. Needless to say, that can cause sleep disturbances. In addition, RLS can lead to depression, anxiety and increased cardiovascular risk.
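As a quick consistency check on those figures, assuming (purely for arithmetic’s sake) an even sex split in the population:

```python
overall = 0.10               # ~1 in 10 Americans has RLS (per the post)
# If overall = (p_men + p_women) / 2 and p_women = 2 * p_men, then:
p_men = 2 * overall / 3      # ~6.7%
p_women = 2 * p_men          # ~13.3%
print(f"Implied prevalence: men ~{p_men:.1%}, women ~{p_women:.1%}")
```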

Very little is known about what actually causes RLS, although it’s known to be highly heritable. Although a number of gene variants (tiny glitches in a person’s DNA sequence) associated with the condition have been discovered, each by itself appears to contribute only a smidgeon of the overall effect, and nobody knows how.

Winkelmann has been exploring the genetic underpinnings of RLS at length and in depth. In a just-published paper in Genome Research, she and her colleagues have shown that one gene variant in particular depresses the expression of a protein involved in organ development and maintenance. The DNA abnormality Winkelmann’s team zeroed in on occurs not on the gene’s coding sequence – the part of the gene that contains the recipe for the protein for which the gene is a blueprint – but rather on a regulatory sequence: a part of the gene that regulates how much of that protein (in this case, the one involved in organ development and maintenance) gets made, and when.

The kicker (pardon my pun) is that the regulatory sequence in question seems to be active only during early brain development, and only in a portion of the brain that is destined to become the basal ganglia, a brain region well known to be involved in movement.

“Minor alterations in the developing forebrain during early embryonic development are probably leading to a predisposition in the [basal ganglion],” Winkelmann says. “Later in life, during aging, and together with environmental factors, these may lead to the manifestation of the disease.”

(Wondering if you’ve got RLS? Check this out.)

Previously: National poll reveals sleep disorders, use of sleeping aids among ethnic groups, Caucasian women most likely to have restless leg syndrome
Photo by Maxwell Hamilton

Global Health, Immunology, Infectious Disease, Microbiology, Research, Stanford News

Discovered: Why so many people with schistosomiasis (there’s a lot of them) are so vulnerable to bacterial co-infection

More than a billion people worldwide – almost all of them in developing countries – are infected by worm-like parasitic organisms called helminths. Organisms making up just a single genus of helminth, Schistosoma, account for one-quarter of those infections, which damage different body parts depending on what schistosomal species is doing the infecting. Some go for the lung. Others (card-carrying members of the species Schistosoma haematobium) head for the urinary tract, with one in ten infected patients suffering severe physical consequences.

People with schistosomiasis of the urinary tract are especially vulnerable to bacterial co-infections. Worse, these co-infections are believed to exacerbate an already heightened risk of bladder cancer in infected individuals. Unfortunately, considering the massive numbers of cases, surprisingly little is understood about the molecular aspects of the infection’s course.

A big reason for that relative ignorance has been the absence of an effective animal model enabling the detailed study of urinary-tract schistosomiasis. A couple of years ago, Stanford schistosomiasis expert Mike Hsieh, MD, PhD, developed the world’s first decent mouse model for the disease, allowing him to explore the molecular pathology that occurs early in the course of infection. Now, in a just-published study in Infection and Immunity, Hsieh has put that mouse model to work in coaxing out the cause of the curious collegiality of S. haematobium and co-infecting bacteria.

The secret, the scientists learned, is that S. haematobium infection induces a spike in levels of a circulating immune-system signaling protein, or cytokine, called IL-4. That excess, in turn, results in a drop in the number and potency of a subset of immune cells that are important in fighting off bacterial infections. The discovery opens a pathway toward the development of new, non-antibiotic drug treatments for co-infected patients that won’t wreak havoc with their microbiomes, as antibiotics typically do.

Previously: Is the worm turning? Early stages of schistosomiasis bladder infection charted, Neglected story of schistosomiasis in Ghana, as told in a sand animation and A good mouse model for a bad worm
