Immunology, Infectious Disease, Microbiology, Public Health, Research, Stanford News

Paradox: Antibiotics may increase contagion among Salmonella-infected animals

Make no mistake: Antibiotics have worked wonders, increasing human life expectancy as have few other public-health measures (let’s hear it for vaccines, folks). But about 80 percent of all antibiotics used in the United States are given to livestock – chiefly chickens, pigs, and cattle – at low doses, which boosts the animals’ growth rates. A long-raging debate in the public square concerns the possibility that this widespread practice fosters the emergence of antibiotic-resistant bugs.

But a new study led by Stanford bacteriologist Denise Monack, PhD, and just published in Proceedings of the National Academy of Sciences, adds a brand new wrinkle to concerns about the broad administration of antibiotics: the possibility that doing so may, at least  sometimes, actually encourage the spread of disease.

Take salmonella, for example. One strain of this bacterial pathogen, S. typhimurium, is responsible for an estimated 1 million cases of food poisoning, 19,000 hospitalizations and nearly 400 deaths annually in the United States. Upon invading the gut, S. typhimurium produces a potent inflammation-inducing endotoxin known as LPS.

Like its sister strain S. typhi (which causes close to 200,000 typhoid-fever deaths worldwide per year), S. typhimurium doesn’t mete out its menace equally. While most infected people get very sick, it is the symptom-free few who, by virtue of shedding much higher levels of disease-causing bacteria in their feces, account for the great majority of transmission. (One asymptomatic carrier was the infamous Typhoid Mary, a domestic cook who, early in the 20th century, cheerfully if unknowingly spread her typhoid infection to about 50 others before being forcibly, and tragically, quarantined for much of the rest of her life.)

You might think giving antibiotics to livestock, whence many of our S. typhimurium-induced food-poisoning outbreaks derive, would kill off the bad bug and stop its spread from farm animals to those of us (including me) who eat them. But maybe not.

From our release on the study:

When the scientists gave oral antibiotics to mice infected with Salmonella typhimurium, a bacterial cause of food poisoning, a small minority — so-called “superspreaders” that had been shedding high numbers of salmonella in their feces for weeks — remained healthy; they were unaffected by either the disease or the antibiotic. The rest of the mice got sicker instead of better and, oddly, started shedding like superspreaders. The findings … pose ominous questions about the widespread, routine use of sub-therapeutic doses of antibiotics in livestock.

So, the superspreaders kept on spreading without missing a step, and the others became walking-dead pseudosuperspreaders. A lose-lose scenario all the way around.

“If this holds true for livestock as well – and I think it will – it would have obvious public health implications,” Monack told me. “We need to think about the possibility that we’re not only selecting for antibiotic-resistant microbes, but also impairing the health of our livestock and increasing the spread of contagious pathogens among them and us.”

Previously: Did microbes mess with Typhoid Mary’s macrophages?, Joyride: Brief post-antibiotic sugar spike gives pathogens a lift and What if gut-bacteria communities “remember” past antibiotic exposures?
Photo by Jean-Pierre

Big data, Bioengineering, NIH, Research, Science Policy, Stanford News

$23 million in NIH grants to Stanford for two new big-data-crunching biomedical centers

More than $23 million in grants from the National Institutes of Health – courtesy of the NIH’s Big Data to Knowledge (BD2K) initiative – have launched two Stanford-housed centers of excellence bent on enhancing scientists’ capacity to compare, contrast and combine study results in order to draw more accurate conclusions, develop superior medical therapies and understand human behaviors.

Huge volumes of biomedical data – some of it from carefully controlled laboratory studies, increasing amounts of it in the form of electronic health records, and a building torrent of data from wearable sensors – languish in isolated locations and, even when researchers can get their hands on them, are about as comparable as oranges and orangutans. These gigantic banks of data, all too often, go unused or at least underused.

But maybe not for long. “The proliferation of devices monitoring human activity, including mobile phones and an ever-growing array of wearable sensors, is generating unprecedented quantities of data describing human movement, behaviors and health,” says movement-disorders expert Scott Delp, PhD, director of the new National Center for Mobility Data Integration to Insight, also known as the Mobilize Center. “With the insights gained from subjecting these massive amounts of data to  state-of-the-art analytical techniques, we hope to enhance mobility across a broad segment of the population,” Delp told me.

Directing the second grant recipient, the Center for Expanded Data Annotation and Retrieval (or CEDAR), is Stanford’s Mark Musen, MD, PhD, a world-class biomedical-computation authority. As I wrote in an online story:

[CEDAR] will address the need to standardize descriptions of diverse biomedical laboratory studies and create metadata templates for detailing the content and context of those studies. Metadata consists of descriptions of how, when and by whom a particular set of data was collected; what the study was about; how the data are formatted; and what previous or subsequent studies along similar lines have been undertaken.
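
To make that concrete, here is a minimal sketch of the kind of record such a metadata template might capture, written as a Python dictionary. The field names and values are purely illustrative; they are not CEDAR’s actual template schema.

import json

# A hypothetical metadata record of the sort CEDAR aims to standardize.
# Field names and values are illustrative only, not CEDAR's real schema.
study_metadata = {
    "title": "Gene expression in adipose tissue of insulin-resistant adults",
    "collected_by": "Example Lab, Example University",                  # by whom
    "collection_period": {"start": "2012-03-01", "end": "2013-06-30"},  # when
    "methods": "RNA microarray, vendor platform",                       # how the data were gathered
    "study_description": "Case-control comparison of 48 subjects",      # what the study was about
    "data_format": "tab-delimited expression matrix",                   # how the data are formatted
    "related_studies": ["doi:10.xxxx/placeholder"],                     # prior or subsequent related work
}

print(json.dumps(study_metadata, indent=2))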

The ultimate goal is to concoct a way to translate the banter of oranges and orangutans, artichokes and aardvarks now residing in a global zoo (or is it a garden?) of diverse databases into one big happy family speaking the same universal language, for the benefit of all.

Previously: NIH associate director for data science on the importance of “data to the biomedicine enterprise”, Miniature wireless device aids pain studies and Stanford bioengineers aim to better understand, treat movement disorders

Big data, Chronic Disease, Immunology, Research, Stanford News

Out of hiding: Found lurking in public databases, type-2 diabetes drug passes early test

Way too often, promising-looking basic-research findings – intriguing drug candidates, for example – go swooshing down the memory hole, and you never hear anything about them again. So it’s nice when you see researchers following up on an upbeat early finding with work that moves a potential drug to the next peg in the development process. All the more so when the drug candidate targets a massively prevalent disorder.

Type 2 diabetes affects more than 370 million people worldwide, a mighty big number and a mighty big market for drug companies. (Unlike the much less common type-1 diabetes, in which the body’s production of the hormone insulin falters and sugar builds up in the blood instead of being taken up by cells throughout the body, in type-2 diabetes insulin production may be fine but tissues become resistant to insulin.) But while numerous medications are available, none of them decisively halt progression, much less reverse the disease’s course.

About two-and-a-half years ago, Stanford data-mining maven Atul Butte, MD, PhD, combed huge publicly available databases, pooled results from numerous studies and, using big-data statistical methods, fished out a gene that had every possibility of being an important player in type 2 diabetes but had been totally overlooked. (For more info, see this news release.) Called CD44, this gene is especially active in the fat tissue of insulin-resistant people and, Butte’s study showed, has a strong statistical connection to type-2 diabetes.
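
Butte’s actual pipeline was far more elaborate, but the core idea of pooling association evidence across many independent studies can be illustrated with a classic approach such as Fisher’s method. A minimal sketch, with made-up numbers rather than anything from the real analysis:

import math
from scipy.stats import chi2

# Hypothetical p-values for one gene's association with type-2 diabetes,
# taken from five imaginary independent expression studies.
p_values = [0.04, 0.11, 0.008, 0.20, 0.03]

# Fisher's method: -2 * sum(ln p) follows a chi-square distribution
# with 2k degrees of freedom under the null hypothesis of no association.
statistic = -2 * sum(math.log(p) for p in p_values)
combined_p = chi2.sf(statistic, df=2 * len(p_values))

print(f"Combined evidence across studies: p = {combined_p:.4g}")

Repeated over thousands of genes, that kind of pooled score is one way a signal like CD44’s could rise above the noise.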

Butte’s study suggested that CD44’s link to type-2 diabetes was not just statistical but causal: In other words, manipulating the protein CD44 codes for might influence the course of the disease. By chance, that protein had already been much studied by immunologists for totally unrelated reasons. The serendipitous result is that a monoclonal antibody that binds to the protein and inhibits its action was already available.

So, Butte and his colleagues used that antibody in tests they performed on lab mice bioengineered to be extremely susceptible to type-2 diabetes, or what passes for it in a mouse. And, it turns out, the CD44-impairing antibody performed comparably to or better than two workhorse diabetes medications (metformin and pioglitazone) in countering several features of type 2 diabetes, including fatty liver, high blood sugar, weight gain and insulin resistance. The results appear in a study published today in the journal Diabetes.

Most exciting of all: In targeting CD44, the monoclonal antibody was working quite differently from any of the established drugs used for type-2 diabetes.

These are still early results, which will have to be replicated and – one hopes – improved on, first in other animal studies and finally in a long stretch of clinical trials before any drug aimed at CD44 can join the pantheon of type-2 diabetes medications. In any case, for a number of reasons the monoclonal antibody Butte’s team pitted against CD44 is far from perfect for clinical purposes. But refining initial “prototypes” is standard operating procedure for drug developers. So here’s hoping a star is born.

Previously: Newly identified type-2 diabetes gene’s odds of being a false finding equal one in 1 followed by 19 zeroes, Nature/nurture study of type-2 diabetes risk unearths carrots as potential risk reducers and Mining medical discoveries from a mountain of ones and zeroes
Photo by Dan-Scape.co.uk

Big data, Research, Science, Stanford News, Technology

Gamers: The new face of scientific research?

Much has been written about the lack of reproducibility of results claimed by even well-meaning, upright scientists. Notably, a 2005 PLoS paper (by Stanford health-research policy expert John Ioannidis, MD, DSci) with the unforgettable title, “Why Most Published Research Findings Are False”, has been viewed more than a million times.

Who knew that relief could come in the form of hordes of science-naive gamers?

The notion of crowdsourcing difficult scientific problems is no longer breaking news. A few years ago I wrote a story about Stanford biochemist Rhiju Das, PhD, who was using an interactive online videogame called EteRNA he’d co-invented to come up with potential structures for RNA molecules.

RNA is a wiggly wonder. Chemically similar to DNA but infinitely more flexible and mobile, RNA can and does perform all kinds of critical tasks within every living cell. Scientists are steadily discovering more about RNA’s once-undreamed-of versatility. RNA may even have been around before DNA was, making it the precursor that gave rise to all life on our planet.

But EteRNA gamers need know nothing about RNA, or even about biology. They just need to be puzzle-solvers willing to learn and follow the rules of the game. Competing players’ suggested structures for a given variety of RNA molecule are actually tested in Das’s laboratory to see whether they, indeed, stably fold into the predicted structures.

More than 150,000 gamers have registered on EteRNA; at any given moment, there are about 40 active players plugging away at a solution. Several broadly similar games devoted to pursuing biological insights through crowdsourcing  are also up and running.

Das and EteRNA’s co-inventor, Adrien Treuille, PhD, (now at Carnegie Mellon University) think the gaming approach to biology offers some distinct – and to many scientists, perhaps unexpected – advantages over the more-traditional scientific method by which scientists solve problems: form a hypothesis, rigorously test it in your lab under controlled conditions, and keep it all to yourself until you at last submit your methods, data and conclusions to a journal for peer review and, if all goes well, publication.

In this “think piece” article in Trends in Biochemical Sciences,  Treuille and Das write:

Despite an elaborate peer review system, issues such as data manipulation, lack of reproducibility, lack of predictive tests, and cherry-picking among numerous unreported data occur frequently and, in some fields, may be pervasive.

There is an inherent hint of bias, the authors note, in the notion of fitting one’s data to a hypothesis: It’s always tempting to report or emphasize only the data that fit your hypothesis or, conversely, to look at the data you’ve produced and then tailor the “hypothesis” accordingly (thereby presenting a “proof” that may never be independently and rigorously tested experimentally).

Das and Treuille argue that the “open laboratory” nature of online games prevents data manipulation, allows rapid tests of reproducibility, and “requires rigorous adherence to the scientific method: a nontrivial prediction or hypothesis must precede each experiment.”

Das says, “It only recently hit us that EteRNA, despite being a game, is an unusually rigorous way to do science.”

Previously: John Ioannidis discusses the popularity of his paper examining the reliability of scientific research, How a community of online gamers is changing basic biomedical research, Paramecia PacMan: Researchers create video games using living organisms and Mob science: Video game, EteRNA, lets amateurs advance RNA research
Photo by Radly J Phoenix

Applied Biotechnology, Genetics, In the News, Nutrition, Public Health, Research

“Frankenfoods” just like natural counterparts, health-wise (at least if you’re a farm animal)

"Frankenfoods" just like natural counterparts, health-wise (at least if you're a farm animal)

More than a hundred billion farm animals have voted with their feet (or their hoofs, as the case may be). And the returns are in: Genetically modified meals are causing them zero health problems.

Many a word has been spilled in connection with the scientific investigation of crops variously referred to as “transgenic,” “bioengineered,” “genetically engineered” or “genetically modified.” In every case, what’s being referred to is an otherwise ordinary fruit, vegetable, or fiber source into which genetic material from a foreign species has been inserted for the purpose of making that crop, say, sturdier or  more drought- or herbicide- or pest-resistant.

Derided as “Frankenfoods” by critics, these crops have been accused of everything from being responsible for a very real global uptick in allergic diseases to causing cancer and autoimmune disease. But (flying in the face of the first accusation) allergic disorders are also rising in Europe, where genetically modified, or GM, crops’ usage is far less widespread than in North America. It’s the same story with autoimmune disease. And claims of a link between genetically modified crops and tumor formation have been backed by scant if any evidence; one paper making such a claim  got all the way through peer review and received a fair amount of Internet buzz before it was ignominiously retracted last year.

But a huge natural experiment to test GM crops’ safety has been underway for some time. Globally, between 70 and 90 percent of all GM foods are consumed by domesticated animals raised by farmers and ranchers. In the United States alone, more than 95 percent of such animals – close to 10 billion of them – consume feed containing GM components.

This was, of course, not the case before the advent of commercially available GM feeds in the 1990s. And U.S. law has long required scrupulous record-keeping concerning the health of animals grown for food production. This makes possible a before-and-after comparison.

In a just-published article in the Journal of Animal Science, University of California-Davis scientists performed a massive review of data available on performance and health of animals consuming feed containing GM ingredients and  products derived from them. The researchers conclude that there’s no evidence of GM products exerting negative health effects on livestock. From the study’s abstract:

Numerous experimental studies have consistently revealed that the performance and health of GE-fed animals are comparable with those fed [otherwise identical] non-[GM] crop lines. Data on livestock productivity and health were collated from publicly available sources from 1983, before the introduction of [GM] crops in 1996, and subsequently through 2011, a period with high levels of predominately [GM] animal feed. These field data sets representing over 100 billion animals following the introduction of [GM] crops did not reveal unfavorable or perturbed trends in livestock health and productivity. No study has revealed any differences in the nutritional profile of animal products derived from [GM]-fed animals.

In other words, the 100 billion GM-fed animals didn’t get sick any more frequently, or in different ways. No noticeable difference at all.

Should that surprise us? We humans are, in fact, pretty transgenic ourselves. About 5 percent of our own DNA can be traced to viruses that deposited their genes in our genomes, leaving them behind as reminders of the viral visitations. I suppose that’s a great case against cannibalism if you fear GM foods. But I can think of other far more valid arguments to be made along those lines.

Previously: Ask Stanford Medicine: Pediatric immunologist answers your questions about food allergy research, Research shows little evidence that organic foods are more nutritional than conventional ones and Stanford study on the health benefits of organic food: What people are saying
Photo by David B. Gleason

Clinical Trials, Immunology, Pain, Research, Stanford News, Surgery, Technology

Discovery may help predict how many days it will take for individual surgery patients to bounce back

Post-surgery recovery rates, even from identical procedures, vary widely from patient to patient. Some feel better in a week. Others take a month to get back on their feet. And – until now, anyway – nobody has been able to accurately predict how quickly a given surgical patient will start feeling better. Docs don’t know what to tell the patient, and the patient doesn’t know what to tell loved ones or the boss.

Worldwide, hundreds of millions of surgeries are performed every year. Of those, tens of millions are major ones that trigger massive inflammatory reactions in patients’ bodies. As far as your immune system is concerned, there isn’t any difference between a surgical incision and a saber-tooth tiger attack.

In fact, that inflammatory response is a good thing whether the cut came from a surgical scalpel or a tiger’s tooth, because post-wound inflammation is an early component of the healing process. But when that inflammation hangs on for too long, it impedes rather than speeds healing. Timing is everything.

In a study just published in Science Translational Medicine, Stanford researchers under the direction of perioperative specialist Martin Angst, MD, and immunology techno-wizard Garry Nolan, PhD, have identified an “immune signature” common to all 32 patients they monitored before and after those patients had hip-replacement surgery. This may permit reasonable predictions of individual patients’ recovery rates.

In my news release on this study, I wrote:

The Stanford team observed what Angst called “a very well-orchestrated, cell-type- and time-specific pattern of immune response to surgery.” The pattern consisted of a sequence of coordinated rises and falls in numbers of diverse immune-cell types, along with various changes in activity within each cell type.

While this post-surgical signature showed up in every single patient, the magnitude of the various increases and decreases in cell numbers and activity varied from one patient to the next. One particular factor – changes, at one hour versus 24 hours post-surgery, in the activation states of key interacting proteins inside a small set of “first-responder” immune cells – accounted for 40-60 percent of the variation in the timing of these patients’ recovery.

That robust correlation dwarfs those observed in earlier studies of the immune-system/recovery connection – probably because such previous studies have tended to look at, for example, levels of one or another substance or cell type in a blood sample. The new method lets scientists simultaneously score dozens of identifying surface features and goings-on inside cells, one cell at a time.

The Stanford group is now hoping to identify a pre-operation immune signature that predicts the rate of recovery, according to Brice Gaudilliere, MD, PhD, the study’s lead author. That would let physicians and patients know who’d benefit from boosting their immune strength beforehand (there do appear to be some ways to do that), or from pre-surgery interventions such as physical therapy.

Nor is this discovery likely to remain relevant only to planned operations. A better understanding, at the cellular and molecular level, of how immune response drives recovery from wounds may also help emergency clinicians tweak a victim’s immune system after an accident or a saber-tooth tiger attack.

Previously: Targeting stimulation of specific brain cells boosts stroke recovery in mice, A closer look at Stanford study on women and pain and New device identifies immune cells at an unprecedented level of detail, inside and out
Photo by yoppy

Behavioral Science, Evolution, Imaging, Neuroscience, Research, Stanford News, Surgery

In a human brain, knowing a face and naming it are separate worries

Viewed from the outside, the brain’s two hemispheres look like mirror images of one another. But they’re not. For example, two bilateral brain structures called Wernicke’s area and Broca’s area are essential to language processing in the human brain – but although both hemispheres house those structures, only the ones in the left hemisphere do the job (at least in the great majority of right-handers’ brains; with lefties it’s a toss-up).

Now it looks as though that right-left division of labor in our brains applies to face perception, too.

A couple of years ago I wrote and blogged about a startling study by Stanford neuroscientists Josef Parvizi, MD, PhD, and Kalanit Grill-Spector, PhD. The researchers recorded brain activity in epileptic patients who, because their seizures were unresponsive to drug therapy, had undergone a procedure in which a small section of the skull was removed and plastic packets containing electrodes were placed at the surface of the exposed brain. This was done so that, when seizures inevitably occurred, their exact point of origination could be identified. While the patients waited for this to happen, they gave the scientists consent to perform an experiment.

In that experiment, selective electrical stimulation of another structure in the human brain, the fusiform gyrus, instantly caused a distortion in an experimental subject’s perception of Parvizi’s face. So much so, in fact, that the subject exclaimed, “You just turned into somebody else. Your face metamorphosed!”

Like Wernicke’s and Broca’s areas, the fusiform gyrus is found on each side of the brain. In animal species with brains fairly similar to our own, such as monkeys, stimulation of either the left or right fusiform gyrus appears to induce distorted face perception.

Yet, in a new study of ten such patients, conducted by Parvizi and colleagues and published in the Journal of Neuroscience,  face distortion occurred only when the right fusiform gyrus was stimulated. Other behavioral studies and clinical reports on patients suffering brain damage have shown a relative right-brain advantage in face recognition as well as a predominance of right-side brain lesions in patients with prosopagnosia, or face blindness.

Apparently, the left fusiform gyrus’s job description has changed in the course of our species’ evolution. Humans’ acquisition of language over evolutionary time, the Stanford investigators note, required the redirection of some brain regions’ roles toward speech processing. It seems one piece of that co-opted real estate was the left fusiform gyrus. The scientists suggest (and other studies hint) that along with the lateralization of language processing to the brain’s left hemisphere, face-recognition sites in that hemisphere may have been reassigned to new, language-related functions that nonetheless carry a face-processing connection: for example, retrieving the name of a person whose face you’re looking at, leaving the visual perception of that face to the right hemisphere.

My own right fusiform gyrus has been doing a bang-up job all my life and continues to do so. I wish I could say the same for my left side.

Previously: Metamorphosis: At the push of a button, a familiar face becomes a strange one, Mind-reading in real life: Study shows it can be done (but they’ll have to catch you first), We’ve got your number: Exact spot in brain where numeral recognition takes place revealed and Why memory and  math don’t mix: They require opposing states of the same brain circuitry
Photo by AlienGraffiti

Applied Biotechnology, In the News, Infectious Disease, Microbiology, Public Safety

How-to manual for making bioweapons found on captured Islamic State computer

Last week I came across an article, in the usually somewhat staid magazine Foreign Policy, with this subhead:

Buried in a Dell computer captured in Syria are lessons for making bubonic plague bombs and missives on using weapons of mass destruction.

That got my attention. Just months ago, I’d written my own article on bioterrorism for our newspaper, Inside Stanford Medicine. So I was aware that, packaged properly, contagious one-celled pathogens can wipe out as many people as a hydrogen bomb, or more. Not only are bioweapons inexpensive (they’ve been dubbed “the poor man’s nuke”), but the raw materials that go into them – unlike those used for creating nuclear weapons – are all around us. That very ubiquity, were a bioweapon to be deployed, could make fingering the perp tough.

The focal personality in my ISM article, Stanford emergency-medicine doctor and bioterrorism expert Milana Trounce, MD, had already convinced me that producing bioweapons on the cheap – while certainly no slam-dunk – was also not farfetched. “What used to require hundreds of scientists and big labs can now be accomplished in a garage with a few experts and a relatively small amount of funding, using the know-how freely available on the internet,” she’d said.

This passage in the Foreign Policy article rendered that statement scarily apropos:

The information on the laptop makes clear that its owner is a Tunisian national named Muhammed S. who joined ISIS [which now calls itself "Islamic State"] in Syria and who studied chemistry and physics at two universities in Tunisia’s northeast. Even more disturbing is how he planned to use that education: The ISIS laptop contains a 19-page document in Arabic on how to develop biological weapons and how to weaponize the bubonic plague from infected animals.

I sent Trounce a link to the Foreign Policy article. “There’s a big difference between simply having an infectious disease agent and weaponizing it,” she responded in an email. “However, it wouldn’t be particularly difficult to get experts to help with the weaponization process. The terrorist has picked a good infectious agent for creating a bioweapon. Plague is designated as a Category A agent along with anthrax, smallpox, tularemia, botulinum, and viral hemorrhagic fevers. The agents on the Category A list pose the highest risk to national security, because they: 1) can be easily disseminated or transmitted from person to person; 2) result in high mortality rates and have the potential for major public-health impact; 3) might cause public panic and social disruption; and 4) require special action for public-health preparedness.”

Islamic State’s interest in weaponizing bubonic plague should be taken seriously. Here’s one reason why (from my ISM article):

In 1347, the Tatars catapulted the bodies of bubonic-plague victims over the defensive walls of the Crimean Black Sea port city now called Feodosia, then a gateway to the Silk Road trade route. That effort apparently succeeded a bit too well. Some of the city’s residents escaped in sailing ships that, alas, were infested with rats. The rats carried fleas. The fleas carried Yersinia pestis, the bacterial pathogen responsible for bubonic plague. The escapees docked in various Italian ports, from which the disease spread northward over the next three years. Thus ensued the Black Death, a scourge that wiped out nearly a third of western Europe’s population.

Previously: Microbial mushroom cloud: How real is the threat of bioterrorism? (Very) and Stanford bioterrorism expert comments on new review of anthrax case
Photo by Les Haines

Behavioral Science, Chronic Disease, Mental Health, Neuroscience, Research, Stanford News

Can Alzheimer’s damage to the brain be repaired?

In my recent Stanford Medicine article about Alzheimer’s research, called “Rethinking Alzheimer’s,” I chronicled a variety of new approaches by Stanford scientists to nipping Alzheimer’s in the bud by discovering what’s gone wrong at the molecular level long before more obvious symptoms of the disorder emerge.

But Stanford neuroscientist Frank Longo, MD, PhD, a practicing clinician as well as a researcher, has another concern. In my article, I quoted him as saying:

Even if we could stop new Alzheimer’s cases in their tracks, there will always be patients walking in who already have severe symptoms. And I don’t think they should be forgotten.

A study by Longo and his colleagues, which just went into print in the Journal of Alzheimer’s Disease, addresses this concern. Longo has pioneered the development of small-molecule drugs that might be able to restore nerve cells frayed by conditions such as Alzheimer’s.

Nerve cells in distress can often be saved from going down the tubes if they get the right medicine. Fortunately, the brain (like many other organs in the body) makes a number of its own medicines, including ones called growth factors. Unfortunately, these growth factors are so huge that they won’t easily cross the blood-brain barrier. So, the medical/scientific establishment can’t simply synthesize them, stick them into an artery in a patient’s arm and let them migrate to the site of brain injury or degeneration and repair the damage. Plus, growth factors can affect damaged nerve cells in multiple ways, and not always benign ones.

The Longo group’s study showed that – in mice, at least – a growth-factor-mimicking small-molecule drug (at the moment, alluded to merely by the unromantic alphanumeric LM11A-31) could counteract a number of key Alzheimer’s degenerative mechanisms, notably the loss of all-important contacts (called synapses) via which nerve cells transmit signals to one another.

Synapses are the solder joints that wire together the brain’s nerve circuitry. In response to our experience, synapses are constantly springing forth, enlarging and strengthening, diminishing and weakening, and disappearing. They are crucial to memory, thought, learning and daydreaming, not to mention emotion and, for that matter, motion. So their massive loss – which in the case of Alzheimer’s disease is a defining feature – is devastating.

In addition to repairing nerve cells, the compound also appeared to exert a calming effect on angry astrocytes and microglia, two additional kinds of cells in the brain that, when angered, can produce inflammation and tissue damage in that organ. Perhaps most promising of all, LM11A-31 appeared to help the mice remember where things are and what nasty things to avoid.

Previously: Stanford’s brightest lights reveal new insights into early underpinnings of Alzheimer’s, Stanford neuroscientist discusses the coming dementia epidemic and Drug found effective in two mouse models of Huntington’s disease
Photo by Bruce Turner

Aging, Autoimmune Disease, Immunology, Infectious Disease, Research, Stanford News

Our aging immune systems are still in business, but increasingly thrown out of balance

Stanford immunologist Jorg Goronzy, MD, told me a few years ago that a person’s immune response declines slowly but surely starting at around age 40. “While 90 percent of young adults respond to most vaccines, after age 60 that response rate is down to around 40-45 percent,” he said. “With some vaccines, it’s as low as 20 percent.”

A shaky vaccine response isn’t the only immune-system slip-up. With advancing age, we grow increasingly vulnerable to infection (whether or not we’ve been vaccinated), autoimmune disease (an immune attack on our own tissues) and cancer (when a once well-behaved cell metamorphoses into a ceaselessly dividing one).

A new study led by Goronzy and published in Proceedings of the National Academy of Sciences suggests why that may come about. The culprit he and his colleagues have fingered turns out not to be the most likely suspect: the thymus.

This all-important organ’s job is to nurture an army of specialized  immune cells called T cells. (The “T” is for “Thymus.”) T cells are capable of recognizing and mounting an immune response to an unbelievably large number of different molecular shapes, including ones found only on invading pathogens or on our own cells when they morph into incipient tumor cells.

Exactly which feature a given T cell recognizes depends on the structure of a receptor molecule carried in abundance on that T cell’s surface.  Although each T cell sports just one receptor type, in the aggregate the number of different shapes T-cells recognize is gigantic, due to a high rate of reshuffling and mutation in the genes dictating their receptors’ makeup. (Stanford immunologist Mark Davis, PhD, perhaps more than any other single individual,  figured out in the early 1980s how this all works.)
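
Back-of-the-envelope arithmetic shows how that reshuffling multiplies out. The gene-segment counts below are rough textbook ballpark figures, not measurements from the new study:

# Rough illustration of combinatorial T-cell receptor diversity.
# Segment counts are approximate textbook values, not data from the PNAS study.
v_beta, d_beta, j_beta = 52, 2, 13    # beta-chain V, D, J gene segments
v_alpha, j_alpha = 70, 61             # alpha-chain V, J gene segments

beta_chains = v_beta * d_beta * j_beta          # ~1,350 beta-chain combinations
alpha_chains = v_alpha * j_alpha                # ~4,270 alpha-chain combinations
paired_receptors = beta_chains * alpha_chains   # ~5.8 million alpha/beta pairings

print(f"{paired_receptors:,} receptor combinations before junctional editing")
# Random insertion and deletion of nucleotides at the segment junctions
# (the "mutation" in the paragraph above) multiplies this figure
# by several more orders of magnitude.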

T cells don’t live forever, and their generation from scratch completely depends on the thymus. Yet by our early teens the organ, situated in front of the lungs at the midpoint of our chest, starts shriveling up and being replaced by (sigh – you knew this was coming) fat tissue.

After the thymus melts away,  new T-cells come into being only when already-existing ones undergo cell division, for example to compensate for the attrition of their neighbors in one or another immune-system dormitory (such as bone marrow, spleen or a lymph node).

It’s been thought that the immune-system’s capacity to recognize and mount a response to pathogens (or incipient tumors) fades away because with age-related T-cell loss comes a corresponding erosion of diversity:  We just run out of T-cells with the appropriate receptors.

The new study found otherwise.  “Our study shows that the diversity of the human T-cell receptor repertoire is much higher than previously assumed, somewhere in the range of one billion different receptor types,” Goronzy says. “Any age-associated loss in diversity is trivial.” But the study also showed an increasing imbalance, with some subgroups of T cells (characterized by genetically identical  receptors)  hogging the show and other subgroups becoming vanishingly scarce.

The good news is that the players in an immune response are all still there, even in old age. How to restore that lost balance is the question.

Previously: How to amp up an aging immune response, Age-related drop in immune responsiveness may be reversible and Deja vu: Adults’ immune systems “remember” microscopic monsters they’ve never seen before
Photo by Lars Plougmann
