Genetics, History, Immunology, Research, Science, Stanford News

Knight in lab: In days of yore, postdoc armed with quaint research tools found immunology’s Holy Grail

A human has only about 25,000 genes. So, it’s tough to imagine just how our immune systems can manage to recognize potentially billions of differently shaped microbial or tumor-cell body parts. But that’s precisely what our immune systems have to do, and with exquisite precision, in order to stomp invading pathogens and wanna-be cancer cells and leave the rest of our bodies the heck alone.

How do they do it?

Stanford immunologist Mark Davis, PhD, tore the cover off of immunology in the early 1980s by solving that riddle. As I wrote in  “The Swashbuckler,” an article in the latest issue of Stanford Medicine, T cells are one of two closely related, closely coordinated workhorse-warrior cell types that deserve much of the credit for the vertebrate immune system’s knack of carefully picking bad guys of various stripes out of the lineup and attacking them:

[Q]uite similar in many respects, B cells and T cells are more like fraternal than identical twins. B cells are specialized to find strange cells and strange substances circulating in the blood and lymph. T cells are geared toward inspecting our own cells for signs of harboring a virus or becoming cancerous. So it’s not surprising that the two cell types differ fundamentally in the ways they recognize their respective targets. B cells’ antibodies recognize the three-dimensional surfaces of molecules. T cells recognize one-dimensional sequences of protein snippets, called peptides, on cell surfaces. All proteins in use in a cell eventually get broken down into peptides, which are transported to the cell surface and displayed in molecular jewel cases that evolution has optimized for efficient inspection by patrolling T cells. Somehow, our inventory of B cells generates antibodies capable of recognizing and binding to a seemingly infinite number of differently shaped biological objects. Likewise, our bodies’ T-cell populations can recognize and respond to a vast range of different peptide sequences.

In the late 1970s, scientists (including then-graduate student Davis, who is now director of Stanford’s Institute for Immunity, Transplantation and Infection) unraveled the genetic quirks behind B cells’ ability to recognize a mind-blowingly diverse  set of different pathogens’ and tumor-cells’ characteristic molecular shapes. As a follow-on, Davis and a handful of colleagues – working with what would today be considered the most primitive of molecular-biology tools – isolated the gene underlying the T-cell receptor: an idiosyncratic and very important surface protein that is overwhelmingly responsible for T cells’ recognition of myriad pathogen- and cancer-cell-specific peptide sequences. And they figured out how it works.
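For a rough, back-of-the-envelope sense of how a modest gene count can produce such a vast receptor repertoire, here’s a short sketch of the combinatorial gene-segment shuffling that builds each T-cell receptor. The segment counts are approximate, commonly cited human figures, not numbers taken from Davis’s papers:

```python
# Rough combinatorial arithmetic for T-cell receptor diversity.
# Segment counts are approximate, commonly cited human values;
# treat them as illustrative, not definitive.

beta_V, beta_D, beta_J = 52, 2, 13     # beta-chain gene segments
alpha_V, alpha_J = 70, 61              # alpha-chain gene segments

beta_combos = beta_V * beta_D * beta_J   # ~1,352 possible beta chains
alpha_combos = alpha_V * alpha_J         # ~4,270 possible alpha chains
paired = beta_combos * alpha_combos      # ~5.8 million paired receptors

print(f"beta-chain combinations:  {beta_combos:,}")
print(f"alpha-chain combinations: {alpha_combos:,}")
print(f"paired alpha/beta receptors (before junctional diversity): {paired:,}")
# Imprecise joining at the segment boundaries (junctional diversity) multiplies
# this figure by many orders of magnitude, which is how a ~25,000-gene genome
# can field a receptor repertoire in the billions or more.
```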

The result? (Again from my article:)

With the T-cell receptor gene in hand, scientists can now routinely sort, scrutinize, categorize and utilize T cells to learn about the immune system and work toward improving human health. Without it, they’d be in the position of a person trying to recognize words by the shapes of their constituent letters instead of by phonetics.

Previously: Stanford Medicine magazine traverses the immune system, Best thing since sliced bread? A (potential) new diagnostic for celiac disease, Deja vu: Adults’ immune systems “remember” microscopic monsters they’ve never seen before, Immunology escapes from the mousetrap, Immunology meets infotech and Mice to men: Immunological research vaults into the 21st century
Photo by davidmclaughlin

Aging, Imaging, Ophthalmology, Patient Care, Research, Stanford News

New way to predict advance of age-related macular degeneration

Age-related macular degeneration, in which the macula – the area of the retina responsible for sharp, central vision – begins to degenerate, is the leading cause of blindness and central vision loss among adults older than 65. Some 10-15 million Americans suffer from the disease.

If those numbers don’t scare you, try these: “It affects 14%-24% of the U.S. population aged 65-74 years and 35-40% of people aged 74 years or more have the disease.” Yow!

Most cases of AMD don’t lead to blindness. But if the disorder progresses to an advanced stage where abnormal blood vessels accumulate underneath the macula and leak blood and fluid, irreversible damage to the macula can quickly ensue if treatment doesn’t arrive right on time.

Timing that treatment just right is a real issue. As I wrote in my recent release about a promising development in this field:

[U]ntil now, there has been no effective way to tell which individuals with AMD are likely to progress to the wet stage. Current treatments are costly and invasive – they typically involve injections of medicines directly into the eyeball – making the notion of treating people with early or intermediate stages of AMD a non-starter. Doctors and patients have to hope the next office visit will be early enough to catch wet AMD at its onset, before it takes too great a toll.

Here’s the good news: A team led by Stanford radiologist and biomedical informatician Daniel Rubin, MD, has found a new way to forecast which patients with age-related macular degeneration are likely to progress to the most debilitating form of the disease – and when.

The advance, chronicled in a study in Investigative Ophthalmology & Visual Science, is a formula – derived from extensive computer analysis of thousands of retinal scans of hundreds of patients’ eyes – that recommends, on a personalized basis,  when to schedule an individual patient’s next office visit in order to optimize the prospect of catching AMD progression before it causes blindness.

The formula predicts, with high accuracy, whether and when a patient with mild or intermediate AMD will progress to the dangerous advanced stage. And it does so simply by crunching imaging data that is already routinely collected in eye doctors’ offices.

“Our technique involves no new procedures in the doctor’s office – patients get the same care they’ve been getting anyway,” Rubin told me. His team just tacked on a sophisticated, computerized image-processing step.
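The model itself is specified in the Investigative Ophthalmology & Visual Science paper; purely to illustrate the general idea – estimate a personalized progression risk from routine imaging measurements, then shorten the follow-up interval as risk rises – here is a hypothetical sketch. The feature names, weights and thresholds are invented placeholders, not Rubin’s formula:

```python
import math

def progression_risk(features, weights, intercept):
    """Hypothetical logistic risk of progressing to wet AMD within one year.

    `features` and `weights` are dicts keyed by imaging-derived measurements;
    the names and coefficients here are placeholders, not the published model.
    """
    score = intercept + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-score))

def next_visit_months(risk, max_interval=12, min_interval=1):
    """Shorten the follow-up interval as predicted risk rises."""
    interval = max_interval * (1.0 - risk)
    return max(min_interval, round(interval))

# Illustrative use with made-up measurements from one eye's retinal scan:
features = {"drusen_area_mm2": 2.1, "drusen_max_height_um": 110, "age_years": 74}
weights = {"drusen_area_mm2": 0.8, "drusen_max_height_um": 0.01, "age_years": 0.03}
risk = progression_risk(features, weights, intercept=-6.0)
print(f"estimated 1-year risk: {risk:.2f}, "
      f"suggested follow-up in {next_visit_months(risk)} month(s)")
```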

Previously: Treating common forms of blindness using tissue generated with ink-jet printing technology, To maintain good eyesight, make healthy vision a priority and Stanford researchers develop web-based tool to streamline interpretation of medical images
Image courtesy of Daniel Rubin

Aging, Chronic Disease, Clinical Trials, Immunology, Research, Stanford News

Is osteoarthritis an inflammatory disorder? New thinking gets clinical test

Osteoarthritis sort of comes with the territory of aging. If you live long enough, you’ll probably get it.

For those fortunate enough not to have a working acquaintance with the disease, I describe its onset in a just-published Stanford Medicine article, “When Bones Collide”:

You start to feel some combination of pain, stiffness and tenderness in a thumb, a knee, a hip, a toe or perhaps your back or neck. It takes root, settles in and, probably, gets worse. And once you’ve got it, it never goes away. Eventually, it can get tough to twist off a bottle cap or to get around, depending on the joint or joints affected.

All too many of us, of course, are perfectly familiar with the symptoms of osteoarthritis. An estimated 27 million people in the United States have been diagnosed with it. By 2030, due mainly to the aging of the population, the number will be more like 50 million. Anything so common is all too easy to look at as inevitable: basically, the result of the same kind of wear and tear on your joints that causes the treads on a commuter car’s set of tires to disappear eventually.

But Stanford rheumatologists Bill Robinson, MD, PhD, and Mark Genovese, MD, think that just may not be the way it works. Almost four years ago I wrote about Robinson’s discovery that osteoarthritis is propelled by a sequence of inflammatory events similar to ones associated with Alzheimer’s disease, cardiovascular disease, and type-2 diabetes. That discovery and a steady stream of follow-up work in his lab have spawned a clinical trial, now underway and led by Genovese, to see if a regimen of anti-inflammatory medicines that’s been shown to roll back osteoarthritis’s progression in mice can do the same thing in people.

That’s the kind of progress most of us could live without.

Previously: New thinking about osteoarthritis, older people’s nemesis and Inflammation, not just wear and tear, spawns arthritis
Illustration by Jeffrey Decoster

Imaging, Immunology, Infectious Disease, Neuroscience, Research, Stanford News

Some headway on chronic fatigue syndrome: Brain abnormalities pinpointed

How can you treat a disease when you don’t know what causes it? Such a mystery disease is chronic fatigue syndrome, which not so long ago was written off by many physicians as a psychiatric phenomenon because they just couldn’t figure out what else might be behind it. No one was even able to identify an anatomical or physiological “signature” of the disorder that could distinguish it from any number of medical lookalikes.

“If you don’t understand the disease, you’re throwing darts blindfolded,” Stanford neuroradiologist Mike Zeineh, MD, PhD, told me about a week ago. Zeineh is working to rip that blindfold from CFS researchers’ eyes.

From a release I wrote about some breaking CFS research by Zeineh and his colleagues:

CFS affects between 1 million and 4 million individuals in the United States and millions more worldwide. Coming up with a more precise number of cases is tough because it’s difficult to actually diagnose the disease. While all CFS patients share a common symptom — crushing, unremitting fatigue that persists for six months or longer — the additional symptoms can vary from one patient to the next, and they often overlap with those of other conditions.

A study just published in Radiology may help to resolve those ambiguities. Comparing brain images of 15 CFS patients with those from 14 age- and sex-matched healthy volunteers with no history of fatigue or other conditions causing similar symptoms, Zeineh and his colleagues found distinct differences between the brains of patients with CFS and those of healthy people.

The 15 patients were chosen from a group of 200 people with CFS whom Stanford infectious-disease expert Jose Montoya, MD, has been following for several years in an effort to identify the syndrome’s underlying mechanisms and speed the search for treatments. (Montoya is a co-author of the new study.)

In particular, the CFS patients’ brains had less overall white matter (cable-like brain infrastructure devoted to carrying signals rather than processing information), aberrant structure in a portion of a white-matter tract called the right arcuate fasciculus, and thickened gray matter (that’s the data-crunching apparatus of the brain) in the two places where the right arcuate fasciculus originates and terminates.
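As a concrete (and entirely made-up) illustration of the kind of group comparison behind a finding like this, here’s a minimal sketch comparing a white-matter measurement between a patient group and matched controls; the numbers are fabricated, and the study’s actual image-processing pipeline is the one described in the Radiology paper:

```python
# Toy sketch of a two-group comparison: compare a white-matter measurement
# (here, fractional anisotropy in the right arcuate fasciculus) between CFS
# patients and matched controls. All values below are fabricated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
fa_patients = rng.normal(0.48, 0.03, size=15)   # 15 CFS patients (made up)
fa_controls = rng.normal(0.52, 0.03, size=14)   # 14 matched controls (made up)

t, p = stats.ttest_ind(fa_patients, fa_controls, equal_var=False)
print(f"mean FA patients {fa_patients.mean():.3f} vs controls "
      f"{fa_controls.mean():.3f}, Welch t = {t:.2f}, p = {p:.3f}")
```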

Exactly what all this means is not clear yet, but it’s unlikely to be spurious. Montoya is excited about the discovery. “In addition to potentially providing the CFS-specific diagnostic biomarker we’ve been desperately seeking for decades, these findings hold the promise of identifying the area or areas of the brain where the disease has hijacked the central nervous system,” he told me.

No, not a cure yet. But a well-aimed ray of light that can guide long-befuddled CFS dart-throwers in their quest to score a bullseye.

Previously: Unbroken: A chronic-fatigue patient’s long road to recovery, Deciphering the puzzle of chronic-fatigue syndrome and Unraveling the mystery of chronic-fatigue syndrome
Photo by Kai Schreiber

Immunology, Infectious Disease, Microbiology, Public Health, Research, Stanford News

Paradox: Antibiotics may increase contagion among Salmonella-infected animals

Make no mistake: Antibiotics have worked wonders, increasing human life expectancy as have few other public-health measures (let’s hear it for vaccines, folks). But about 80 percent of all antibiotics used in the United States are given to livestock – chiefly chickens, pigs, and cattle – at low doses, which boosts the animals’ growth rates. A long-raging debate in the public square concerns the possibility that this widespread practice fosters the emergence of antibiotic-resistant bugs.

But a new study led by Stanford bacteriologist Denise Monack, PhD, and just published in Proceedings of the National Academy of Sciences, adds a brand new wrinkle to concerns about the broad administration of antibiotics: the possibility that doing so may, at least  sometimes, actually encourage the spread of disease.

Take salmonella, for example. One strain of this bacterial pathogen, S. typhimurium, is responsible for an estimated 1 million cases of food poisoning, 19,000 hospitalizations and nearly 400 deaths annually in the United States. Upon invading the gut, S. typhimurium produces a potent inflammation-inducing endotoxin known as LPS.

Like its sister strain S. typhi (which causes close to 200,000 typhoid-fever deaths worldwide per year), S. typhimurium doesn’t mete out its menace equally. While most get very sick, it is the symptom-free few who, by virtue of shedding much higher levels of disease-causing bacteria in their feces, account for the great majority of transmission. (One asymptomatic carrier was the infamous Typhoid Mary, a domestic cook who, early in the 20th century, cheerfully if unknowingly spread her typhoid infection to about 50 others before being forcibly, and tragically, quarantined for much of the rest of her life.)

You might think giving antibiotics to livestock, whence many of our S. typhimurium-induced food-poisoning outbreaks derive, would kill off the bad bug and stop its spread from farm animals to those of us (including me) who eat them. But maybe not.

From our release on the study:

When the scientists gave oral antibiotics to mice infected with Salmonella typhimurium, a bacterial cause of food poisoning, a small minority — so-called “superspreaders” that had been shedding high numbers of salmonella in their feces for weeks — remained healthy; they were unaffected by either the disease or the antibiotic. The rest of the mice got sicker instead of better and, oddly, started shedding like superspreaders. The findings … pose ominous questions about the widespread, routine use of sub-therapeutic doses of antibiotics in livestock.

So, the superspreaders kept on spreading without missing a step, and the others became walking-dead pseudosuperspreaders. A lose-lose scenario all the way around.

“If this holds true for livestock as well – and I think it will – it would have obvious public health implications,” Monack told me. “We need to think about the possibility that we’re not only selecting for antibiotic-resistant microbes, but also impairing the health of our livestock and increasing the spread of contagious pathogens among them and us.”

Previously: Did microbes mess with Typhoid Mary’s macrophages?, Joyride: Brief post-antibiotic sugar spike gives pathogens a lift and What if gut-bacteria communities “remember” past antibiotic exposures?
Photo by Jean-Pierre

Big data, Bioengineering, NIH, Research, Science Policy, Stanford News

$23 million in NIH grants to Stanford for two new big-data-crunching biomedical centers

More than $23 million in grants from the National Institutes of Health – courtesy of the NIH’s Big Data to Knowledge (BD2K) initiative – have launched two Stanford-housed centers of excellence bent on enhancing scientists’ capacity to compare, contrast and combine study results in order to draw more accurate conclusions, develop superior medical therapies and understand human behaviors.

Huge volumes of biomedical data – some of it from carefully controlled laboratory studies, increasing amounts of it in the form of electronic health records, and a building torrent of data from wearable sensors – languish in isolated locations and, even when researchers can get their hands on them, are about as comparable as oranges and orangutans. These gigantic banks of data, all too often, go unused or at least underused.

But maybe not for long. “The proliferation of devices monitoring human activity, including mobile phones and an ever-growing array of wearable sensors, is generating unprecedented quantities of data describing human movement, behaviors and health,” says movement-disorders expert Scott Delp, PhD, director of the new National Center for Mobility Data Integration to Insight, also known as the Mobilize Center. “With the insights gained from subjecting these massive amounts of data to  state-of-the-art analytical techniques, we hope to enhance mobility across a broad segment of the population,” Delp told me.

Directing the second grant recipient, the Center for Expanded Data Annotation and Retrieval (or CEDAR), is Stanford’s Mark Musen, MD, PhD, a world-class biomedical-computation authority. As I wrote in an online story:

[CEDAR] will address the need to standardize descriptions of diverse biomedical laboratory studies and create metadata templates for detailing the content and context of those studies. Metadata consists of descriptions of how, when and by whom a particular set of data was collected; what the study was about; how the data are formatted; and what previous or subsequent studies along similar lines have been undertaken.

The ultimate goal is to concoct a way to translate the banter of oranges and orangutans, artichokes and aardvarks now residing in a global zoo (or is it a garden?) of diverse databases into one big happy family speaking the same universal language, for the benefit of all.
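To make “metadata template” a bit more concrete, here’s a hypothetical example of the sort of structured study description CEDAR-style templates aim to standardize; the field names and values are invented for illustration, not CEDAR’s actual vocabulary:

```python
# Hypothetical metadata record for one laboratory study; field names and
# values are illustrative only, not CEDAR's actual template schema.
import json

study_metadata = {
    "title": "Gene expression in insulin-resistant adipose tissue",
    "collected_by": "Example Lab, Example University",
    "collection_date": "2013-06-15",
    "organism": "Homo sapiens",
    "assay_type": "RNA microarray",
    "sample_count": 24,
    "data_format": "raw scanner files plus a normalized expression matrix (TSV)",
    "related_studies": ["GSE00000"],   # placeholder accession number
    "study_description": "Case-control comparison of fat biopsies from "
                         "insulin-resistant vs. insulin-sensitive donors.",
}

print(json.dumps(study_metadata, indent=2))
```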

Previously: NIH associate director for data science on the importance of “data to the biomedicine enterprise”, Miniature wireless device aids pain studies and Stanford bioengineers aim to better understand, treat movement disorders

Big data, Chronic Disease, Immunology, Research, Stanford News

Out of hiding: Found lurking in public databases, type-2 diabetes drug passes early test

Way too often, promising-looking basic-research findings – intriguing drug candidates, for example – go swooshing down the memory hole, and you never hear anything about them again. So it’s nice when you see researchers following up on an upbeat early finding with work that moves a potential drug to the next peg in the development process. All the more so when the drug candidate targets a massively prevalent disorder.

Type 2 diabetes affects more than 370 million people worldwide, a mighty big number and a mighty big market for drug companies. (Unlike the much less common type-1 diabetes, where the body’s production of the hormone insulin falters and sugar builds up in the blood instead of being taken up by cells throughout the body, in type-2 diabetes insulin production may be fine but tissues become resistant to insulin.) But while numerous medications are available, none of them decisively halt progression, much less reverse the disease’s course.

About two-and-a-half years ago, Stanford data-mining maven Atul Butte, MD, PhD, combed huge publicly available databases, pooled results from numerous studies and, using big-data statistical methods, fished out a gene that had every appearance of being an important player in type 2 diabetes, but had been totally overlooked. (For more info, see this news release.) Called CD44, this gene is especially active in fat tissue of insulin-resistant people and, Butte’s study showed, had a strong statistical connection to type-2 diabetes.
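The full pipeline is described in Butte’s papers and the news release; as a loose, hypothetical sketch of the general approach – pool per-study evidence for a gene across public expression datasets, then rank genes by the combined evidence – something like the following could be used. The dataset labels and p-values are invented:

```python
# Toy sketch of pooling evidence for one gene across several public
# expression studies using Fisher's method. The study labels and p-values
# are invented; this is not Butte's published pipeline.
from scipy import stats

# p-values for "gene is differentially expressed in insulin-resistant vs.
# insulin-sensitive fat tissue", one per (hypothetical) public dataset
per_study_p = {"datasetA": 0.004, "datasetB": 0.03,
               "datasetC": 0.0007, "datasetD": 0.12}

chi2, combined_p = stats.combine_pvalues(list(per_study_p.values()),
                                         method="fisher")
print(f"combined evidence across {len(per_study_p)} datasets: p = {combined_p:.2e}")
# Repeating this for every gene and ranking by combined p-value is one way
# an otherwise-overlooked gene such as CD44 could rise to the top of the list.
```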

Butte’s study suggested that CD44’s link to type-2 diabetes was not just statistical but causal: In other words, manipulating the protein CD44 codes for might influence the course of the disease. By chance, that protein has already been much studied by immunologists for totally unrelated reasons. The serendipitous result is that a monoclonal antibody that binds to the protein and inhibits its action was already available.

So, Butte and his colleagues used that antibody in tests they performed on lab mice bioengineered to be extremely susceptible to type-2 diabetes, or what passes for it in a mouse. And, it turns out, the CD44-impairing antibody performed comparably to or better than two workhorse diabetes medications (metformin and pioglitazone) in countering several features of type 2 diabetes, including fatty liver, high blood sugar, weight gain and insulin resistance. The results appear in a study published today in the journal Diabetes.

Most exciting of all: In targeting CD44, the monoclonal antibody was working quite differently from any of the established drugs used for type-2 diabetes.

These are still early results, which will have to be replicated and – one hopes – improved on, first in other animal studies and finally in a long stretch of clinical trials before any drug aimed at CD44 can join the pantheon of type-2 diabetes medications. In any case, for a number of reasons the monoclonal antibody Butte’s team pitted against CD44 is far from perfect for clinical purposes. But refining initial “prototypes” is standard operating procedure for drug developers. So here’s hoping a star is born.

Previously: Newly identified type-2 diabetes gene’s odds of being a false finding equal one in 1 followed by 19 zeroes, Nature/nurture study of type-2 diabetes risk unearths carrots as potential risk reducers and Mining medical discoveries from a mountain of ones and zeroes
Photo by Dan-Scape.co.uk

Big data, Research, Science, Stanford News, Technology

Gamers: The new face of scientific research?

Much has been written about the lack of reproducibility of results claimed by even well-meaning, upright scientists. Notably, a 2005 PLoS Medicine paper (by Stanford health-research policy expert John Ioannidis, MD, DSc) with the unforgettable title, “Why Most Published Research Findings Are False”, has been viewed more than a million times.

Who knew that relief could come in the form of hordes of science-naive gamers?

The notion of crowdsourcing difficult scientific problems is no longer breaking news. A few years ago I wrote a story about Stanford biochemist Rhiju Das, PhD, who was using an interactive online videogame called EteRNA he’d co-invented to come up with potential structures for RNA molecules.

RNA is a wiggly wonder. Chemically similar to DNA but infinitely more flexible and mobile, RNA can and does perform all kinds of critical tasks within every living cell. Scientists are steadily discovering more about RNA’s once-undreamed-of versatility. RNA may even have been around before DNA was, making it the precursor that gave rise to all life on our planet.

But EteRNA gamers need know nothing about RNA, or even about biology. They just need to be puzzle-solvers willing to learn and follow the rules of the game. Competing players’ suggested structures for a given variety of RNA molecule are actually tested in Das’s laboratory to see whether they, indeed, stably fold into the predicted structures.
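As a toy illustration of what “testing a predicted structure” can look like – scoring a player’s proposed secondary structure against per-nucleotide chemical-mapping data, where paired bases should react weakly and unpaired bases strongly – here’s a minimal sketch. The structure and reactivity values are invented, and EteRNA’s real scoring is considerably more sophisticated:

```python
# Hypothetical scoring of a predicted RNA secondary structure against
# chemical-mapping reactivities: paired positions (brackets) should show
# low reactivity, unpaired positions (dots) high. All values are invented.
def structure_score(dot_bracket, reactivities, threshold=0.5):
    agree = 0
    for symbol, r in zip(dot_bracket, reactivities):
        paired = symbol in "()"
        if (paired and r < threshold) or (not paired and r >= threshold):
            agree += 1
    return agree / len(dot_bracket)

predicted  = "(((....)))"
reactivity = [0.1, 0.2, 0.1, 0.9, 0.8, 0.7, 0.9, 0.2, 0.1, 0.3]
print(f"agreement with chemical mapping: {structure_score(predicted, reactivity):.0%}")
```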

More than 150,000 gamers have registered on EteRNA; at any given moment, there are about 40 active players plugging away at a solution. Several broadly similar games devoted to pursuing biological insights through crowdsourcing  are also up and running.

Das and EteRNA’s co-inventor, Adrien Treuille, PhD, (now at Carnegie Mellon University) think the gaming approach to biology offers some distinct – and to many scientists, perhaps unexpected – advantages over the more-traditional scientific method by which scientists solve problems: form a hypothesis, rigorously test it in your lab under controlled conditions, and keep it all to yourself until you at last submit your methods, data and conclusions to a journal for peer review and, if all goes well, publication.

In this “think piece” article in Trends in Biochemical Sciences,  Treuille and Das write:

Despite an elaborate peer review system, issues such as data manipulation, lack of reproducibility, lack of predictive tests, and cherry-picking among numerous unreported data occur frequently and, in some fields, may be pervasive.

There is an inherent hint of bias, the authors note, in the notion of fitting one’s data to a hypothesis: It’s always tempting to report or emphasize only data that fits your hypothesis or, conversely, look at the data you’ve produced and then tailor the “hypothesis” accordingly (thereby presenting a “proof” that may never be independently and rigorously tested experimentally).

Das and Treuille argue that the “open laboratory” nature of online games prevents data manipulation, allows rapid tests of reproducibility, and “requires rigorous adherence to the scientific method: a nontrivial prediction or hypothesis must precede each experiment.”

Das says, “It only recently hit us that EteRNA, despite being a game, is an unusually rigorous way to do science.”

Previously: John Ioannidis discusses the popularity of his paper examining the reliability of scientific research, How a community of online gamers is changing basic biomedical research, Paramecia PacMan: Researchers create video games using living organisms and Mob science: Video game, EteRNA, lets amateurs advance RNA research
Photo by Radly J Phoenix

Applied Biotechnology, Genetics, In the News, Nutrition, Public Health, Research

“Frankenfoods” just like natural counterparts, health-wise (at least if you’re a farm animal)

More than a hundred billion farm animals have voted with their feet (or their hoofs, as the case may be). And the returns are in: Genetically modified meals are causing them zero health problems.

Many a word has been spilled in connection with the scientific investigation of crops variously referred to as “transgenic,” “bioengineered,” “genetically engineered” or “genetically modified.” In every case, what’s being referred to is an otherwise ordinary fruit, vegetable, or fiber source into which genetic material from a foreign species has been inserted for the purpose of making that crop, say, sturdier or  more drought- or herbicide- or pest-resistant.

Derided as “Frankenfoods” by critics, these crops have been accused of everything from being responsible for a very real global uptick in allergic diseases to causing cancer and autoimmune disease. But (flying in the face of the first accusation) allergic disorders are also rising in Europe, where genetically modified, or GM, crops’ usage is far less widespread than in North America. It’s the same story with autoimmune disease. And claims of a link between genetically modified crops and tumor formation have been backed by scant if any evidence; one paper making such a claim  got all the way through peer review and received a fair amount of Internet buzz before it was ignominiously retracted last year.

But a huge natural experiment to test GM crops’ safety has been underway for some time. Globally, between 70 and 90 percent of all GM foods are consumed by domesticated animals grown by farmers and ranchers. More than 95 percent of such animals – close to 10 billion of them – in the United States alone consume feed containing GM  components.

This was, of course, not the case before the advent of commercially available GM feeds in the 1990s. And U.S. law has long required scrupulous record-keeping concerning the health of animals grown for food production. This makes possible a before-and-after comparison.

In a just-published article in the Journal of Animal Science, University of California-Davis scientists performed a massive review of data available on performance and health of animals consuming feed containing GM ingredients and  products derived from them. The researchers conclude that there’s no evidence of GM products exerting negative health effects on livestock. From the study’s abstract:

Numerous experimental studies have consistently revealed that the performance and health of GE-fed animals are comparable with those fed [otherwise identical] non-[GM] crop lines. Data on livestock productivity and health were collated from publicly available sources from 1983, before the introduction of [GM] crops in 1996, and subsequently through 2011, a period with high levels of predominately [GM] animal feed. These field data sets representing over 100 billion animals following the introduction of [GM] crops did not reveal unfavorable or perturbed trends in livestock health and productivity. No study has revealed any differences in the nutritional profile of animal products derived from [GM]-fed animals.

In other words, the 100 billion GM-fed animals didn’t get sick any more frequently, or in different ways. No noticeable difference at all.
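For what a before-and-after comparison of this sort looks like in miniature, here’s a toy sketch with fabricated numbers; the actual review drew on decades of field records covering billions of animals:

```python
# Toy before/after comparison in the spirit of the review: compare a
# livestock health indicator (an invented "condemnation rate" per 100,000
# broilers) across years before and after GM feed became common in 1996.
# All numbers are fabricated.
pre_1996  = {1990: 42, 1991: 41, 1992: 43, 1993: 40, 1994: 41, 1995: 42}
post_1996 = {2000: 40, 2003: 39, 2006: 41, 2009: 38, 2011: 39}

mean_pre  = sum(pre_1996.values()) / len(pre_1996)
mean_post = sum(post_1996.values()) / len(post_1996)
print(f"mean rate before GM feed: {mean_pre:.1f}, after: {mean_post:.1f}")
# A flat or improving trend after 1996, which is what the review reports in
# the real data, is the opposite of what a genuine GM-feed hazard would predict.
```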

Should that surprise us? We humans are, in fact, pretty transgenic ourselves. About 5 percent of our own DNA can be traced to viruses that deposited their genes in our genomes, leaving them behind as reminders of the viral visitations. I suppose that’s a great case against cannibalism if you fear GM foods. But I can think of other far more valid arguments to be made along those lines.

Previously: Ask Stanford Medicine: Pediatric immunologist answers your questions about food allergy research, Research shows little evidence that organic foods are more nutritional than conventional ones and Stanford study on the health benefits of organic food: What people are saying
Photo by David B. Gleason

Clinical Trials, Immunology, Pain, Research, Stanford News, Surgery, Technology

Discovery may help predict how many days it will take for individual surgery patients to bounce back

Post-surgery recovery rates, even from identical procedures, vary widely from patient to patient. Some feel better in a week. Others take a month to get back on their feet. And – until now, anyway – nobody has been able to accurately predict how quickly a given surgical patient will start feeling better. Docs don’t know what to tell the patient, and the patient doesn’t know what to tell loved ones or the boss.

Worldwide, hundreds of millions of surgeries are performed every year. Of those, tens of millions are major ones that trigger massive inflammatory reactions in patients’ bodies. As far as your immune system is concerned, there isn’t any difference between a surgical incision and a saber-tooth tiger attack.

In fact, that inflammatory response is a good thing whether the cut came from a surgical scalpel or a tiger’s tooth, because post-wound inflammation is an early component of the healing process. But when that inflammation hangs on for too long, it impedes rather than speeds healing. Timing is everything.

In a study just published in Science Translational Medicine, Stanford researchers under the direction of perioperative specialist Martin Angst, MD, and immunology techno-wizard Garry Nolan, PhD, have identified an “immune signature” common to all 32 patients they monitored before and after those patients had hip-replacement surgery. This may permit reasonable predictions of individual patients’ recovery rates.

In my news release on this study, I wrote:

The Stanford team observed what Angst called “a very well-orchestrated, cell-type- and time-specific pattern of immune response to surgery.” The pattern consisted of a sequence of coordinated rises and falls in numbers of diverse immune-cell types, along with various changes in activity within each cell type.

While this post-surgical signature showed up in every single patient, the magnitude of the various increases and decreases in cell numbers and activity varied from one patient to the next. One particular factor – changes, at one hour versus 24 hours post-surgery, in the activation states of key interacting proteins inside a small set of “first-responder” immune cells – accounted for 40-60 percent of the variation in the timing of these patients’ recovery.

That robust correlation dwarfs those observed in earlier studies of the immune-system/recovery connection – probably because such previous studies have tended to look at, for example, levels of one or another substance or cell type in a blood sample. The new method lets scientists simultaneously score dozens of identifying surface features and goings-on inside cells, one cell at a time.
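To picture what “accounted for 40-60 percent of the variation” means in practice, here’s a minimal sketch that regresses recovery time on a single (fabricated) 1-hour-versus-24-hour signaling change and reports the variance explained; the study’s actual analysis used mass-cytometry readouts from specific immune-cell subsets:

```python
# Minimal sketch of the kind of correlation reported: regress each patient's
# recovery time on the change in an intracellular signaling readout measured
# at 1 hour vs. 24 hours after surgery, and ask how much variance it explains.
# All numbers are fabricated.
import numpy as np

rng = np.random.default_rng(1)
n_patients = 32
signal_change = rng.normal(0.0, 1.0, n_patients)              # 1h-vs-24h activation change
recovery_days = 20 - 4 * signal_change + rng.normal(0, 3, n_patients)

slope, intercept = np.polyfit(signal_change, recovery_days, 1)
predicted = slope * signal_change + intercept
r_squared = 1 - np.sum((recovery_days - predicted) ** 2) / np.sum(
    (recovery_days - recovery_days.mean()) ** 2)
print(f"variance in recovery time explained: {r_squared:.0%}")
```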

The Stanford group is now hoping to identify a pre-operation immune signature that predicts the rate of recovery, according to Brice Gaudilliere, MD, PhD, the study’s lead author. That would let physicians and patients know who’d benefit from boosting their immune strength beforehand (there do appear to be some ways to do that), or from pre-surgery interventions such as physical therapy.

This discovery isn’t going to remain relevant only to planned operations. A better understanding, at the cellular and molecular level, of how immune response drives recovery from wounds may also help emergency clinicians tweak a victim’s immune system after an accident or a saber-tooth tiger attack.

Previously: Targeting stimulation of specific brain cells boosts stroke recovery in mice, A closer look at Stanford study on women and pain and New device identifies immune cells at an unprecedented level of detail, inside and out
Photo by yoppy
