Published by
Stanford Medicine


Big data, Cancer, Cardiovascular Medicine, Fertility, Men's Health, Research, Stanford News

Male infertility can be warning of hypertension, Stanford study finds


A study of more than 9,000 men with fertility problems links poor semen quality to a higher chance of having hypertension and other health conditions. The findings suggest that more-comprehensive examinations of men undergoing treatment for infertility would be a smart idea.

About a quarter of the adults in the United States (and in the entire world) have hypertension, or high blood pressure. Although it’s the most important preventable risk factor for premature death worldwide, hypertension often goes undiagnosed.

In a study published today in Fertility and Sterility, Stanford urologist Mike Eisenberg, MD, PhD, and his colleagues analyzed the medical records of 9,387 men, mostly between 30 and 50 years old, who had provided semen samples in the course of being evaluated at Stanford to determine the cause of their infertility. The researchers found a substantial link between poor semen quality and specific diseases of the circulatory system, notably hypertension, vascular disease and heart disease.

“To the best of my knowledge, there’s never been a study showing this association before,” Eisenberg told me when I interviewed him for a press release about the findings. “There are a lot of men who have hypertension, so understanding that correlation is of huge interest to us.”

In the past few years, Eisenberg has used similar big data techniques to discover links between male infertility and cancer and heightened overall mortality, as well as between childlessness and death rates in married heterosexual men.

Eisenberg sums it all up and proposes a way forward in the release:

Infertility is a warning: Problems with reproduction may mean problems with overall health … That visit to a fertility clinic represents a big opportunity to improve their treatment for other conditions, which we now suspect could actually help resolve the infertility they came in for in the first place.

Previously: Poor semen quality linked to heightened mortality rate in men, Men with kids are at lower risk of dying from cardiovascular disease than their childless counterparts and Low sperm count can mean increased cancer risk
Photo by Grace Hebert

Immunology, Neuroscience, Research, Stanford News

Blocking a receptor on brain’s immune cells counters Alzheimer’s in mice


Attention, nerve cells: It’s not all about you.

As a new study in the Journal of Clinical Investigation led by Stanford neuroscientist Kati Andreasson, MD, shows, blocking the action of a single molecule situated on the surfaces of entirely different brain cells reversed memory loss and a bunch of other Alzheimer’s-like features in experimental mice.

The very term “neuroscience” strongly suggests that nerve cells, a.k.a. neurons, are the Big Enchilada in brain research – and, let’s face it, you wouldn’t want to leave home without them. But they’re far from the entire picture. In fact, neurons account for a mere 10 percent of all the cells in the brain. The mass die-off of nerve cells in the brains of people with Alzheimer’s disease may largely occur because, during the course of aging, another set of key players ensconced in that mysterious organ inside our skull – known collectively as microglia – begin to fall down on the job.

In a release I wrote to explain the study’s findings in lay terms, I described microglia as the brain’s very own, dedicated immune cells:

A microglial cell serves as a front-line sentry, monitoring its surroundings for suspicious activities and materials by probing its local environment. If it spots trouble, it releases substances that recruit other microglia to the scene … Microglia are tough cops, protecting the brain against invading bacteria and viruses by gobbling them up. They are adept at calming things down, too, clamping down on inflammation if it gets out of hand. They also work as garbage collectors, chewing up dead cells and molecular debris strewn among living cells – including clusters of a protein called A-beta, notorious for aggregating into gummy deposits called Alzheimer’s plaques, the disease’s hallmark anatomical feature. … A-beta, produced throughout the body, is as natural as it is ubiquitous. But when it clumps into soluble clusters consisting of a few molecules, it’s highly toxic to nerve cells. These clusters are believed to play a substantial role in causing Alzheimer’s.

“The microglia are supposed to be, from the get-go, constantly clearing A-beta, as well as keeping a lid on inflammation,” Andreasson told me. If their job performance heads downhill – as seems to occur during the aging process – things get out of control. A-beta builds up in the brain, inducing toxic inflammation.

But by blocking the activity of a single molecule – a receptor protein on microglial cells’ surfaces – Andreasson’s team got those microglia back on the job. They resumed chewing up A-beta, quashing runaway neuro-inflammation and squirting out neuron-nurturing chemicals. Bottom line: the Alzheimer’s-prone experimental animals’ IQs (as measured by mousey memory tests) rose dramatically.

Aspirin and similar drugs also tend to shut down the activity of this microglial receptor, which may or may not explain why their use seems to stave off the onset of Alzheimer’s in people who start using them regularly (typically for unrelated reasons) before this memory-stealing syndrome’s symptoms show up. But aspirin et al. do lots of other things, too – some good, some bad. The new findings suggest a compound carefully tailored to block this receptor and do nothing else might be a weapon in the anti-Alzheimer’s arsenal.

Previously: Another big step toward building a better aspirin tablet, Untangling the inflammation/Alzheimer’s connection and Study could lead to new class of stroke drugs
Photo by Henry Markham

Aging, In the News, Neuroscience, Research, Science, Stanford News

Stanford research showing young blood recharges the brains of old mice among finalists for Science Magazine’s Breakthrough of the Year



Stanford research showing that an infusion of young blood recharges the brains of old mice is one of the finalists for Science magazine’s annual contest for People’s Choice for Breakthrough of the Year. Today is the last day to cast your vote. Click here if you’d like to support the work, which could lead to new therapeutic approaches for treating dementia.

Several months ago, I had the pleasure of helping break the news about this great piece of research. So, let’s face it, I take a certain amount of pride in the amount of news coverage it received and the attention it’s getting now.

But the real credit goes to Stanford neuroscientist Tony Wyss-Coray, PhD, along with his able lead author Saul Villeda, PhD, and colleagues. This important discovery by Wyss-Coray’s team revealed that infusing young mice’s blood plasma into the bloodstream of old mice makes those old mice jump up and do the Macarena – and perform a whole lot better on mousey IQ tests.

Infusing blood plasma is hardly a new technique. As Wyss-Coray told me when I interviewed him for my release:

“This could have been done 20 years ago. … You don’t need to know anything about how the brain works. You just give an old mouse young blood and see if the animal is smarter than before. It’s just that nobody did it.”

And after all, isn’t that what breakthroughs are all about? It’s still too early to say, but this simple treatment – or (more likely) drugs based on a better understanding of what factors in blood are responsible for reversing neurological decline –  could someday turn out to have applications for Alzheimer’s disease and much more.

At last count, Wyss-Coray’s research is neck-and-neck with a competing project for first place. If you think, as I do, that a discovery with this much potential deserves a vote of confidence, make sure to take a moment this afternoon to cast your virtual ballot.

Previously: The rechargeable brain: Blood plasma from young mice improves old mice’s memory and learning, Old blood makes young brains act older, and vice versa and Can we reset the aging clock, one cell at a time?
Photo by FutUndBeidl

Genetics, History, Immunology, Research, Science, Stanford News

Knight in lab: In days of yore, postdoc armed with quaint research tools found immunology's Holy Grail


A human has only about 25,000 genes. So, it’s tough to imagine just how our immune systems can manage to recognize potentially billions of differently shaped microbial or tumor-cell body parts. But that’s precisely what our immune systems have to do, and with exquisite precision, in order to stomp invading pathogens and wanna-be cancer cells and leave the rest of our bodies the heck alone.

How do they do it?

Stanford immunologist Mark Davis, PhD, tore the cover off of immunology in the early 1980s by solving that riddle. As I wrote in “The Swashbuckler,” an article in the latest issue of Stanford Medicine, T cells are one of two closely related, closely coordinated workhorse-warrior cell types that deserve much of the credit for the vertebrate immune system’s knack of carefully picking bad guys of various stripes out of the lineup and attacking them:

[Q]uite similar in many respects, B cells and T cells are more like fraternal than identical twins. B cells are specialized to find strange cells and strange substances circulating in the blood and lymph. T cells are geared toward inspecting our own cells for signs of harboring a virus or becoming cancerous. So it’s not surprising that the two cell types differ fundamentally in the ways they recognize their respective targets. B cells’ antibodies recognize the three-dimensional surfaces of molecules. T cells recognize one-dimensional sequences of protein snippets, called peptides, on cell surfaces. All proteins in use in a cell eventually get broken down into peptides, which are transported to the cell surface and displayed in molecular jewel cases that evolution has optimized for efficient inspection by patrolling T cells. Somehow, our inventory of B cells generates antibodies capable of recognizing and binding to a seemingly infinite number of differently shaped biological objects. Likewise, our bodies’ T-cell populations can recognize and respond to a vast range of different peptide sequences.

In the late 1970s, scientists (including then-graduate student Davis, who is now director of Stanford’s Institute for Immunity, Transplantation and Infection) unraveled the genetic quirks behind B cells’ ability to recognize a mind-blowingly diverse set of different pathogens’ and tumor-cells’ characteristic molecular shapes. As a follow-on, Davis and a handful of colleagues – working with what would today be considered the most primitive of molecular-biology tools – isolated the gene underlying the T-cell receptor: an idiosyncratic and very important surface protein that is overwhelmingly responsible for T cells’ recognition of myriad pathogen- and cancer-cell-specific peptide sequences. And they figured out how it works.

The result? (Again from my article:)

With the T-cell receptor gene in hand, scientists can now routinely sort, scrutinize, categorize and utilize T cells to learn about the immune system and work toward improving human health. Without it, they’d be in the position of a person trying to recognize words by the shapes of their constituent letters instead of by phonetics.

Previously: Stanford Medicine magazine traverses the immune system, Best thing since sliced bread? A (potential) new diagnostic for celiac disease, Deja vu: Adults’ immune systems “remember” microscopic monsters they’ve never seen before, Immunology escapes from the mousetrap, Immunology meets infotech and Mice to men: Immunological research vaults into the 21st century
Photo by davidmclaughlin

Aging, Imaging, Ophthalmology, Patient Care, Research, Stanford News

New way to predict advance of age-related macular degeneration


Age-related macular degeneration, in which the macula – the key area of the retina responsible for vision – begins to degenerate, is the leading cause of blindness and central vision loss among adults older than 65. Some 10-15 million Americans suffer from the disease.

If those numbers don’t scare you, try these: “It affects 14%-24% of the U.S. population aged 65-74 years, and 35%-40% of people aged 74 years or more have the disease.” Yow!

Most cases of AMD don’t lead to blindness. But if the disorder progresses to an advanced stage where abnormal blood vessels accumulate underneath the macula and leak blood and fluid, irreversible damage to the macula can quickly ensue if treatment doesn’t arrive right on time.

Timing that treatment just right is a real issue. As I wrote in my recent release about a promising development in this field:

[U]ntil now, there has been no effective way to tell which individuals with AMD are likely to progress to the wet stage. Current treatments are costly and invasive – they typically involve injections of medicines directly into the eyeball – making the notion of treating people with early or intermediate stages of AMD a non-starter. Doctors and patients have to hope the next office visit will be early enough to catch wet AMD at its onset, before it takes too great a toll.

Here’s the good news: A team led by Stanford radiologist and biomedical informatician Daniel Rubin, MD, has found a new way to forecast which patients with age-related macular degeneration are likely to progress to the most debilitating form of the disease – and when.

The advance, chronicled in a study in Investigative Ophthalmology & Visual Science, is a formula – derived from extensive computer analysis of thousands of retinal scans of hundreds of patients’ eyes – that recommends, on a personalized basis, when to schedule an individual patient’s next office visit in order to optimize the prospect of catching AMD progression before it causes blindness.

The formula predicts, with high accuracy, whether and when a patient with mild or intermediate AMD will progress to the dangerous advanced stage. And it does so simply by crunching imaging data that is already commonly collected in eye doctors’ offices anyway.
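For the programmers in the audience, here’s a back-of-the-envelope sketch of the general idea – turning a predicted progression risk into a personalized follow-up interval. To be clear, this is my own invented illustration, not the team’s actual formula (which lives in the paper); the risk numbers and the 5 percent “risk budget” are made up for demonstration.

```python
# Hypothetical sketch: convert a model's estimated per-month risk of
# progressing to wet AMD into a personalized next-visit interval.
# The numbers and threshold are invented for illustration only.

def months_until_next_visit(monthly_risk: float, risk_budget: float = 0.05,
                            max_interval: int = 12) -> int:
    """Schedule the next exam so the cumulative chance of progressing
    before that visit stays below `risk_budget`."""
    survival = 1.0  # probability the patient has NOT yet progressed
    for month in range(1, max_interval + 1):
        survival *= (1.0 - monthly_risk)
        if 1.0 - survival > risk_budget:
            # Come back one month before cumulative risk exceeds the budget.
            return max(1, month - 1)
    return max_interval

# A low-risk patient can safely wait longer between visits than a
# high-risk one, which is the whole point of personalized scheduling.
print(months_until_next_visit(0.005))  # low monthly risk: longer interval
print(months_until_next_visit(0.04))   # high monthly risk: shorter interval
```

The payoff of any scheme like this is exactly what Rubin describes: no new procedures, just smarter use of data already sitting in the chart.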

“Our technique involves no new procedures in the doctor’s office – patients get the same care they’ve been getting anyway,” Rubin told me. His team just tacked on a sophisticated, computerized image-processing step.

Previously: Treating common forms of blindness using tissue generated with ink-jet printing technology, To maintain good eyesight, make healthy vision a priority and Stanford researchers develop web-based tool to streamline interpretation of medical images
Image courtesy of Daniel Rubin

Aging, Chronic Disease, Clinical Trials, Immunology, Research, Stanford News

Is osteoarthritis an inflammatory disorder? New thinking gets clinical test


Osteoarthritis sort of comes with the territory of aging. If you live long enough, you’ll probably get it.

For those fortunate enough not to have a working acquaintance with the disease, I describe its onset in a just-published Stanford Medicine article, “When Bones Collide”:

You start to feel some combination of pain, stiffness and tenderness in a thumb, a knee, a hip, a toe or perhaps your back or neck. It takes root, settles in and, probably, gets worse. And once you’ve got it, it never goes away. Eventually, it can get tough to twist off a bottle cap or to get around, depending on the joint or joints affected.

All too many of us, of course, are perfectly familiar with the symptoms of osteoarthritis. An estimated 27 million people in the United States have been diagnosed with it. By 2030, due mainly to the aging of the population, the number will be more like 50 million. Anything so common is all too easy to look at as inevitable: basically, the result of the same kind of wear and tear on your joints that causes the treads on a commuter car’s set of tires to disappear eventually.

But Stanford rheumatologists Bill Robinson, MD, PhD, and Mark Genovese, MD, think that just may not be the way it works. Almost four years ago I wrote about Robinson’s discovery that osteoarthritis is propelled by a sequence of inflammatory events similar to ones associated with Alzheimer’s disease, cardiovascular disease, and type-2 diabetes. That discovery and a steady stream of follow-up work in his lab have spawned a clinical trial, now underway and led by Genovese, to see if a regimen of anti-inflammatory medicines that’s been shown to roll back osteoarthritis’s progression in mice can do the same thing in people.

That’s the kind of progress most of us could live without.

Previously: New thinking about osteoarthritis, older people’s nemesis and Inflammation, not just wear and tear, spawns arthritis
Illustration by Jeffrey Decoster

Imaging, Immunology, Infectious Disease, Neuroscience, Research, Stanford News

Some headway on chronic fatigue syndrome: Brain abnormalities pinpointed


How can you treat a disease when you don’t know what causes it? Such a mystery disease is chronic fatigue syndrome, which not so long ago was written off by many physicians as a psychiatric phenomenon because they just couldn’t figure out what else might be behind it. No one was even able to identify an anatomical or physiological “signature” of the disorder that could distinguish it from any number of medical lookalikes.

“If you don’t understand the disease, you’re throwing darts blindfolded,” Stanford neuroradiologist Mike Zeineh, MD, PhD, told me about a week ago. Zeineh is working to rip that blindfold from CFS researchers’ eyes.

From a release I wrote about some breaking CFS research by Zeineh and his colleagues:

CFS affects between 1 million and 4 million individuals in the United States and millions more worldwide. Coming up with a more precise number of cases is tough because it’s difficult to actually diagnose the disease. While all CFS patients share a common symptom — crushing, unremitting fatigue that persists for six months or longer — the additional symptoms can vary from one patient to the next, and they often overlap with those of other conditions.

A study just published in Radiology may help to resolve those ambiguities. Comparing brain images of 15 CFS patients with those from 14 age- and sex-matched healthy volunteers with no history of fatigue or other conditions causing similar symptoms, Zeineh and his colleagues found distinct differences between the brains of patients with CFS and those of healthy people.

The 15 patients were chosen from a group of 200 people with CFS whom Stanford infectious-disease expert Jose Montoya, MD, has been following for several years in an effort to identify the syndrome’s underlying mechanisms and speed the search for treatments. (Montoya is a co-author of the new study.)

In particular, the CFS patients’ brains had less overall white matter (cable-like brain infrastructure devoted to carrying signals rather than processing information), aberrant structure in a portion of a white-matter tract called the right arcuate fasciculus, and thickened gray matter (that’s the data-crunching apparatus of the brain) in the two places where the right arcuate fasciculus originates and terminates.

Exactly what all this means is not clear yet, but it’s unlikely to be spurious. Montoya is excited about the discovery. “In addition to potentially providing the CFS-specific diagnostic biomarker we’ve been desperately seeking for decades, these findings hold the promise of identifying the area or areas of the brain where the disease has hijacked the central nervous system,” he told me.

No, not a cure yet. But a well-aimed ray of light that can guide long-befuddled CFS dart-throwers in their quest to score a bullseye.

Previously: Unbroken: A chronic-fatigue patient’s long road to recovery, Deciphering the puzzle of chronic-fatigue syndrome and Unraveling the mystery of chronic-fatigue syndrome
Photo by Kai Schreiber

Immunology, Infectious Disease, Microbiology, Public Health, Research, Stanford News

Paradox: Antibiotics may increase contagion among Salmonella-infected animals


Make no mistake: Antibiotics have worked wonders, increasing human life expectancy as have few other public-health measures (let’s hear it for vaccines, folks). But about 80 percent of all antibiotics used in the United States are given to livestock – chiefly chickens, pigs, and cattle – at low doses, which boosts the animals’ growth rates. A long-raging debate in the public square concerns the possibility that this widespread practice fosters the emergence of antibiotic-resistant bugs.

But a new study led by Stanford bacteriologist Denise Monack, PhD, and just published in Proceedings of the National Academy of Sciences, adds a brand new wrinkle to concerns about the broad administration of antibiotics: the possibility that doing so may, at least sometimes, actually encourage the spread of disease.

Take salmonella, for example. One strain of this bacterial pathogen, S. typhimurium, is responsible for an estimated 1 million cases of food poisoning, 19,000 hospitalizations and nearly 400 deaths annually in the United States. Upon invading the gut, S. typhimurium produces a potent inflammation-inducing endotoxin known as LPS.

Like its sister strain S. typhi (which causes close to 200,000 typhoid-fever deaths worldwide per year), S. typhimurium doesn’t mete out its menace equally. While most infected individuals get very sick, it is the symptom-free few who, by virtue of shedding much higher levels of disease-causing bacteria in their feces, account for the great majority of transmission. (One asymptomatic carrier was the infamous Typhoid Mary, a domestic cook who, early in the 20th century, cheerfully if unknowingly spread her typhoid infection to about 50 others before being forcibly, and tragically, quarantined for much of the rest of her life.)

You might think giving antibiotics to livestock, whence many of our S. typhimurium-induced food-poisoning outbreaks derive, would kill off the bad bug and stop its spread from farm animals to those of us (including me) who eat them. But maybe not.

From our release on the study:

When the scientists gave oral antibiotics to mice infected with Salmonella typhimurium, a bacterial cause of food poisoning, a small minority — so-called “superspreaders” that had been shedding high numbers of salmonella in their feces for weeks — remained healthy; they were unaffected by either the disease or the antibiotic. The rest of the mice got sicker instead of better and, oddly, started shedding like superspreaders. The findings … pose ominous questions about the widespread, routine use of sub-therapeutic doses of antibiotics in livestock.

So, the superspreaders kept on spreading without missing a step, and the others became walking-dead pseudosuperspreaders. A lose-lose scenario all the way around.

“If this holds true for livestock as well – and I think it will – it would have obvious public health implications,” Monack told me. “We need to think about the possibility that we’re not only selecting for antibiotic-resistant microbes, but also impairing the health of our livestock and increasing the spread of contagious pathogens among them and us.”

Previously: Did microbes mess with Typhoid Mary’s macrophages?, Joyride: Brief post-antibiotic sugar spike gives pathogens a lift and What if gut-bacteria communities “remember” past antibiotic exposures?
Photo by Jean-Pierre

Big data, Bioengineering, NIH, Research, Science Policy, Stanford News

$23 million in NIH grants to Stanford for two new big-data-crunching biomedical centers


More than $23 million in grants from the National Institutes of Health – courtesy of the NIH’s Big Data to Knowledge (BD2K) initiative – have launched two Stanford-housed centers of excellence bent on enhancing scientists’ capacity to compare, contrast and combine study results in order to draw more accurate conclusions, develop superior medical therapies and understand human behaviors.

Huge volumes of biomedical data – some of it from carefully controlled laboratory studies, increasing amounts of it in the form of electronic health records, and a building torrent of data from wearable sensors – languish in isolated locations and, even when researchers can get their hands on them, are about as comparable as oranges and orangutans. These gigantic banks of data, all too often, go unused or at least underused.

But maybe not for long. “The proliferation of devices monitoring human activity, including mobile phones and an ever-growing array of wearable sensors, is generating unprecedented quantities of data describing human movement, behaviors and health,” says movement-disorders expert Scott Delp, PhD, director of the new National Center for Mobility Data Integration to Insight, also known as the Mobilize Center. “With the insights gained from subjecting these massive amounts of data to state-of-the-art analytical techniques, we hope to enhance mobility across a broad segment of the population,” Delp told me.

Directing the second grant recipient, the Center for Expanded Data Annotation and Retrieval (or CEDAR), is Stanford’s Mark Musen, MD, PhD, a world-class biomedical-computation authority. As I wrote in an online story:

[CEDAR] will address the need to standardize descriptions of diverse biomedical laboratory studies and create metadata templates for detailing the content and context of those studies. Metadata consists of descriptions of how, when and by whom a particular set of data was collected; what the study was about; how the data are formatted; and what previous or subsequent studies along similar lines have been undertaken.
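To make “metadata” a little less abstract, here’s an invented miniature of what such a record might contain and how completeness could be checked. The field names are mine, not CEDAR’s actual template schema – just a flavor of the kind of standardized description the center is after.

```python
# Invented illustration of a study-metadata record; CEDAR's real templates
# define their own standardized fields and controlled vocabularies.

REQUIRED_FIELDS = {"collected_by", "collection_date", "study_topic",
                   "data_format", "related_studies"}

example_metadata = {
    "collected_by": "Example Lab, Stanford University",
    "collection_date": "2014-09-30",
    "study_topic": "gene expression in type-2 diabetes",
    "data_format": "CSV, one row per sample",
    "related_studies": ["GSE-0000 (hypothetical accession)"],
}

def is_complete(record: dict) -> bool:
    """A record is complete when every required descriptive field is present."""
    return REQUIRED_FIELDS.issubset(record)

print(is_complete(example_metadata))  # True
```

The value of agreeing on required fields up front is that a record from one lab can be compared, merged or searched alongside a record from any other – which is precisely the oranges-and-orangutans problem the center wants to solve.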

The ultimate goal is to concoct a way to translate the banter of oranges and orangutans, artichokes and aardvarks now residing in a global zoo (or is it a garden?) of diverse databases into one big happy family speaking the same universal language, for the benefit of all.

Previously: NIH associate director for data science on the importance of “data to the biomedicine enterprise”, Miniature wireless device aids pain studies and Stanford bioengineers aim to better understand, treat movement disorders

Big data, Chronic Disease, Immunology, Research, Stanford News

Out of hiding: Found lurking in public databases, type-2 diabetes drug passes early test


Way too often, promising-looking basic-research findings – intriguing drug candidates, for example – go swooshing down the memory hole, and you never hear anything about them again. So it’s nice when you see researchers following up on an upbeat early finding with work that moves a potential drug to the next peg in the development process. All the more so when the drug candidate targets a massively prevalent disorder.

Type 2 diabetes affects more than 370 million people worldwide, a mighty big number and a mighty big market for drug companies. (Unlike the much less common type-1 diabetes, where the body’s production of the hormone insulin falters and sugar builds up in the blood instead of being taken up by cells throughout the body, in type-2 diabetes insulin production may be fine but tissues become resistant to insulin.) But while numerous medications are available, none of them decisively halt progression, much less reverse the disease’s course.

About two-and-a-half years ago, Stanford data-mining maven Atul Butte, MD, PhD, combed huge publicly available databases, pooled results from numerous studies and, using big-data statistical methods, fished out a gene that had every possibility of being an important player in type 2 diabetes, but had been totally overlooked. (For more info, see this news release.) Called CD44, this gene is especially active in fat tissue of insulin-resistant people and, Butte’s study showed, had a strong statistical connection to type-2 diabetes.
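If you’re curious what “pooling results from numerous studies” can look like under the hood, here’s one classic textbook approach – Fisher’s method for combining p-values from independent studies. This is my illustration of the general principle, not necessarily the statistical machinery the Butte lab actually used.

```python
# Sketch of Fisher's method for combining p-values across independent
# studies -- one classic way weak signals can add up to strong evidence.
# Illustration only; not necessarily the Butte lab's actual approach.
import math

def fisher_combined_pvalue(pvalues: list[float]) -> float:
    """Combine p-values from k independent studies.

    Under the null, -2 * sum(ln p_i) follows a chi-square distribution
    with 2k degrees of freedom. For even degrees of freedom the survival
    function has a closed form, so no stats library is needed:
    P(chi2_2k > x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half**j / math.factorial(j) for j in range(k))

# Three studies that each only hint at an association, none conclusive
# alone, can yield a combined p-value far smaller than any single one.
print(fisher_combined_pvalue([0.04, 0.03, 0.06]))
```

That’s the essence of why mining many public datasets at once can surface a gene like CD44 that no single study flagged on its own.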

Butte’s study suggested that CD44’s link to type-2 diabetes was not just statistical but causal: In other words, manipulating the protein CD44 codes for might influence the course of the disease. By chance, that protein has already been much studied by immunologists for totally unrelated reasons. The serendipitous result is that a monoclonal antibody that binds to the protein and inhibits its action was already available.

So, Butte and his colleagues used that antibody in tests they performed on lab mice bioengineered to be extremely susceptible to type-2 diabetes, or what passes for it in a mouse. And, it turns out, the CD44-impairing antibody performed comparably to or better than two workhorse diabetes medications (metformin and pioglitazone) in countering several features of type 2 diabetes, including fatty liver, high blood sugar, weight gain and insulin resistance. The results appear in a study published today in the journal Diabetes.

Most exciting of all: In targeting CD44, the monoclonal antibody was working quite differently from any of the established drugs used for type-2 diabetes.

These are still early results, which will have to be replicated and – one hopes – improved on, first in other animal studies and finally in a long stretch of clinical trials before any drug aimed at CD44 can join the pantheon of type-2 diabetes medications. In any case, for a number of reasons the monoclonal antibody Butte’s team pitted against CD44 is far from perfect for clinical purposes. But refining initial “prototypes” is standard operating procedure for drug developers. So here’s hoping a star is born.

Previously: Newly identified type-2 diabetes gene’s odds of being a false finding equal one in 1 followed by 19 zeroes, Nature/nurture study of type-2 diabetes risk unearths carrots as potential risk reducers and Mining medical discoveries from a mountain of ones and zeroes
Photo by Dan-Scape.co.uk
