Published by Stanford Medicine


Big data, Genetics, Precision health, Research

Individuals’ medical histories predicted by non-coding DNA in Stanford study

As whole-genome sequencing gains ground, researchers and clinicians are struggling with how best to interpret the results to improve patient care. After all, three billion base pairs are a lot to sift through, even with powerful computers. Now genomicist Gill Bejerano, PhD, and research associate Harendra Guturu, PhD, have published in PLoS Computational Biology the results of a study showing that computer algorithms and tools previously developed in the Bejerano lab (including one I’ve previously written about here called GREAT) can help researchers home in on important regulatory regions and predict which are likely to contribute to disease.

When they tried their technique on five people who agreed to publicly share their genome sequences and medical histories, they found it to be surprisingly prescient. From our release:

Using this approach to study the genomes of the five individuals, Guturu, Bejerano and their colleagues found that one of the individuals who had a family history of sudden cardiac death had a surprising accumulation of variants associated with “abnormal cardiac output”; another with hypertension had variants likely to affect genes involved in circulating sodium levels; and another with narcolepsy had variants affecting parasympathetic nervous system development. In all five cases, GREAT reported results that jibed with what was known about that individual’s self-reported medical history, and that were rarely seen in the more than 1,000 other genomes used as controls.
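
For the statistically curious, the core of a finding like this is an enrichment test: does one genome carry more variants in regions tied to a phenotype term than you'd expect given a pool of control genomes? Here's a minimal Python sketch of that idea using Fisher's exact test; the counts are invented, and this illustrates the logic rather than the lab's GREAT pipeline.

```python
# A toy enrichment test, not the published GREAT pipeline: does one genome
# carry more variants in regulatory regions annotated to a phenotype term
# (say, "abnormal cardiac output") than expected from control genomes?
# All counts below are invented for illustration.
from scipy.stats import fisher_exact

case_hits, case_total = 14, 250              # term-linked variants / all conserved regulatory variants (one genome)
control_hits, control_total = 3200, 260_000  # pooled counts across ~1,000 control genomes (hypothetical)

table = [
    [case_hits, case_total - case_hits],
    [control_hits, control_total - control_hits],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.3g}")
```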

Bejerano and Guturu focused on a subset of regulatory regions that control gene expression. As I explained:

The researchers focused their analyses on a relatively small proportion of each person’s genome — the sequences of regulatory regions that have been faithfully conserved among many species over millions of years of evolution. Proteins called transcription factors bind to regulatory regions to control when, where and how genes are expressed. Some regulatory regions have evolved to generate species-specific differences — for example, mutating in a way that changes the expression of a gene involved in foot anatomy in humans — while other regions have stayed mostly the same for millennia. […]

All of us have some natural variation in our genome, accumulated through botched DNA replication, chemical mutation and simple errors that arise when each cell tries to successfully copy 3 billion nucleotides prior to each cell division. When these errors occur in our sperm or egg cells, they are passed to our children and perhaps grandchildren. These variations, called polymorphisms, are usually, but not always, harmless.

Continue Reading »

Behavioral Science, Big data, Neuroscience, Research, Stanford News

What were you just looking at? Oh, wait, never mind – your brain’s signaling pattern just told me

I’ve blogged previously (here, here and here) about scientific developments that could be construed, to some degree, as advancing the art of mind-reading.

And now, brain scientists have devised an algorithm that spontaneously decodes human conscious thought at the speed of experience.

Well, let me qualify that a bit: In an experimental study published in PLOS Computational Biology, an algorithm assessing real-time streams of brain-activity data was able to tell with a very high rate of accuracy whether, less than half a second earlier, a person had been looking at an image of a house, an image of a face or neither.

Stanford neurosurgical resident Kai Miller, MD, PhD, along with colleagues at Stanford, the University of Washington and the Wadsworth Institute in Albany, NY, got these results by working with seven volunteer patients who had recurring epileptic seizures. These volunteers’ brain surfaces had already been temporarily (and, let us emphasize, painlessly) exposed, and electrode grids and strips had been placed over various areas of their brain surfaces. This was part of an exacting medical procedure performed so that their cerebral activity could be meticulously monitored in an effort to locate the seizures’ precise points of origin within each patient’s brain.

In the study, the volunteers were shown images (flashed on a monitor stationed near their bedside) of houses, faces or nothing at all. From all those electrodes emanated two separate streams of data – one recording synchronized brain-cell activity, and another recording statistically random brain-cell activity – which the algorithm, designed by the researchers, combined and parsed.

The result: The algorithm could predict whether the subject had been viewing a face, a house or neither at any given moment. Specifically, the researchers were able to ascertain whether a “house” or “face” image or no image at all had been presented to an experimental subject roughly 400 milliseconds earlier (that’s the time it takes the brain to process the image), plus or minus 20 milliseconds. The algorithm correctly identified 96 percent of all images shown in the experiment, and it made very few bad guesses: only about one in 25 of its calls was wrong.
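
Under the hood, decoding of this sort amounts to training a classifier on labeled snippets of multi-channel brain-surface signal. The sketch below is a toy version with simulated data and an off-the-shelf linear discriminant classifier; it illustrates the general recipe, not the authors' actual algorithm, which worked on the two real data streams described above.

```python
# Minimal sketch of decoding "face" / "house" / "nothing" from short windows
# of multi-channel brain-surface signal. Everything here is simulated; the
# published algorithm combined two real feature streams from the electrode
# recordings, which this toy example does not attempt to reproduce.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels = 300, 40
labels = rng.integers(0, 3, size=n_trials)        # 0 = nothing, 1 = house, 2 = face

# Each class nudges the mean response of the channels in its own direction.
class_means = rng.normal(0, 1, size=(3, n_channels))
features = class_means[labels] + rng.normal(0, 2, size=(n_trials, n_channels))

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, features, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```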

“Although this particular experiment involved only a limited set of image types, we hope the technique will someday contribute to the care of patients who’ve suffered neurological injury,” Miller told me.

Admittedly, that kind of guesswork gets tougher as you add more viewing possibilities – for instance, “tool” or “animal” images. So this is still what scientists call an “early days” finding: We’re not exactly at the point where, come the day after tomorrow, you’re walking down the street, you randomly daydream about a fish for an eighth of a second, and suddenly a giant billboard in front of you starts flashing an ad for smoked salmon.

Not yet.

Previously: Mind-reading in real life: Study shows it can be done (but they’ll have to catch you first), A one-minute mind-reading machine? Brain-scan results distinguish mental states and From phrenology to neuroimaging: New finding bolsters theory about how brain operates
Photo by Kai Miller, Stanford University

Big data, Cancer, Genetics, Precision health, Research, Stanford News, Stem Cells

Stem-cell knowledge may help outcomes for colon-cancer patients, says Stanford study

Pinpointing which colon cancer patients need chemotherapy in addition to surgery can be difficult. Studies have suggested that those with stage-2 disease aren’t likely to benefit from chemotherapy, so doctors may choose to bypass the treatment and its toxic side effects.

Now cancer biologist Michael Clarke, MD, working with former postdoctoral scholars Piero Dalerba, MD, and Debashis Sahoo, PhD, has found a way to identify a small but significant minority of stage-2 patients who differ from their peers: They have a poorer overall prognosis, but they are also more likely than other stage-2 patients to benefit from additional chemotherapy. The research was published today in the New England Journal of Medicine.

This research is one of the first examples of how we can use our growing knowledge of stem cell biology to improve patient outcomes.

From our press release:

Clarke and his colleagues have been studying the connection between stem cells and cancer for several years. For this study, Dalerba and Sahoo sought to devise a way to identify colon cancers that were more stem-cell-like, and thus likely to be more aggressive. They looked for a gene that was expressed in more mature cells but not in stem or progenitor cells. They did this by using a novel bioinformatics approach that drew on their knowledge of stem cell biology to identify developmentally regulated genes important in colon tissue maturation.

Because they knew from previous research by Dalerba in the Clarke laboratory that stem and immature colon cells express a protein called ALCAM, Dalerba and Sahoo looked for genes whose protein product was negatively correlated with ALCAM expression. “We reasoned that those proteins would likely be involved in the maturation of colon tissue and might not be found in more aggressive, immature cancers,” Sahoo said.

Finally, to ensure their results would be useful to doctors, the researchers added another criterion: The gene had to make a protein that was easily detectable by an existing, clinical-grade test.
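
To make that screening logic concrete, here's a small hypothetical sketch: rank genes by how negatively their expression correlates with ALCAM across tumor samples, then keep only candidates that an existing clinical-grade test could detect. The expression values, gene names and assay list are all made up; this shows the idea, not the study's actual analysis.

```python
# Toy version of the screen described above: rank genes by how negatively
# their expression correlates with ALCAM across tumor samples, then keep
# only candidates on a (hypothetical) list of proteins measurable with an
# existing clinical-grade assay. All data and gene names are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
samples = [f"tumor_{i}" for i in range(60)]
expr = pd.DataFrame(
    rng.normal(size=(60, 4)),
    index=samples,
    columns=["ALCAM", "GENE_A", "GENE_B", "GENE_C"],
)
# Make GENE_A artificially anti-correlated with ALCAM so the screen finds it.
expr["GENE_A"] = -0.8 * expr["ALCAM"] + rng.normal(scale=0.3, size=60)

correlations = expr.drop(columns="ALCAM").corrwith(expr["ALCAM"])
clinically_assayable = {"GENE_A"}          # hypothetical clinical-test list

candidates = correlations[correlations < -0.5]
candidates = candidates[candidates.index.isin(clinically_assayable)]
print(candidates.sort_values())
```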

Continue Reading »

Big data, Cardiovascular Medicine, Health and Fitness, Obesity, Research

High BMI and low fitness linked with higher hypertension risk

Unfit adolescents who have a high body mass index are more likely than their peers to suffer from hypertension later in life, according to a new study from researchers at Stanford and Lund University in Sweden.

The paper, the first to discover this connection, was published today in JAMA Internal Medicine.

Lead author Casey Crump, MD, PhD, who recently left Stanford to join the Mount Sinai School of Medicine in New York, and his colleagues tapped a unique data source to uncover the relationship: the Swedish military. Sweden formerly required all males to join the military at age 18, and Crump and his team examined fitness and health records from more than 1.5 million military conscripts between 1969 and 1997. Thanks to the Swedish national health-care system, they were also able to obtain follow-up information showing whether and when these men were later diagnosed with hypertension.

I exchanged emails about the study with Crump, who is vice chair for research in the Department of Family Medicine and Community Health; below is our conversation.

Why did you decide to look at this?

Low physical fitness and obesity are very common, modifiable, and have an enormous public health impact.

What is the primary lesson from this work?

We found that both overweight/obesity and low aerobic fitness at age 18 were linked with higher long-term risk of hypertension in adulthood. Importantly, low aerobic fitness was a strong risk factor for hypertension even among those with normal body mass index (BMI). These findings suggest that interventions to prevent hypertension should begin early in life and include not only weight control but also aerobic fitness, even among persons with normal BMI.
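
Because the outcome here is a diagnosis that may arrive decades after the baseline measurements, this kind of question is usually handled with time-to-event (survival) modeling. Purely as an illustration, here's how one might fit a proportional-hazards model to simulated data of that shape using the Python lifelines library; the variables and effect sizes are invented and this is not the authors' analysis.

```python
# Purely illustrative sketch: fit a proportional-hazards model relating BMI
# and aerobic fitness measured at age 18 to time until a hypertension
# diagnosis. Data, variables and effect sizes are simulated; this uses the
# `lifelines` library and is not the published analysis.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 5000
bmi = rng.normal(23, 3.5, n)
fitness = rng.normal(0, 1, n)                  # standardized aerobic-fitness score

# Simulate follow-up: higher BMI and lower fitness shorten time to diagnosis.
baseline = rng.exponential(scale=40, size=n)
time_to_dx = baseline * np.exp(-0.05 * (bmi - 23) + 0.2 * fitness)
followup_years = np.minimum(time_to_dx, 30)    # administrative censoring at 30 years
diagnosed = (time_to_dx <= 30).astype(int)

df = pd.DataFrame({"bmi": bmi, "fitness": fitness,
                   "years": followup_years, "hypertension": diagnosed})
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="hypertension")
cph.print_summary()
```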

Continue Reading »

Big data, Genetics, Precision health, Research

Precision health in action — The hunt for families with a high-cholesterol disorder

If I don’t know I have a genetic disease, I’m not very likely to seek treatment or change my lifestyle. This lack of knowledge, obviously, leaves me medically vulnerable.

To find people who have one such disease — familial hypercholesterolemia (FH), a condition that causes high levels of LDL cholesterol — biomedical data specialist Nigam Shah, MBBS, PhD, and cardiologist Joshua Knowles, MD, PhD, are applying the powers of big data. Their work has been called a prime example of precision health.

A recent feature by FiveThirtyEight explains their work:

They started by identifying about 120 people known to have FH (true positives) from Stanford’s network of hospitals and doctors’ offices, and some people with high LDL who don’t have the genetic disorder (true negatives). Shah then began to train a computer to spot people with FH by letting it look through those patients’ files and identify patterns in things like cholesterol levels, age, and the medicine patients were prescribed. The researchers then deployed this algorithm to look for undiagnosed FH within Stanford’s health records.
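
In code, that training step looks roughly like fitting any supervised classifier to labeled examples. The sketch below is hypothetical: simulated patients, invented feature names loosely inspired by the ones mentioned (cholesterol levels, age, prescriptions) and a generic random forest. It shows the workflow, not the Stanford team's actual model.

```python
# Hypothetical sketch of the training step: learn to flag likely FH patients
# from a small labeled set (known FH cases vs. high-LDL non-FH controls)
# using simple EHR-style features. Simulated data, invented feature names
# and a generic random forest; not the Stanford team's actual model.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_fh, n_ctrl = 120, 400
df = pd.DataFrame({
    "max_ldl":        np.concatenate([rng.normal(240, 30, n_fh), rng.normal(195, 25, n_ctrl)]),
    "age_at_max_ldl": np.concatenate([rng.normal(35, 12, n_fh), rng.normal(55, 10, n_ctrl)]),
    "on_statin":      np.concatenate([rng.binomial(1, 0.8, n_fh), rng.binomial(1, 0.5, n_ctrl)]),
    "has_fh":         np.concatenate([np.ones(n_fh), np.zeros(n_ctrl)]),
})

features = df.drop(columns="has_fh")
model = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(model, features, df["has_fh"], cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```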

Using medical billing and lab data, the FH Foundation — Knowles is its chief medical officer — has developed a map to highlight the frequency of FH cases in the United States. Though imprecise, the map is intriguing, showing the condition is clustered on the East Coast, with a few notable exceptions such as a dense patch in eastern Oregon.

These efforts could improve current screening methods and allow affected families to obtain treatment and make life-extending changes in their diet and exercise patterns, the article states.

Previously: Big data used to help identify patients at risk of deadly high-cholesterol disorder, Could patients’ knowledge of their DNA lead to better outcomes? and Push-button personalized treatment guidance for patients not covered by clinical-trial results
Image by x6e38

Big data, Dermatology, Palliative Care, Patient Care, Precision health, Stanford News

Wounds too deep to heal: Study sheds light on which wounds may need special care

Kids heal fast; old folks, a lot more slowly. We all know that. But what happens when wounds take far longer to heal than is normal? Is it possible to predict which wounds need extra care?

Nigam Shah, MBBS, PhD, a Stanford associate professor of medicine, took another one of his deep dives into patient medical records to find out. The result is a creative proof-of-concept model that can predict which wounds need special care.

Earlier work has shown that even very simple models of wound healing can help caregivers pay attention to the wounds most likely to take 15 weeks or longer to heal, the definition of delayed healing.

For this work, Shah, an expert in biomedical informatics; first author Kenneth Jung, a research scientist at Stanford; and a national team of researchers turned to a dataset consisting of more than 150,000 wounds from more than 53,000 people. The team looked at hundreds of variables from patient records, including, for example, the length, breadth, area and depth of wounds, and the patients’ ages. The wounds in the study ranged from bed sores and diabetic ulcers to surgical and trauma wounds.

The researchers randomly assigned patients to one of two groups. The first group supplied the training data from which the computer learned which factors predicted slow wound healing, yielding a predictive model. The second group served as a test set, demonstrating that the model also worked on a new, previously unseen set of data.

They found that the best 100 predictors accounted for 95 percent of the influence on whether wounds were slow to heal. The single most important predictor of poor wound healing was whether a patient was receiving palliative care. Other good predictors of poor wound healing were the patient’s age, the size of the wound and how quickly it began healing in the first week.
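
For readers who want to see the shape of that workflow, here's a hypothetical sketch: simulated wound records, a train/test split, a generic gradient-boosting classifier and a ranking of which features mattered most. The data and feature names are invented; this is not the published model.

```python
# Minimal sketch of the workflow described above: split wound records into
# training and held-out test sets, fit a model to predict delayed healing
# (15 or more weeks), and rank the most influential predictors. Data and
# feature names are simulated; this is not the published model.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "wound_area_cm2":  rng.gamma(2.0, 3.0, n),
    "wound_depth_mm":  rng.gamma(1.5, 2.0, n),
    "patient_age":     rng.normal(65, 15, n),
    "palliative_care": rng.binomial(1, 0.05, n),
    "week1_shrinkage": rng.normal(0.2, 0.1, n),    # fractional size reduction in week 1
})
risk = (0.1 * df["wound_area_cm2"] + 0.03 * df["patient_age"]
        + 2.0 * df["palliative_care"] - 5.0 * df["week1_shrinkage"])
df["delayed_healing"] = (risk + rng.normal(0, 1, n) > risk.median()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="delayed_healing"), df["delayed_healing"],
    test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 2))
print(pd.Series(model.feature_importances_, index=X_train.columns).sort_values(ascending=False))
```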

The model’s strengths are that it works regardless of the kind of wound and it can be customized for different situations. However, as noted in the paper, the model was developed within the confines of a single company — a chain of specialty wound-care clinics called Healogics — so the model may not necessarily apply to wound healing at other institutions or for patients at home.

The paper, which made the front cover of the journal Wound Repair and Regeneration yesterday, is accessible for an academic paper — so if you’re interested in learning more about using patient records to create predictive health-care models, check it out.

Previously: Stanford researchers investigate source of scarring and To boldly go into a scar-free future: Stanford researchers tackle wound healing
Art — The Incredulity of Saint Thomas by Caravaggio — from Wikimedia

Aging, Big data, Genetics, Research, Science, Stanford News

Genetic links to healthy aging explored by Stanford researchers

Is the secret to a long life written in your genes? Or will your annual merry-go-round rides around the sun be cut short by disease or poor health? The question is intriguing but difficult to answer. That hasn’t stopped researchers from looking for genes or biological traits that may explain why some people live to be very old while others sicken and die at relatively young ages.

Today, developmental biologist Stuart Kim, PhD, published some very interesting research in PLoS Genetics about regions of the human genome that appear to be associated with extreme longevity (think upper 90s to over 100 years old).

One, a gene called APOE, is associated with the development of Alzheimer’s disease and has previously been implicated in longevity. However, the other four regions identified by the study are new. They are involved in biological processes such as cellular senescence (aging), autoimmune disease and signaling among cells.

As explained in the journal’s press release:

Previous work indicated that centenarians have health and diet habits similar to normal people, suggesting that factors in their genetic make-up could contribute to successful aging. However, prior genetic studies had identified only a single gene (APOE, known to be involved in Alzheimer’s disease) that was different in centenarians versus normal agers.

As we’ve explained here before, studying the very old is difficult, in part because there are so few of them. That makes it hard to come up with statistically significant results when comparing them to others. For this study, Kim and his colleagues devised a new technique to identify regions of the genome associated with longevity by linking longevity to the likelihood of developing other common diseases or to disease-related traits, including type 2 diabetes, bone density, blood pressure and coronary artery disease.
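
The trick can be sketched in a few lines: instead of testing millions of variants against centenarian status, first restrict attention to variants already implicated in age-related diseases and traits, which dramatically shrinks the multiple-testing burden. The Python below is a hypothetical illustration on simulated allele counts, not the published method.

```python
# Hypothetical sketch of the prioritization idea: rather than testing every
# variant genome-wide against centenarian status, first keep only variants
# already associated with age-related disease traits, which leaves far fewer
# tests to correct for. Everything below is simulated, not the published method.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(6)
n_snps = 10_000
disease_trait_pvalues = rng.uniform(size=n_snps)         # stand-in for prior disease GWAS results
prioritized = np.where(disease_trait_pvalues < 0.01)[0]  # the short list actually tested

def allele_counts(n_people, minor_allele_freq):
    """Minor- vs. major-allele counts for a group (two alleles per person)."""
    minor = rng.binomial(2 * n_people, minor_allele_freq)
    return [minor, 2 * n_people - minor]

for snp in prioritized[:3]:                              # test a few prioritized variants
    table = [allele_counts(800, 0.22),                   # centenarians (simulated)
             allele_counts(5000, 0.20)]                  # normal agers (simulated)
    chi2, p, _, _ = chi2_contingency(table)
    print(f"variant {snp}: chi2 = {chi2:.2f}, p = {p:.3f}")
```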

Continue Reading »

Big data, Neuroscience, Research, Videos

An 18-month portrait of a brain yields new insights into connectivity — and coffee

Coffee changes the brain’s activity. Wait, wait, don’t stop reading, I know you know that. But here’s the cool thing: For 18 months, Stanford psychologist Russell Poldrack, PhD, scanned his brain twice a week. On the days he skipped coffee, the MRI images were quite different, showing, for the first time, how caffeine changes brain connectivity.

A Stanford news release explains:

The connection between the somatosensory motor network and the systems responsible for higher vision grew significantly tighter without caffeine.

“That was totally unexpected, but it shows that being caffeinated radically changes the connectivity of your brain,” Poldrack said. “We don’t really know if it’s better or worse, but it’s interesting that these are relatively low-level areas. It may well be that I’m more fatigued on those days, and that drives the brain into this state that’s focused on integrating those basic processes more.”
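
Connectivity here simply means how strongly different brain regions' activity rises and falls together, usually measured as a correlation between their time series. Below is a toy illustration comparing a made-up "caffeinated" day with a "caffeine-free" one; real analyses work from preprocessed fMRI data and whole-brain connectivity matrices.

```python
# Toy illustration of a connectivity comparison: correlate two regions'
# activity over time on simulated "caffeinated" vs. "caffeine-free" days.
# Real analyses use preprocessed fMRI time series and region-by-region
# connectivity matrices; the coupling values below are invented.
import numpy as np

rng = np.random.default_rng(5)

def simulated_scan(coupling, n_timepoints=500):
    """Return two region time series whose correlation is set by `coupling`."""
    shared = rng.normal(size=n_timepoints)
    motor = coupling * shared + (1 - coupling) * rng.normal(size=n_timepoints)
    visual = coupling * shared + (1 - coupling) * rng.normal(size=n_timepoints)
    return motor, visual

for condition, coupling in [("caffeinated", 0.3), ("caffeine-free", 0.7)]:
    motor, visual = simulated_scan(coupling)
    r = np.corrcoef(motor, visual)[0, 1]
    print(f"{condition:>13}: motor-visual connectivity r = {r:.2f}")
```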

Poldrack’s experiment could generate hundreds, or even thousands, of similar insights once researchers parse the data, which are open to all. The RNA from his white blood cells was also sequenced once a week to correlate gene expression with brain function.

Poldrack’s brain remained fairly constant, and he admits he’s an even-keeled guy, generally content and rarely sad. But he hopes the approach could reveal differences between healthy brains, like his, and those of people with schizophrenia or bipolar disorder.

Previously: Hidden memories: A bit of coaching allows subjects to cloak memories from fMRI detector, Image of the Week: Art inspired by MRI brain scans and From phrenology to neuroimaging: New finding bolsters theory about how brain operates

Big data, Cancer, Chronic Disease, Precision health, Research, Stanford News

A dive into patient records uncovers possible connection between cancer treatment, Alzheimer’s

When we think of patient medical records, a lot of us think of billing and coding and maybe of health-care providers communicating with one another about how patients are doing. But increasingly medical records are becoming grist for the big-data mill.

According to Nigam Shah, MBBS, PhD, associate professor of biomedical informatics research at Stanford, it’s now possible to artfully extract important biomedical information from pre-existing patient medical records. Such data can be anonymous for the patient, and it’s virtually free for researchers, especially compared to the high cost of lengthy clinical trials that enroll thousands of people.

A just-published study by Shah and his colleagues used patient records to examine a suspected connection between a treatment for prostate cancer and the subsequent risk of developing Alzheimer’s disease. Among a group of about 17,000 prostate cancer patients, those treated with a medication that suppresses testosterone — so-called androgen blockers — had nearly twice the overall rate of later developing Alzheimer’s disease. In absolute numbers, more people are likely helped by the androgen blocking treatment than hurt, but the results are sobering.
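
That "nearly twice the rate" comparison is the kind of rate ratio that falls out of person-time data. A back-of-the-envelope sketch, with counts invented for illustration rather than taken from the study:

```python
# Back-of-the-envelope rate comparison of the kind described above. The
# counts and person-years here are invented for illustration; they are not
# the study's numbers.
treated_cases, treated_person_years = 60, 15_000        # androgen-blocker group (hypothetical)
untreated_cases, untreated_person_years = 35, 17_000    # comparison group (hypothetical)

treated_rate = treated_cases / treated_person_years
untreated_rate = untreated_cases / untreated_person_years
print(f"Alzheimer's rate ratio: {treated_rate / untreated_rate:.2f}")
```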

The dilemma this finding raises — to take the drug or not — could be solved with a precision-health approach that would clarify which patients should take androgen blockers and which ones should pass. The trick will be to sort the prostate cancer patients who can benefit most from androgen blockers from those whose risk of developing Alzheimer’s is most likely to be increased by the drug.

With any luck, patient medical records can help provide that answer, too.

Previously: Stanford-based Alzheimer’s Disease Research Center to be launched, New technology enabling men to make more confident decisions about prostate cancer treatment and How efforts to mine electronic health records influence clinical care
Photo of Nigam Shah by Steven Fisch

Big data, Cancer, Genetics, NIH, Precision health, Research, Stanford News

Cancer’s mutational sweet spot identified by Stanford researchers

I’m constantly fascinated by the fact that the cells that make up a cancerous tumor are each undergoing their own private evolution every time they divide. Unlike most normal cells, cancer cells are so wacky that even a small batch can morph into a highly variable mass within a few generations. As I wrote in a story last week:

In many ways, cancer cells represent biology’s wild west. These cells divide rampantly in the absence of normal biological checkpoints, and, as a result, they mutate or even lose genes at a much higher rate than normal. As errors accumulate in the genome, things go ever more haywire.

Recently, oncologist Hanlee Ji, MD, the senior associate director of the Stanford Genome Technology Center, and postdoctoral scholar Noemi Andor, PhD, devised a way to measure the extent of these differences among individual cancer cells and to associate their effect with the virulence of the disease as a whole. They published their results today in Nature Medicine.

As Ji, who is also a member of the Stanford Cancer Institute, explained in an email to me:

Until recently the scientific community believed that a typical tumor was composed of malignant cells with very similar genomes. The advent of next-generation sequencing technologies has revealed that this is not the case, and that most tumors are a heterogeneous product of ongoing evolution. This genetic heterogeneity also explains why therapeutic interventions in advanced cancers are often unsuccessful: some cells within a tumor develop resistance to therapies. Understanding the extent of tumor heterogeneity and how it leads to drug resistance is a major challenge in cancer biology research.
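
One generic way to put a number on that heterogeneity is to estimate the frequencies of a tumor's subclones and compute a diversity index over them. The sketch below uses Shannon entropy on invented clone fractions; it illustrates the concept, not the specific measure reported in the paper.

```python
# One generic way to quantify heterogeneity: treat a tumor as a mixture of
# subclones and compute a diversity index over their estimated frequencies.
# The clone fractions are invented, and this Shannon-entropy measure is an
# illustration of the concept, not the metric used in the paper.
import numpy as np

def shannon_diversity(clone_fractions):
    """Shannon entropy of subclone frequencies (higher = more heterogeneous)."""
    p = np.asarray(clone_fractions, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

print("nearly clonal tumor: ", round(shannon_diversity([0.95, 0.05]), 3))
print("heterogeneous tumor: ", round(shannon_diversity([0.35, 0.25, 0.2, 0.1, 0.1]), 3))
```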

Continue Reading »
