Published by
Stanford Medicine

Big data, Cardiovascular Medicine, Chronic Disease, Research, Science, Stanford News, Videos

Big data approach identifies new stent drug that could help prevent heart attacks

Ziad Ali, MD, PhD, was a cardiovascular fellow at Stanford with a unique skill when the six-year study published online today in The Journal of Clinical Investigation first began.

As a PhD student, the multi-talented physician-scientist – who is now associate director of translational medicine at Columbia University Medical Center – had figured out a way to implant tiny stents into mice with clogged arteries.

The skill would become key as he and colleagues set out to find a better drug for the drug-eluting stents that are used in combination with angioplasty to treat coronary artery disease. To prevent stent disease, the often serious medical problem caused by stents themselves, chemotherapy drugs were added to bare metal stents. But these drug-eluting stents have their own problems: The drugs work like “hitting a pin with a sledgehammer,” as Ali describes it, often damaging the lining of the arteries, which can lead to heart attacks. As a result, patients are required to take blood thinners for up to a year after the procedure to prevent clots.

“A lot of our patient population is on the elderly side with bad hips or diabetes,” Ali told me. “Once you get a drug-coated stent, you can’t have surgery for a year. And if you stop the blood thinners for any reason, you’re at risk of a stent clotting off. And that actually causes a heart attack. Stent thrombosis has a high mortality rate.”

By using a “big data” computational approach, learning about the genetic pathways involved in coronary artery disease, then testing the new theories in mouse models in the lab, researchers were able to pinpoint a potential new treatment for patients: crizotinib, a pharmaceutical approved by the FDA for treating certain lung cancers.

“This could have major clinical impact,” said Euan Ashley, MD, PhD, the study’s senior author, who discusses the work alongside Ali in the video above.

Previously: Euan Ashley discusses harnessing big data to drive innovation for a healthier world, New computing center at Stanford supports big data, Trial results promising for new anti-clotting drug and A call to use the “tsunami of biomedical data” to preserve life and enhance health
Photo in featured entry box by Mark Tuschman

Big data, Research

Using supercomputers to spot drug reactions

Remember the drugs Avandia and Vioxx? Avandia, an anti-diabetic drug released in 1999, worked wonderfully against diabetes. But it was also shown to increase users’ risk of heart attacks – a devastating side effect that slashed its sales. And Vioxx, an anti-inflammatory drug, was also linked to an increased risk of heart attacks and stroke, leading manufacturer Merck & Co. to withdraw it from the market.

These are just the drugs that grabbed the headlines. Adverse drug reactions kill more than 100,000 patients a year, according to a study in the Journal of the American Medical Association.

To slash that number, researchers at Lawrence Livermore National Laboratory put their supercomputers to work. They developed a program that determines whether a drug will form a bond with any of hundreds of proteins found in the human body. The research, published recently in the journal PLOS ONE, found that modeling based on a protein’s 3-D structure can pinpoint reactions more quickly than current methods.

From the LLNL release:

“We have discovered a very viable way to find off-target proteins that are important for side effects,” said Monte LaBute, PhD, an LLNL researcher and the paper’s lead author. “This approach using high-performance computers and molecular docking to find adverse drug reactions never really existed before.”

The team’s findings provide drug companies with a cost-effective and reliable method to screen for side effects, according to LaBute. Their goal is to expand their computational pharmaceutical research to include more off-target proteins for testing and eventually screen every protein in the body.

“If we can do that, the drugs of tomorrow will have less side effects that can potentially lead to fatalities,” LaBute said. “Optimistically, we could be a decade away from our ultimate goal. However, we need help from pharmaceutical companies, health care providers and the FDA to provide us with patient and therapeutic data.”

Previously: Mining data from patients’ charts to identify harmful drug reactions, Medical journal wins award for reporting on problems with Medtronic bone product and New research scrutinizes off-label drug use
Photo by Lawrence Livermore National Laboratory

Big data, Biomed Bites, Genetics, Research

Making sense out of genetic gobbledygook with a Stanford biostatistician

Here’s this week’s Biomed Bites, a weekly feature that highlights some of Stanford’s most innovative research and introduces readers to groundbreaking researchers in a variety of disciplines.

Imagine sequencing the genome of just one person. Translated into the letters that represent nucleotide subunits — A, G, T and C — it would take three billion letters to represent a single genome. AGTCCCCGTAGTTTCGAACTGAGGATCCCC….. Senseless, useless and messy. Now look at several hundred genomes — or try to find something specific within the “noise.”

That’s where genomic statisticians like Chiara Sabatti, PhD, come in handy. Sabatti smooshes this genetic gobbledygook into elegant formulas, emerging with important insights into the genome and particular diseases such as Alzheimer’s disease.
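As a toy sketch (mine, not from Sabatti’s work), here is the needle-in-a-haystack problem in miniature: plant a short, hypothetical marker sequence in random noise, then recover it by brute force. Real genomic statistics works at three billion bases across hundreds of genomes, where brute force gives way to cleverer methods.

```python
import random

random.seed(0)
BASES = "AGTC"
MOTIF = "AGTCCCCGTA"  # hypothetical marker; any 10-mer is ~1-in-a-million per position

# Build a stretch of random "genome" and plant the motif once.
genome = "".join(random.choice(BASES) for _ in range(100_000))
spot = random.randrange(len(genome) - len(MOTIF))
genome = genome[:spot] + MOTIF + genome[spot + len(MOTIF):]

# A brute-force scan: slide a window over every position.
hits = [i for i in range(len(genome) - len(MOTIF) + 1)
        if genome[i:i + len(MOTIF)] == MOTIF]
```

The scan reliably recovers the planted position; the statistician’s job begins where this sketch ends, when the “marker” itself is unknown and must be inferred from faint patterns across many genomes.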

Growing up in Italy, Sabatti thought she might want to be a doctor. But she couldn’t part with her true love: numbers. As a graduate student at Stanford, she was delighted to discover statistical genetics. And after a stint at the University of California, Los Angeles, she’s back. For good, we hope.

Learn more about Stanford Medicine’s Biomedical Innovation Initiative and about other faculty leaders who are driving forward biomedical innovation here.

Previously: Stanford statistician Chiara Sabatti on teaching students to “ride the big data wave”

Big data, Bioengineering, NIH, Research, Science Policy, Stanford News

$23 million in NIH grants to Stanford for two new big-data-crunching biomedical centers

More than $23 million in grants from the National Institutes of Health – courtesy of the NIH’s Big Data to Knowledge (BD2K) initiative – have launched two Stanford-housed centers of excellence bent on enhancing scientists’ capacity to compare, contrast and combine study results in order to draw more accurate conclusions, develop superior medical therapies and understand human behaviors.

Huge volumes of biomedical data – some of it from carefully controlled laboratory studies, increasing amounts of it in the form of electronic health records, and a building torrent of data from wearable sensors – languish in isolated locations and, even when researchers can get their hands on them, are about as comparable as oranges and orangutans. These gigantic banks of data, all too often, go unused or at least underused.

But maybe not for long. “The proliferation of devices monitoring human activity, including mobile phones and an ever-growing array of wearable sensors, is generating unprecedented quantities of data describing human movement, behaviors and health,” says movement-disorders expert Scott Delp, PhD, director of the new National Center for Mobility Data Integration to Insight, also known as the Mobilize Center. “With the insights gained from subjecting these massive amounts of data to state-of-the-art analytical techniques, we hope to enhance mobility across a broad segment of the population,” Delp told me.

Directing the second grant recipient, the Center for Expanded Data Annotation and Retrieval (or CEDAR), is Stanford’s Mark Musen, MD, PhD, a world-class biomedical-computation authority. As I wrote in an online story:

[CEDAR] will address the need to standardize descriptions of diverse biomedical laboratory studies and create metadata templates for detailing the content and context of those studies. Metadata consists of descriptions of how, when and by whom a particular set of data was collected; what the study was about; how the data are formatted; and what previous or subsequent studies along similar lines have been undertaken.

The ultimate goal is to concoct a way to translate the banter of oranges and orangutans, artichokes and aardvarks now residing in a global zoo (or is it a garden?) of diverse databases into one big happy family speaking the same universal language, for the benefit of all.
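To make the metadata idea concrete, here is a minimal, hypothetical record of the kind a standardized template might capture; the field names are illustrative, not CEDAR’s actual schema.

```python
# Hypothetical study-metadata record -- field names are illustrative,
# not CEDAR's actual schema.
study_metadata = {
    "title": "Insulin resistance in adipose tissue",
    "collected_by": "Example Lab, Stanford",
    "collection_date": "2014-06-01",
    "method": "RNA-seq",
    "data_format": "FASTQ",
    "related_studies": ["doi:10.0000/example"],
}

# With a shared template, a repository can check records from
# different labs for completeness before accepting them.
REQUIRED_FIELDS = {"title", "collected_by", "collection_date", "data_format"}

def is_complete(record):
    return REQUIRED_FIELDS.issubset(record)
```

The point of a shared template is exactly this kind of checkability: once every lab describes the how, when and by whom the same way, studies stop being oranges and orangutans.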

Previously: NIH associate director for data science on the importance of “data to the biomedicine enterprise”, Miniature wireless device aids pain studies and Stanford bioengineers aim to better understand, treat movement disorders

Big data, Chronic Disease, Immunology, Research, Stanford News

Out of hiding: Found lurking in public databases, type-2 diabetes drug passes early test

Way too often, promising-looking basic-research findings – intriguing drug candidates, for example – go swooshing down the memory hole, and you never hear anything about them again. So it’s nice when you see researchers following up on an upbeat early finding with work that moves a potential drug to the next peg in the development process. All the more so when the drug candidate targets a massively prevalent disorder.

Type 2 diabetes affects more than 370 million people worldwide, a mighty big number and a mighty big market for drug companies. (Unlike the much less common type-1 diabetes, in which the body’s production of the hormone insulin falters and sugar builds up in the blood instead of being taken up by cells throughout the body, in type-2 diabetes insulin production may be fine but tissues become resistant to insulin.) But while numerous medications are available, none of them decisively halts progression, much less reverses the disease’s course.

About two-and-a-half years ago, Stanford data-mining maven Atul Butte, MD, PhD, combed huge publicly available databases, pooled results from numerous studies and, using big-data statistical methods, fished out a gene that had every appearance of being an important player in type-2 diabetes but had been totally overlooked. (For more info, see this news release.) Called CD44, this gene is especially active in the fat tissue of insulin-resistant people and, Butte’s study showed, had a strong statistical connection to type-2 diabetes.

Butte’s study suggested that CD44’s link to type-2 diabetes was not just statistical but causal: In other words, manipulating the protein CD44 codes for might influence the course of the disease. By chance, that protein has already been much studied by immunologists for totally unrelated reasons. The serendipitous result is that a monoclonal antibody that binds to the protein and inhibits its action was already available.

So, Butte and his colleagues used that antibody in tests on lab mice bioengineered to be extremely susceptible to type-2 diabetes, or what passes for it in a mouse. And, it turns out, the CD44-impairing antibody performed comparably to or better than two workhorse diabetes medications (metformin and pioglitazone) in countering several features of type-2 diabetes, including fatty liver, high blood sugar, weight gain and insulin resistance. The results appear in a study published today in the journal Diabetes.

Most exciting of all: In targeting CD44, the monoclonal antibody was working quite differently from any of the established drugs used for type-2 diabetes.

These are still early results, which will have to be replicated and – one hopes – improved on, first in other animal studies and finally in a long stretch of clinical trials before any drug aimed at CD44 can join the pantheon of type-2 diabetes medications. In any case, for a number of reasons the monoclonal antibody Butte’s team pitted against CD44 is far from perfect for clinical purposes. But refining initial “prototypes” is standard operating procedure for drug developers. So here’s hoping a star is born.

Previously: Newly identified type-2 diabetes gene’s odds of being a false finding equal one in 1 followed by 19 zeroes, Nature/nurture study of type-2 diabetes risk unearths carrots as potential risk reducers and Mining medical discoveries from a mountain of ones and zeroes
Photo by Dan-Scape.co.uk

Big data, Research, Science, Stanford News, Technology

Gamers: The new face of scientific research?

Much has been written about the lack of reproducibility of results claimed by even well-meaning, upright scientists. Notably, a 2005 PLoS Medicine paper (by Stanford health-research policy expert John Ioannidis, MD, DSc) with the unforgettable title, “Why Most Published Research Findings Are False”, has been viewed more than a million times.

Who knew that relief could come in the form of hordes of science-naive gamers?

The notion of crowdsourcing difficult scientific problems is no longer breaking news. A few years ago I wrote a story about Stanford biochemist Rhiju Das, PhD, who was using an interactive online videogame called EteRNA he’d co-invented to come up with potential structures for RNA molecules.

RNA is a wiggly wonder. Chemically similar to DNA but infinitely more flexible and mobile, RNA can and does perform all kinds of critical tasks within every living cell. Scientists are steadily discovering more about RNA’s once-undreamed-of versatility. RNA may even have been around before DNA was, making it the precursor that gave rise to all life on our planet.

But EteRNA gamers need know nothing about RNA, or even about biology. They just need to be puzzle-solvers willing to learn and follow the rules of the game. Competing players’ suggested structures for a given variety of RNA molecule are actually tested in Das’s laboratory to see whether they, indeed, stably fold into the predicted structures.
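EteRNA’s actual rules and scoring aren’t described in the post; as a toy illustration (hypothetical, not EteRNA’s real mechanics), here is the kind of base-pairing constraint players learn to satisfy, sketched in Python using dot-bracket notation, where matched parentheses mark paired bases.

```python
# Toy check: does a proposed dot-bracket structure pair only
# chemically plausible partners (Watson-Crick plus G-U wobble)?
VALID_PAIRS = {("A", "U"), ("U", "A"), ("G", "C"),
               ("C", "G"), ("G", "U"), ("U", "G")}

def pairs_are_valid(sequence, structure):
    stack = []
    for base, mark in zip(sequence, structure):
        if mark == "(":            # opening half of a pair
            stack.append(base)
        elif mark == ")":          # must close the most recent open base
            if not stack or (stack.pop(), base) not in VALID_PAIRS:
                return False
    return not stack               # every opened pair must be closed

print(pairs_are_valid("GGGAAACCC", "(((...)))"))  # G-C stem: True
```

Players internalize rules like this by trial and error; the real test, as the post notes, is whether the molecules actually fold that way in Das’s lab.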

More than 150,000 gamers have registered on EteRNA; at any given moment, there are about 40 active players plugging away at a solution. Several broadly similar games devoted to pursuing biological insights through crowdsourcing are also up and running.

Das and EteRNA’s co-inventor, Adrien Treuille, PhD, (now at Carnegie Mellon University) think the gaming approach to biology offers some distinct – and to many scientists, perhaps unexpected – advantages over the more-traditional scientific method by which scientists solve problems: form a hypothesis, rigorously test it in your lab under controlled conditions, and keep it all to yourself until you at last submit your methods, data and conclusions to a journal for peer review and, if all goes well, publication.

In this “think piece” article in Trends in Biochemical Sciences, Treuille and Das write:

Despite an elaborate peer review system, issues such as data manipulation, lack of reproducibility, lack of predictive tests, and cherry-picking among numerous unreported data occur frequently and, in some fields, may be pervasive.

There is an inherent hint of bias, the authors note, in the notion of fitting one’s data to a hypothesis: It’s always tempting to report or emphasize only data that fits your hypothesis or, conversely, look at the data you’ve produced and then tailor the “hypothesis” accordingly (thereby presenting a “proof” that may never be independently and rigorously tested experimentally).

Das and Treuille argue that the “open laboratory” nature of online games prevents data manipulation, allows rapid tests of reproducibility, and “requires rigorous adherence to the scientific method: a nontrivial prediction or hypothesis must precede each experiment.”

Das says, “It only recently hit us that EteRNA, despite being a game, is an unusually rigorous way to do science.”

Previously: John Ioannidis discusses the popularity of his paper examining the reliability of scientific research, How a community of online gamers is changing basic biomedical research, Paramecia PacMan: Researchers create video games using living organisms and Mob science: Video game, EteRNA, lets amateurs advance RNA research
Photo by Radly J Phoenix

Big data, In the News, Patient Care, Pediatrics, Stanford News

Examining the potential of big data to transform health care

Back in 2011, rheumatologist Jennifer Frankovich, MD, and colleagues at Lucile Packard Children’s Hospital Stanford used aggregate patient data from electronic medical records in making a difficult and quick decision in the care of a 13-year-old girl with a rare disease.

Today on San Francisco’s KQED, Frankovich discusses the unusual case and the potential of big data to transform the practice of medicine. Stanford systems-medicine chief Atul Butte, MD, PhD, also weighed in on the topic in the segment by saying, “The idea here is [that] the scientific method itself is growing obsolete.” More from the piece:

Big data is more than medical records and environmental data, Butte says. It could (or already does) include the results of every clinical trial that’s ever been done, every lab test, Google search, tweet. The data from your Fitbit.

Eventually, the challenge won’t be finding the data, it’ll be figuring out how to organize it all. “I think the computational side of this is, let’s try to connect everything to everything,” Butte says.

Frankovich agrees with Butte, noting that developing systems to accurately interpret genetic, medical or other health metrics is key if such practices are going to become the standard model of care.

Previously: How efforts to mine electronic health records influence clinical care, NIH Director: “Big Data should inspire us”, Chief technology officer of the United States to speak at Big Data in Biomedicine conference and A new view of patient data: Using electronic medical records to guide treatment

Big data, Chronic Disease, Clinical Trials, Health and Fitness, Public Health

Stanford to launch Wellness Living Laboratory

If you’re the kind of person who wears a heart monitor while jogging, tracks your sleep with an app or meditates to lengthen your lifespan, then a new Stanford project, called WELL, just might be for you.

WELL, which stands for the Wellness Living Laboratory, hasn’t quite started yet — it will launch in 2015 — but when it does, it will unleash a variety of cutting-edge tools in an effort to define health.

Health seems like a no-brainer, but it is more than the absence of disease, says John Ioannidis, MD, DSc, the head of the Stanford Prevention Research Center. Ioannidis wants to find out how people can be “more healthy than healthy.”

To do that, he secured $10 million and laid out plans for the project. WELL plans to enroll thousands of volunteers — whom Ioannidis calls “citizen scientists” — in two initial locations: Santa Clara County, Calif., and China, with plans to expand to other sites in the future.

Participants may be able to select which health factors to track and to report much of their information remotely and digitally, although some in-person visits may be required. Participants will also have the opportunity to enroll in a variety of clinical trials to test various interventions, such as nutrition counseling or smoking cessation programs.

The program will focus on wellness, rather than diseases, with the hypothesis that promoting wellness thwarts diseases, Ioannidis said.

Volunteers who would rather not provide health information will also have the opportunity to benefit from access to a program-wide social networking effort that will spread news of successful practices, he said. “This outer sphere could reach out to tens of millions of people,” Ioannidis told me. Stay tuned to learn how to sign up.

The $10 million came as an unrestricted gift to Stanford University from Amway’s Nutrilite Health Institute Wellness Fund.

Previously: Medicine X explores the relationship between mental and physical health, Stanford partnering with Google [x] and Duke to better understand the human body, New Stanford center aims to promote research excellence and Teens these days smoking less but engaging in other risky behaviors
Photo by Mike Baird

Big data, Evolution, Genetics, In the News, Research, Science, Stanford News

Flies, worms and humans – and the modENCODE Project

It’s a big day in comparative biology. Researchers around the country, including Stanford geneticist Michael Snyder, PhD, are publishing the results of a massive collaboration meant to suss out the genomic similarities (and differences) among model organisms like the fruit fly and the laboratory roundworm. A package of four papers, which describe how these organisms control how, when and where they express certain genes to generate the cell types necessary for complex life, appears today in Nature.

From our release:

The research is an extension of the ENCODE, or Encyclopedia of DNA Elements, project that was initiated in 2003. As part of the large collaborative project, which was sponsored by the National Human Genome Research Institute, researchers published more than 4 million regulatory elements found within the human genome in 2012. Known as binding sites, these regions of DNA serve as landing pads for proteins and other molecules known as regulatory factors that control when and how genes are used to make proteins.

The new effort, known as modENCODE, brings a similar analysis to key model organisms like the fly and the worm. Snyder is the senior author of two of the papers published today describing some aspects of the modENCODE project, which has led to the publication, or upcoming publication, of more than 20 papers in a variety of journals. The Nature papers, and the modENCODE project, are summarized in a News and Views article in the journal (subscription required to access all papers).

As Snyder said in our release, “We’re trying to understand the basic principles that govern how genes are turned on and off. The worm and the fly have been the premier model organisms in biology for decades, and have provided the foundation for much of what we’ve learned about human biology. If we can learn how the rules of gene expression evolved over time, we can apply that knowledge to better understand human biology and disease.”

The researchers found that, although the broad strokes of gene regulation are shared among species, there are also significant differences. These differences may help explain, for example, why humans walk, flies fly and worms slither.

The wealth of data from the modENCODE project will fuel research projects for decades to come, according to Snyder.

“We now have one of the most complete pictures ever generated of the regulatory regions and factors in several genomes,” said Snyder. “This knowledge will be invaluable to researchers in the field.”

Previously: Scientists announce the completion of the ENCODE project, a massive genome encyclopedia

Big data, Media, Stanford News

Stanford’s Big Data in Biomedicine chronicled in tweets, photos and videos

At this year’s Big Data in Biomedicine conference, a crowd of close to 500 people gathered at Stanford to discuss how advances in computational processing power and interconnectedness are changing medical research and the practice of medicine. Another 1,000 virtual attendees joined in the discussion via the live webcast, and several hundred participated in the conversation on social media.

We’ve captured a selection of the tweets, photos, videos and blog posts about the conference on the School of Medicine’s Storify page. On the page, you’ll find an interview with Philip Bourne, PhD, associate director for data science at the National Institutes of Health, talking about the importance of “data to the biomedicine enterprise,” news stories on how big data holds the potential to improve everything from drug development to personalized medicine, and official conference photos and twitpics from attendees. You’ll also find a conference group photo and a recap of the event written by my colleague Bruce Goldman.

For those of you who missed the event, and for those who want to participate again, our next Big Data in Biomedicine conference has been scheduled for May 20-22, 2015.

Previously: Videos of Big Data in Biomedicine keynotes and panel discussions now available online, Rising to the challenge of harnessing big data to benefit patients and Discussing access and transparency of big data in government
Photo by Saul Bromberger
