Published by
Stanford Medicine

Big data, BigDataMed15, Events, Precision health, Research, Stanford News, Technology

At Big Data in Biomedicine, Stanford’s Lloyd Minor focuses on precision health

In the next decade, Stanford Medicine will lead the biomedical revolution in precision health, Dean Lloyd Minor, MD, told attendees on the final day of the Big Data in Biomedicine conference.

Involving all aspects of Stanford Medicine — including research and patient care — the focus on precision health will draw on Stanford’s existing strengths while propelling the development of new discoveries and transforming health-care delivery, Minor explained.

The choice of “precision health” rather than “precision medicine” is deliberate, a distinction that reflects Stanford’s leadership role. While both precision health and precision medicine are targeted and personalized, precision health is proactive, with an emphasis on maintaining health. In contrast, precision medicine is reactive, with a focus on caring for the sick. Precision health includes prediction and prevention; precision medicine involves diagnosis and treatment.

Minor used the model of a tree to describe Stanford’s focus on precision health.

Basic research and biomedical data science form the trunk, the foundation that supports the entire endeavor. Nine “biomedical platforms” form the major branches; these platforms include immunology, cancer biology and the neurosciences, among others. The tree’s leaves are its clinical core, with treatment teams in cardiac care, cancer and maternal and newborn health, for example.

The growth at the very top of the tree is fueled by predictive, preventive and longitudinal care — where innovations in knowledge and care drive further changes in how health care is delivered.

Minor made two key points about the tree and its implications for research and care at Stanford.

First, the tree is big and growing. “There is room for everyone on the tree,” he said. “That is one thing that will make this plan — this tree — so powerful.”

Second, the tree is ever-changing. “Care will be analyzed and fed back. That’s really the true heart and meaning of the learning health-care system,” Minor said. “Every encounter is part of a much bigger whole.”

The entire effort will be fueled by big data, Minor said. To recognize its importance and to help train future leaders, Stanford Medicine also plans to create a new department of biomedical data science.

“We’re poised to lead,” Minor said. “We build upon a history of innovation, an entrepreneurial mindset, visionary faculty and students and a culture of collaboration.”

Previously: Big Data in Biomedicine conference kicks off today, Stanford Medicine’s Lloyd Minor on re-conceiving medical education and Meet the medical school’s new dean: Lloyd Minor
Photo by Saul Bromberger

Big data, BigDataMed15, Events, Medicine and Society, Microbiology, Research, Technology

At Big Data in Biomedicine, Nobel laureate Michael Levitt and others talk computing and crowdsourcing

Nobel laureate Michael Levitt, PhD, has been using big data since before data was big. A professor of structural biology at Stanford, Levitt has tapped the most computing power he could access over his decades-long career to simulate protein structure and movement.

Despite massive advances in technology, key challenges remain when using data to answer fundamental biological questions, Levitt told attendees on the second day of the Big Data in Biomedicine conference. It’s hard to translate gigabytes of data capturing a specific biological problem into a form that appeals to non-scientists. And even today’s supercomputers lack the ability to process information on the behavior of all atoms on Earth, Levitt pointed out.

Levitt’s address followed a panel discussion on computation and crowdsourcing, featuring computer-science specialists who are developing new ways to use computers to tackle biomedical challenges.

Kunle Olukotun, PhD, a Stanford professor of electrical engineering and computer science, had advice for biomedical scientists: Don’t waste your time on in-depth programming. Instead, harness the power of a domain-specific language tailored to let you pursue your research goals efficiently.
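
The point is the level of abstraction, not any particular tool. Here is a minimal Python sketch of the contrast — NumPy stands in for the kind of domain-oriented layer Olukotun described, not one of his group's languages, and the toy data is invented:

```python
# Illustrative contrast only: NumPy is a stand-in for a domain-oriented
# layer, not one of Olukotun's DSLs. The matrix sizes are invented.
import numpy as np

expression = np.random.rand(2000, 160)  # genes x samples (toy matrix)

# "In-depth programming": hand-rolled loops you must write, debug and tune.
means_by_hand = [sum(row) / len(row) for row in expression]

# Domain-oriented style: say *what* you want; vectorization, memory layout
# and parallelism become the library's problem instead of yours.
means_declarative = expression.mean(axis=1)
```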

Panelists Rhiju Das, PhD, assistant professor of biochemistry at Stanford, and Matthew Might, PhD, an associate professor of computer science at the University of Utah, have turned to the power of the crowd to solve problems. Das uses crowdsourcing to answer a universal problem (folding of RNA) and Might has used the crowd for a personal problem (his son’s rare genetic illness).

For Das, an online game called Eterna – and its players – has helped his team develop an algorithm that far more accurately predicts whether an RNA sequence will fold correctly, a key step in developing treatments for RNA-linked diseases such as HIV.

And for Might, crowdsourcing helped him discover other children who, like his son Bertrand, have an impaired NGLY1 gene. (His story is told in this New Yorker article.)

Panelist Eric Dishman, general manager of the Health and Life Sciences Group at Intel Corporation, offered conference attendees a reminder: Behind the technology lies a human. Heart rates, blood pressure and other biomarkers aren’t the only trends worth monitoring using technology, he said.

Behavioral traits also offer key insights into health, he explained. For example, his team has used location trackers to see which rooms elderly people spend time in. When there are too many trips to the bathroom, or the person spends most of the day in the bedroom, health-care workers can see that something is off, he said.
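
For the technically curious, the kind of rule Dishman described reduces to a few lines of code. A minimal sketch — the data format and thresholds below are invented for illustration, not Intel's system:

```python
def flag_concerns(visits, waking_hours=16.0):
    """visits: list of (room, duration_hours) records from a location tracker."""
    bathroom_trips = sum(1 for room, _ in visits if room == "bathroom")
    bedroom_hours = sum(hours for room, hours in visits if room == "bedroom")

    alerts = []
    if bathroom_trips > 10:                  # threshold is illustrative, not clinical
        alerts.append("unusually frequent bathroom visits")
    if bedroom_hours > 0.75 * waking_hours:  # most of the day in one room
        alerts.append("most of the day spent in the bedroom")
    return alerts

# A day with 12 short bathroom trips and 13 hours in the bedroom trips both rules:
print(flag_concerns([("bathroom", 0.1)] * 12 + [("bedroom", 13.0)]))
```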

Action from the rest of the conference, which concludes today, is available via live-streaming and this app. You can also follow conversation on Twitter by using the hashtag #bigdatamed.

Previously: On the move: Big Data in Biomedicine goes mobile with discussion on mHealth, Gamers: The new face of scientific research?, Half-century climb in computer’s competence colloquially captured by Nobelist Michael Levitt and Decoding proteins using your very own super computer
Photo of Michael Levitt by Saul Bromberger

Big data, BigDataMed15, Events, Patient Care, Research, Stanford News, Technology

Experts at Big Data in Biomedicine: Bigger, better datasets and technology will benefit patients

The explosion of big data is transforming the way those in health care diagnose, treat and prevent disease, panelists said on the opening day of the Big Data in Biomedicine conference.

During a five-member panel on population health, experts outlined work that is currently being done but said even bigger datasets and better technology are needed to ramp up the benefits from digital data and to save lives.

“Using the N of millions to inform care for the N of one – that is exactly where we’re going,” said Tracy Lieu, MD, MPH, director of research at Kaiser Permanente Northern California, a health-care network that includes 21 hospitals, 8,000 physicians and 3.6 million patients. “And we think that in a population like ours, in an integrated system like ours, we are in an ideal setting to do personalized medicine.”

Stanford Medicine professor Douglas Owens, MD, director of the Center for Health Policy and the Center for Primary Care and Outcomes Research, led the panel on Wednesday. He said big data is also changing how research is conducted.

“There’s been an explosion of data of all kinds: clinical data, genomics data, data about what we do and how we live,” said Owens. “And the question is how can we best use that data to improve the health of the individual and to improve the health of populations.”

Lieu said two key trends are central for medical researchers: informatics and genomics. She told attendees that Kaiser uses a “virtual data warehouse” holding the digital data of 14 million patients dating back to 1960. But Lieu cautioned that the data are not an end in themselves, particularly if the findings are never tested and implemented.

“Sometimes we fail. And we fail when we identify a problem of interest, we make a decision to study it, we assemble the data, we analyze and interpret the results – and then we send them off to journals. So we fail to close the loop,” she said, because researchers typically don’t go beyond the publication of data.

Lieu said Kaiser is now focused on trying to close that loop. “To do that, we need the kinds of tools that you in this group and the speakers at this conference are developing,” she explained. “We need better and better technology for rapidly analyzing and aggregating data.”

Big data, BigDataMed15, Events, Medicine and Society, Research, Technology

On the move: Big Data in Biomedicine goes mobile with discussion on mHealth

Ida Sim, MD, PhD, would like to prescribe data as easily as she orders a blood test or writes a prescription for antibiotics. Sim, a professor of medicine at the University of California-San Francisco, told attendees of a Big Data in Biomedicine panel on mHealth yesterday afternoon that she doesn’t want access to data collected willy-nilly, with little regard for the patient’s health condition or needs.

Instead, she wants to tailor data collection to the individual patient. For example, there’s no need to collect activity data for a competitive marathoner, but it would be useful for a sedentary computer programmer.

And she doesn’t care how patients collect their data; they can “bring their own device,” said Sim, who also co-directs biomedical informatics at the UCSF Clinical and Translational Sciences Institute.

The design of those devices is integral to the quality of the data they produce, pointed out panelist Ram Fish, vice president of digital health at Samsung. He said his team starts with “small data,” making sure devices such as its Simband watch accurately record biomarkers such as blood pressure or heart rate in a single individual, before expanding to the population level.

He said he’s most keen on developing tools that make a real difference in health, such as the detection of abnormal heart rhythms, a project still in the works.

And speaking of new tools, Stanford’s Euan Ashley, MD, PhD, associate professor of medicine and of genetics, shared some early results from the cardiovascular app MyHeart Counts, which Stanford introduced in March to great acclaim.

Ashley reported that the study has yielded information about the link between sleep patterns and happiness (those who go to bed late and get up late are less happy than others) and about geographic patterns of produce consumption (South Dakota users out-eat Californians when it comes to fruits and veggies). The project’s team is just starting to delve into some of its other findings, which include correlations between the 6-minute timed walk and overall health.
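
Analyses like these boil down to correlating self-reported and sensor-derived variables across thousands of users. A toy sketch of that kind of computation — the column names and numbers are invented, not the MyHeart Counts schema:

```python
import pandas as pd

# Hypothetical mini-dataset: bedtime hour (24 = midnight, 26 = 2 a.m.) vs. a
# self-reported happiness score; the real app spans tens of thousands of users.
df = pd.DataFrame({
    "bedtime_hour": [21, 22, 23, 24, 25, 26],
    "happiness":    [8,  8,  7,  6,  5,  5],
})

# A negative Pearson r would echo the reported pattern: later to bed, less happy.
print(df["bedtime_hour"].corr(df["happiness"]))
```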

“We’re in a really new era and one we don’t really understand,” Ashley said.

Big data, BigDataMed15, Events, Genetics, Research, Technology

Big Data in Biomedicine panelists: Genomics’ future is bright, thanks to data-science tools

Stanford’s annual Big Data in Biomedicine conference began this morning with a “breathtaking set of talks,” as described by Russ Altman, MD, PhD, a Stanford professor of bioengineering, of genetics and of medicine.

The first panel focused on genomics, with the speakers presenting a dizzying forecast of a future where biomedical data is standardized and easily accessible to researchers, yet carefully guarded to protect privacy.

“How do we build this in a way that allows you to spend time working on your science, and not spend your time worrying about reinventing the plumbing?” asked David Glazer, director of engineering at Google and a speaker on the panel.

His team is hard at work ensuring the infrastructure of the Google Cloud Platform can withstand the rigorous demands of a slew of big data projects, including the Million Veteran Program and MSSNG, an effort to understand the genetics of autism.

For panelist Heidi Rehm, PhD, associate professor of pathology at Harvard Medical School and director of the Partners Laboratory for Molecular Medicine, a key hurdle is standardizing definitions and ensuring that supporting evidence is available for system users. For example, data developers should be able to demonstrate why a particular gene variant has been deemed benign, and what definition of “benign” they are using, she said.

Her team has developed a star system, which rates sources of data by their credibility, giving results submitted by expert panels more stars than data submitted by a single researcher.
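
In code, such a rating reduces to a lookup from submitter type to stars. A sketch loosely modeled on ClinVar-style review status — the exact tiers below are assumptions for illustration, not Rehm's published scheme:

```python
# Illustrative tiers only; the real system's categories may differ.
STARS = {
    "practice_guideline": 4,
    "expert_panel": 3,
    "multiple_submitters_concordant": 2,
    "single_submitter_with_criteria": 1,
    "no_assertion_criteria": 0,
}

def credibility(source_type: str) -> int:
    """Return the star rating for the source of a variant interpretation."""
    return STARS.get(source_type, 0)

# An expert panel's call on a variant outranks a single researcher's:
assert credibility("expert_panel") > credibility("single_submitter_with_criteria")
```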

Rehm also addressed the pros and cons of various models for sharing data. Rather than collecting it all centrally, she said she expects data will be shared through a small number of interconnected hubs, similar to the hub-and-spoke model airlines use.

Individuals are not standing in the way of research advances, reported panelist Jill Hagenkord, MD, chief medical officer of the personal genetics company 23andMe. She said that of the company’s 950,000 customers, nearly 80 percent have agreed to share their data for research. Participants are also eager to provide additional information when asked, Hagenkord said. It becomes almost a philanthropic effort; they feel grateful that someone is interested in their conditions, she said.

Big data, BigDataMed15, Events, Public Health, Research

Big Data in Biomedicine conference kicks off today

The third annual Big Data in Biomedicine conference kicks off today on the Stanford campus. The three-day event brings together thought leaders from academia, information technology companies, venture capital firms and public health institutions to explore opportunities for extracting knowledge from the rapidly growing reservoirs of health and medical information to transform how we diagnose, treat and prevent disease.

This year’s program will cover the intersection of disciplines as wide-ranging as genomics, population health, neuroimaging and immunology; it will also touch on crowdsourcing, ethical and legal issues and “learning” health systems. Delivering the opening keynote will be Sharon Terry, president and CEO of Genetic Alliance. Other keynote speakers include Kathy Hudson, PhD, deputy director for science, outreach and policy at the National Institutes of Health; France Córdova, PhD, director of the National Science Foundation; Michael Levitt, PhD, professor of structural biology at Stanford and recipient of the 2013 Nobel Prize in Chemistry; and Lloyd Minor, MD, dean of Stanford’s School of Medicine.

Those unable to attend in person can tune in to the live webcast via the conference website. We’ll also be live tweeting the keynote talks and other proceedings from the conference; you can follow the coverage on the @StanfordMed feed or by using the hashtag #bigdatamed.

Previously: Countdown to Big Data in Biomedicine: Leveraging big data technology to advance genomics, Countdown to Big Data in Biomedicine: Mining medical records to identify patterns in public health and Harnessing mobile health technologies to transform human health
Photo from the 2014 Big Data in Biomedicine conference by Saul Bromberger

Big data, Genetics, Research, Technology, Videos

“An extremely interesting time to be a geneticist”: Using big data to identify rare diseases

"An extremely interesting time to be a geneticist": Using big data to identify rare diseases

With cheaper, faster genetic sequencing, researchers are able to pinpoint rare gene variants that may be contributing to disease.

But to find “the actual, causal rare variant contributing to the trait is like looking for a needle in a haystack,” says Stephen Montgomery, PhD, in the video above.

Montgomery and his team plan to boost the efficacy of using genome sequencing to identify rare diseases by incorporating information from the genes that are actually turned on — using RNA in addition to its parent DNA to make that needle really stand out.
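
One way to make the needle stand out is to keep only the rare DNA variants whose carrier is also an expression outlier for the same gene in RNA data. A hedged sketch of that general idea — the field names and thresholds are invented, not Montgomery's pipeline:

```python
import statistics

def is_expression_outlier(carrier_level, cohort_levels, z_cutoff=3.0):
    """True if the carrier's expression sits far outside the cohort's range."""
    mu = statistics.mean(cohort_levels)
    sd = statistics.stdev(cohort_levels)
    return sd > 0 and abs(carrier_level - mu) / sd > z_cutoff

def prioritize(rare_variants, rna):
    """Keep rare variants whose gene is aberrantly expressed in the carrier.

    rare_variants: list of dicts like {"gene": "GENE_X", ...} (hypothetical)
    rna: maps gene -> {"carrier": level, "cohort": [levels, ...]}
    """
    return [v for v in rare_variants
            if is_expression_outlier(rna[v["gene"]]["carrier"],
                                     rna[v["gene"]]["cohort"])]
```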

Eventually, Montgomery hopes to mix in even more information including details about individual lifestyles, environmental exposures and family histories to glean further insights into the origins of rare disease. His team received a 2014 Big Data for Human Health Seed Grant to support the work.

“We’re going to be able to answer very quickly questions about how the genome is influencing our lives and then we’re also going to be able to treat (these conditions),” Montgomery says. “This is an extremely interesting time to be a geneticist and these large data sets are just empowering a large number of discoveries.”

This effort is part of Stanford Medicine’s Biomedical Data Science Initiative (BDSI), which strives to make powerful transformations in human health and scientific discovery by fostering innovative collaborations among medical researchers, computer scientists, statisticians and physicians. Work being done in this area is the focus of Stanford’s Big Data in Biomedicine conference, which kicks off tomorrow morning.

Previously: Collecting buried biomedical treasure – using big data, All data – big and small – informs large-scale neuroscience project, Registration for Big Data in Biomedicine conference now open, Parent details practical ways to get care and support for your child’s rare disease, New search engine designed to help physicians and the public in diagnosing rare diseases and Big data used to help identify patients at risk of deadly high-cholesterol disorder

Big data, Events, Stanford News

Countdown to Big Data in Biomedicine: Technical showcase to spotlight companies’ innovations

Later this week, thought-leaders from academia, information technology corporations, venture capital firms, the U.S. government and foundations will convene for the Big Data in Biomedicine conference to explore opportunities for mining the rich repositories of biomedical information.

In addition to sessions on topics ranging from crowdsourcing to genomics, the conference will include a technical showcase where conference-goers can peruse displays and demos highlighting companies’ innovations related to big data. Part technology expo and part networking opportunity, the technical showcase will include light refreshments and be held under a tent on the lawn of the medical school’s Li Ka Shing Center for Learning and Knowledge.

Participants for this year’s event include advanced patient monitoring firm Flashback Technologies, which will present an innovative index of body-fluid levels in trauma situations and a device to measure it on the spot; Zephyr Health, a company that pairs real-world data with predictive analytics to provide insights that are strategic and actionable; Samsung, which will show how the company’s personal devices are moving into human health solutions; and Personalis, a startup providing researchers and clinicians with accurate DNA sequencing and interpretation of human exomes and genomes.

The conference is part of Stanford Medicine’s Biomedical Data Science Initiative, which strives to make powerful transformations in human health and scientific discovery by fostering innovative collaborations among medical researchers, computer scientists, statisticians and physicians. The event runs from Wednesday through Friday.

Previously: Countdown to Big Data in Biomedicine: Leveraging big data technology to advance genomics, Countdown to Big Data in Biomedicine: Mining medical records to identify patterns in public health and Harnessing mobile health technologies to transform human health
Photo from last year’s technical showcase by Saul Bromberger

Big data, Emergency Medicine, Genetics, Infectious Disease, Research, Stanford News

Study means an early, accurate, life-saving sepsis diagnosis could be coming soon

A blood test for quickly and accurately detecting sepsis, a deadly immune-system panic attack set off when our body wildly overreacts to the presence of infectious pathogens, may soon be at hand.

Sepsis is the leading cause of hospital deaths in the United States and is tied to the early deaths of at least 750,000 Americans each year. Usually caused by bacterial rather than viral infections, this intense, dangerous and rapidly progressing whole-body inflammatory syndrome is best treated with antibiotics.

The trouble is, sepsis is exceedingly difficult to distinguish from its non-infectious doppelganger: an outwardly similar but pathogen-free systemic syndrome called sterile inflammation, which can arise in response to traumatic injuries, surgery, blood clots or other noninfectious causes.

In a recent news release, I wrote:

[H]ospital clinicians are pressured to treat anybody showing signs of systemic inflammation with antibiotics. That can encourage bacterial drug resistance and, by killing off harmless bacteria in the gut, lead to colonization by pathogenic bacteria, such as Clostridium difficile.

Not ideal. When a patient has sterile inflammation, antibiotics not only don’t help but are counterproductive. However, the occasion for my news release was the identification, by Stanford biomedical informatics wizard Purvesh Khatri, PhD, and his colleagues, of a tiny set of genes that act differently under the onslaught of sepsis from the way they behave when a patient is undergoing sterile inflammation instead.

In a study published in Science Translational Medicine, Khatri’s team pulled a needle out of a haystack – the activity levels of more than 80 percent of a person’s genes change markedly, and fluctuate chaotically over time, in response to both sepsis and sterile inflammation. To cut through the chaos, the investigators applied some clever analytical logic to a “big data” search of gene-activity results from more than 2,900 blood samples from nearly 1,600 patients in 27 different data sets covering diverse patient groups: men and women, young and old, some suffering from sterile inflammation, others experiencing sepsis and (as a control) healthy people.

The needle that emerged from that 20,000-gene-strong haystack of haywire fluctuations in gene activity consisted of an 11-gene “signature” that, Khatri thinks, could serve up a speedy, sensitive, and specific diagnosis of sepsis in the form of a simple blood test.
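
The paper's statistics aside, one common way to turn such a signature into a single per-sample score is the difference between the geometric means of its up- and down-regulated genes. A sketch of that general approach — the gene names and values are invented, and this is not Khatri's published code:

```python
from statistics import geometric_mean

# Hypothetical stand-ins for the signature's up- and down-regulated genes.
UP_GENES = ["GENE_A", "GENE_B"]
DOWN_GENES = ["GENE_C"]

def signature_score(expression):
    """expression: dict mapping gene name -> positive expression level."""
    up = geometric_mean(expression[g] for g in UP_GENES)
    down = geometric_mean(expression[g] for g in DOWN_GENES)
    return up - down  # higher scores would point toward sepsis

print(signature_score({"GENE_A": 9.0, "GENE_B": 7.5, "GENE_C": 2.0}))
```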

The 11-gene blood test still has to be validated by independent researchers, licensed to manufacturers, and approved by the FDA. Let’s hope for smooth sailing. Every hour saved in figuring out a possible sepsis sufferer’s actual condition represents, potentially, thousands of lives saved annually in the United States alone, not to mention billions of dollars in savings to the U.S. health-care system.

Previously: Extracting signal from noise to combat organ rejection and Can battling sepsis in a game improve the odds for material world wins?
Photo by Lightspring/Shutterstock

Big data, In the News, Technology

Vinod Khosla shares thoughts on disrupting health care with data science

Prominent Silicon Valley venture capitalist Vinod Khosla is a strong believer that data science will reinvent health care as we know it – and it’s a position he has reiterated on a number of occasions, including at the 2014 Big Data in Biomedicine conference at Stanford. In a recently published Washington Post Q&A, Khosla expands on his comment that over the next ten years “data science and software will do more for medicine than all of the biological sciences together.”

On the topic of books and papers that have influenced his views, Khosla said:

A lot of what I’ve been thinking about started with articles by Dr. John Ioannidis at Stanford School of Medicine. What he found through decades of meta-research is that half of what’s in medical studies is just plain wrong… His research is focused on why they are wrong and why all sorts of biases are introduced in medical studies and medical practice.

He also explains one of the reasons he believes innovation in data science and software is outpacing the biological sciences:

The pace of innovation in software, across all industries, has consistently been much faster than anything else. Within traditional health-care innovation (which intersects with “biological sciences”) such as the pharma industry, there are a lot of good reasons those cycles of innovation are slow.

It takes 10 to 15 years to develop a drug and actually be in the marketplace, with an incredibly high failure rate. Safety is one big issue, so I don’t blame the process. I think it’s warranted and the [Food and Drug Administration] is appropriately cautious. But because digital health often has fewer safety effects, and iterations can happen in 2- to 3-year cycles, the rate of innovation goes up substantially.

Previously: Countdown to Big Data in Biomedicine: Leveraging big data technology to advance genomics, Countdown to Big Data in Biomedicine: Mining medical records to identify patterns in public health, Collecting buried biomedical treasure – using big data, Big data used to help identify patients at risk of deadly high-cholesterol disorder and Examining the potential of big data to transform health care
Photo of Khosla at the 2014 Big Data in Biomedicine conference by Saul Bromberger
