Big data, BigDataMed15, Events, Precision health, Research, Stanford News, Technology

At Big Data in Biomedicine, Stanford’s Lloyd Minor focuses on precision health

In the next decade, Stanford Medicine will lead the biomedical revolution in precision health, Dean Lloyd Minor, MD, told attendees of the final day of the Big Data in Biomedicine conference.

Involving all aspects of Stanford Medicine — including research and patient care — the focus on precision health will draw on Stanford’s existing strengths while propelling the development of new discoveries and transforming health-care delivery, Minor explained.

The choice of “precision health” rather than “precision medicine” is deliberate, and the distinction reflects Stanford’s leadership role. While both precision health and precision medicine are targeted and personalized, precision health is proactive, with an emphasis on maintaining health. In contrast, precision medicine is reactive, with a focus on caring for the sick. Precision health includes prediction and prevention; precision medicine involves diagnosis and treatment.

Minor used the model of a tree to describe Stanford’s focus on precision health.

Basic research and biomedical data science form the trunk, the foundation that supports the entire endeavor. Nine “biomedical platforms” form the major branches; these platforms include immunology, cancer biology and the neurosciences, among others. The tree’s leaves are its clinical core, with treatment teams in cardiac care, cancer and maternal and newborn health, for example.

The growth of the tree, its very top, is fueled by predictive, preventive and longitudinal care — where innovations in knowledge and care drive further changes in the future of health care.

Minor made two key points about the tree, and its implications for research and care at Stanford.

First, the tree is big and growing. “There is room for everyone on the tree,” he said. “That is one thing that will make this plan — this tree — so powerful.”

Secondly, the tree is ever-changing. “Care will be analyzed and fed back. That’s really the true heart and meaning of the learning health-care system,” Minor said. “Every encounter is part of a much bigger whole.”

The entire effort will be fueled by big data, Minor said. To recognize its importance, and to help train future leaders, Stanford Medicine also plans to create a new department of biomedical data science.

“We’re poised to lead,” Minor said. “We build upon a history of innovation, an entrepreneurial mindset, visionary faculty and students and a culture of collaboration.”

Previously: Big Data in Biomedicine conference kicks off today, Stanford Medicine’s Lloyd Minor on re-conceiving medical education and Meet the medical school’s new dean: Lloyd Minor
Photo by Saul Bromberger

Big data, BigDataMed15, Events, Medicine and Society, Microbiology, Research, Technology

At Big Data in Biomedicine, Nobel laureate Michael Levitt and others talk computing and crowdsourcing

Nobel laureate Michael Levitt, PhD, has been using big data since before data was big. Throughout his decades-long career, the Stanford professor of structural biology has tapped the most computing power he could access to simulate protein structure and movement.

Despite massive advances in technology, key challenges remain when using data to answer fundamental biological questions, Levitt told attendees of the second day of the Big Data in Biomedicine conference. It’s hard to translate gigabytes of data capturing a specific biological problem into a form that appeals to non-scientists. And even today’s supercomputers lack the ability to process information on the behavior of all atoms on Earth, Levitt pointed out.

Levitt’s address followed a panel discussion on computation and crowdsourcing, featuring computer-science specialists who are developing new ways to use computers to tackle biomedical challenges.

Kunle Olukotun, PhD, a Stanford professor of electrical engineering and computer science, had advice for biomedical scientists: Don’t waste your time on in-depth programming. Instead, harness the power of a domain-specific language tailored to let you pursue your research goals efficiently.
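As a hedged illustration of that advice (not Olukotun’s own tools), here is a minimal Python sketch in which NumPy’s array language stands in for a domain-specific layer: the researcher states what to compute, and the library decides how to execute it.

```python
# Illustrative only: contrasting hand-rolled loops with a declarative,
# domain-specific style. NumPy stands in for the kind of DSL described
# on the panel; this is not the panelists' actual tooling.
import numpy as np

# Toy genotype matrix: rows = samples, columns = variant sites (0/1/2 copies).
genotypes = np.random.randint(0, 3, size=(1000, 500))

# Low-level approach: the researcher spells out every loop.
def allele_freq_loops(g):
    n_samples, n_sites = g.shape
    freqs = [0.0] * n_sites
    for j in range(n_sites):
        total = 0
        for i in range(n_samples):
            total += g[i][j]
        freqs[j] = total / (2 * n_samples)
    return freqs

# Domain-specific style: say *what* you want; the library chooses *how*
# (vectorized inner loops, and potentially parallel backends elsewhere).
def allele_freq_declarative(g):
    return g.mean(axis=0) / 2.0

assert np.allclose(allele_freq_loops(genotypes), allele_freq_declarative(genotypes))
```

The point is not NumPy specifically but the division of labor: the domain scientist writes the second version, and performance specialists remain free to change how it runs underneath.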

Panelists Rhiju Das, PhD, assistant professor of biochemistry at Stanford, and Matthew Might, PhD, an associate professor of computer science at the University of Utah, have turned to the power of the crowd to solve problems. Das uses crowdsourcing to tackle a universal problem (RNA folding), while Might has used the crowd for a personal one (his son’s rare genetic illness).

For Das, an online game called Eterna – and its players – has helped his team develop an algorithm that predicts much more accurately whether an RNA sequence will fold correctly, a key step in developing RNA-based treatments for diseases such as HIV.

And for Might, crowdsourcing helped him discover other children who, like his son Bertrand, have an impaired NGLY1 gene. (His story is told in this New Yorker article.)

Panelist Eric Dishman, general manager of the Health and Life Sciences Group at Intel Corporation, offered conference attendees a reminder: Behind the technology lies a human. Heart rates, blood pressure and other biomarkers aren’t the only trends worth monitoring using technology, he said.

Behavioral traits also offer key insights into health, he explained. For example, his team has used location trackers to see which rooms elderly people spend time in. When there are too many trips to the bathroom, or the person spends most of the day in the bedroom, health-care workers can see something is off, he said.
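A minimal sketch of the kind of rule Dishman describes, with hypothetical thresholds, room names and data layout (this is not Intel’s system):

```python
# Hypothetical illustration of behavior-based monitoring; thresholds, room
# names and data layout are invented for the example, not Intel's system.
from collections import Counter

def flag_day(room_log, bathroom_visit_limit=8, bedroom_fraction_limit=0.7):
    """room_log: list of (room, minutes) entries for one day."""
    minutes = Counter()
    visits = Counter()
    for room, mins in room_log:
        minutes[room] += mins
        visits[room] += 1

    total = sum(minutes.values()) or 1
    alerts = []
    if visits["bathroom"] > bathroom_visit_limit:
        alerts.append("unusually many bathroom visits")
    if minutes["bedroom"] / total > bedroom_fraction_limit:
        alerts.append("most of the day spent in the bedroom")
    return alerts

# Example day: mostly bedroom, frequent bathroom trips -> both alerts fire.
day = [("bedroom", 60)] * 12 + [("bathroom", 5)] * 10 + [("kitchen", 45)]
print(flag_day(day))
```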

Action from the rest of the conference, which concludes today, is available via live-streaming and this app. You can also follow conversation on Twitter by using the hashtag #bigdatamed.

Previously: On the move: Big Data in Biomedicine goes mobile with discussion on mHealth, Gamers: The new face of scientific research?, Half-century climb in computer’s competence colloquially captured by Nobelist Michael Levitt and Decoding proteins using your very own super computer
Photo of Michael Levitt by Saul Bromberger

Big data, BigDataMed15, Events, Patient Care, Research, Stanford News, Technology

Experts at Big Data in Biomedicine: Bigger, better datasets and technology will benefit patients

The explosion of big data is transforming the way those in health care are diagnosing, treating and preventing disease, panelists said on the opening day of the Big Data in Biomedicine conference.

During a five-member panel on population health, experts outlined work that is currently being done but said even bigger datasets and better technology are needed to ramp up the benefits from digital data and to save lives.

“Using the N-of-millions to inform care for the N-of-one – that is exactly where we’re going,” said Tracy Lieu, MD, MPH, director of research at Kaiser Permanente Northern California, a health-care network that includes 21 hospitals, 8,000 physicians and 3.6 million patients. “And we think that in a population like ours, in an integrated system like ours, we are in an ideal setting to do personalized medicine.”

Stanford Medicine professor Douglas Owens, MD, director of the Center for Health Policy and the Center for Primary Care and Outcomes Research, led the panel on Wednesday. He said that big data is also changing how research is conducted.

“There’s been an explosion of data of all kinds: clinical data, genomics data, data about what we do and how we live,” said Owens. “And the question is how can we best use that data to improve the health of the individual and to improve the health of populations.”

Lieu said two key trends are central to medical researchers: informatics and genomics. She told attendees that Kaiser uses a “virtual data warehouse” with the digital data of 14 million patients dating back to 1960. But Lieu cautioned that assembling data is not an end in itself, particularly if the findings are never tested and implemented.

“Sometimes we fail. And we fail when we identify a problem of interest, we make a decision to study it, we assemble the data, we analyze and interpret the results – and then we send them off to journals. So we fail to close the loop,” she said, because researchers typically don’t go beyond the publication of data.

Lieu said Kaiser is now focused on trying to close that loop. “To do that, we need the kinds of tools that you in this group and the speakers at this conference are developing,” she explained. “We need better and better technology for rapidly analyzing and aggregating data.”

Big data, BigDataMed15, Events, Medicine and Society, Research, Technology

On the move: Big Data in Biomedicine goes mobile with discussion on mHealth

Ida Sim, MD, PhD, would like to prescribe data as easily as she orders a blood test or a prescription for antibiotics. Sim, a professor of medicine at the University of California, San Francisco, told attendees of a Big Data in Biomedicine panel on mHealth yesterday afternoon that she doesn’t want access to data collected willy-nilly, with little regard for the patient’s health condition or needs.

Instead, she wants to tailor data collection to the individual patient. For example, there’s no need to collect activity data for a competitive marathoner, but it would be useful for a sedentary computer programmer.

And she doesn’t care how patients collect their data; they can “bring their own device,” said Sim, who also co-directs biomedical informatics at the UCSF Clinical and Translational Sciences Institute.

The design of those devices is integral to the quality of the data they produce, pointed out panelist Ram Fish, vice president of digital health at Samsung. He said his team starts with “small data,” making sure devices such as its Simband watch accurately record biomarkers such as blood pressure and heart rate in a single individual before expanding to the population level.

He said he’s most keen on developing tools that make a real difference in health, such as the detection of abnormal heart rhythms, a project still in the works.

And speaking of new tools, Stanford’s Euan Ashley, MD, PhD, associate professor of medicine and of genetics, shared some early results from the cardiovascular app MyHeart Counts, which Stanford introduced in March to great acclaim.

Ashley reported that the study has yielded information about the link between sleep patterns and happiness (those who go to bed late and get up late are less happy than others) and about geographic patterns of produce consumption (South Dakota users out-eat Californians when it comes to fruits and veggies). The project’s team is just starting to delve into some of its other findings, which include correlations between the 6-minute timed walk and overall health.

“We’re in a really new era and one we don’t really understand,” Ashley said.

Media, Medicine and Society, Technology

Upset stomachs and hurting feet: A look at how people use Twitter for health information

MedCity News ran an incredibly informative article earlier this week on how people use social media – and more specifically, Twitter – to consume and discuss health information. Reporting on a recent talk from Twitter engineer Craig Hashi at Cleveland Clinic’s ePatient Experience: Empathy + Innovation Summit, Neil Versel shared:

Some 40 percent of consumers believe that information they found on social media affects how they deal with their health, [Hashi] said. A quarter of Internet users with chronic illnesses look for people with similar health issues. And 42 percent search online for reviews of health products, treatments and providers.

Twitter processes 23,000 weekly tweets with the words “feet hurt,” and the frequency naturally increases as the day and the work week go on, though many people tweet that when they get home on Saturday night as well. “Dr. Scholl’s can actually come in and reach these people,” Hashi suggested.

“Allergy” tweets mostly occur between March and June, Hashi said. “Sunscreen” also peaks in the late spring and summer. “Uncomfortable tummies” is highest on Thanksgiving, with lesser spikes at Christmas and on Super Bowl Sunday. Hashi said that Tums advertised on Twitter around Thanksgiving.

And for those who question the value of Twitter, or don’t quite understand its place in health care, these figures might give you pause: “The volume of information available on Twitter is staggering, Hashi said. There are half a billion tweets sent every day. There will be more words on Twitter in the next two years than in all books ever printed. An analysis Hashi put together found that there were 44 million cancer-related tweets in the 12 months ending in March 2015, and traffic spiked in October, which happens to be Breast Cancer Awareness Month.”

Previously: Finding asthma outbreaks using Twitter: How social media can improve disease detection, Advice for young doctors: Embrace Twitter, Twitter 101 for patients, Bertalan Meskó discusses how mobile technologies can improve the delivery of health care and What to think about when using social media for health information

Big data, BigDataMed15, Events, Genetics, Research, Technology

Big Data in Biomedicine panelists: Genomics’ future is bright, thanks to data-science tools

Stanford’s annual Big Data in Biomedicine conference began this morning with a “breathtaking set of talks,” as described by Russ Altman, MD, PhD, a Stanford professor of bioengineering, of genetics and of medicine.

The first panel focused on genomics, with the speakers presenting a dizzying forecast of a future where biomedical data is standardized and easily accessible to researchers, yet carefully guarded to protect privacy.

“How do we build this in a way that allows you to spend time working on your science, and not spend your time worrying about reinventing the plumbing?” asked David Glazer, director of engineering at Google and a speaker on the panel.

His team is hard at work ensuring the infrastructure of the Google Cloud Platform can withstand the rigorous demands of a slew of big data projects, including the Million Veteran Program and MSSNG, an effort to understand the genetics of autism.

For panelist Heidi Rehm, PhD, associate professor of pathology at Harvard Medical School and director of the Partners Laboratory for Molecular Medicine, a key hurdle is standardizing definitions and ensuring that supporting evidence is available for system users. For example, data developers should be able to demonstrate why a particular gene variant has been deemed benign, and what definition of “benign” they are using, she said.

Her team has developed a star system, which rates sources of data by their credibility, giving results submitted by expert panels more stars than data submitted by a single researcher.
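As a rough, hypothetical sketch of such a tiered rating (the submitter categories and star counts here are invented for illustration, not Rehm’s exact scale):

```python
# Hypothetical star ratings by submitter type, loosely inspired by the
# tiered-review idea described above; the exact levels are invented.
STARS_BY_SUBMITTER = {
    "single_submitter": 1,
    "multiple_submitters_concordant": 2,
    "expert_panel": 3,
    "practice_guideline": 4,
}

def credibility(assertions):
    """Return the highest star rating among a variant's assertions."""
    return max(STARS_BY_SUBMITTER.get(a, 0) for a in assertions)

print(credibility(["single_submitter", "expert_panel"]))  # -> 3
```

A real system would also record the supporting evidence behind each assertion, which is the gap Rehm highlights.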

Rehm also addressed the pros and cons of various models for sharing data. Rather than collecting it all centrally, she said she expects data will be shared through a small number of interconnected hubs, similar to the hub model airlines use to route traffic.

Individuals are not standing in the way of research advances, reported panelist Jill Hagenkord, MD, chief medical officer of the personal genetics company 23andMe. She said that of the company’s 950,000 customers, nearly 80 percent have agreed to share their data for research. Participants are also eager to provide additional information when asked, Hagenkord said. It becomes almost a philanthropic effort; they feel grateful that someone is interested in their conditions, she said.

Big data, Genetics, Research, Technology, Videos

“An extremely interesting time to be a geneticist”: Using big data to identify rare diseases

With cheaper, faster genetic sequencing, researchers are able to pinpoint rare gene variants that may be contributing to disease.

But to find “the actual, causal rare variant contributing to the trait is like looking for a needle in a haystack,” says Stephen Montgomery, PhD, in the video above.

Montgomery and his team have plans to boost the efficacy of using genome sequencing to identify rare diseases by incorporating all of the information from genes that are actually turned on — using RNA in addition to its parent DNA to make that needle really stand out.

Eventually, Montgomery hopes to mix in even more information including details about individual lifestyles, environmental exposures and family histories to glean further insights into the origins of rare disease. His team received a 2014 Big Data for Human Health Seed Grant to support the work.

“We’re going to be able to answer very quickly questions about how the genome is influencing our lives and then we’re also going to be able to treat (these conditions),” Montgomery says. “This is an extremely interesting time to be a geneticist and these large data sets are just empowering a large number of discoveries.”

This effort is part of Stanford Medicine’s Biomedical Data Science Initiative (BDSI), which strives to make powerful transformations in human health and scientific discovery by fostering innovative collaborations among medical researchers, computer scientists, statisticians and physicians. Work being done in this area is the focus of Stanford’s Big Data in Biomedicine conference, which kicks off tomorrow morning.

Previously: Collecting buried biomedical treasure – using big data, All data – big and small – informs large-scale neuroscience project, Registration for Big Data in Biomedicine conference now open, Parent details practical ways to get care and support for your child’s rare disease, New search engine designed to help physicians and the public in diagnosing rare diseases and Big data used to help identify patients at risk of deadly high-cholesterol disorder

Patient Care, Pediatrics, Research, Stanford News, Technology

A new tool for tracking harm in hospitalized children

In the 15 years since the Institute of Medicine issued its groundbreaking report showing frequent harm caused by medical care, researchers have worked to devise efficient, reliable ways to detect harm to patients. Finding out which aspects of care most often hurt patients is a key step in reducing these harms, but voluntary reports, in which caregivers are asked to document harm they cause, identify only a small percentage of total harms.

New research published today in Pediatrics describes a better approach for tracking harm to kids in hospitals. Using the system on 600 medical charts from six U.S. children’s hospitals, the researchers found that almost 25 percent of patients included in the chart review had experienced at least one harm, and that 45 percent of these harms were probably preventable. The approach, called a “trigger tool,” was based on a similar harm-tracking method designed for hospitalized adult patients. Researchers look at each medical chart for “triggers” – events or lab measurements often associated with harm – and when they find a trigger, explore the medical chart in detail around the time of the trigger to see if harm occurred.
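A minimal, hypothetical sketch of that trigger-review workflow, with invented trigger names, record layout and review window rather than the published tool’s actual definitions:

```python
# Hypothetical sketch of a "trigger tool" pass over chart events; the trigger
# list, field names and time window are illustrative, not the published tool.
from datetime import datetime, timedelta

TRIGGERS = {"naloxone_given", "INR_above_4", "rapid_response_call"}
REVIEW_WINDOW = timedelta(hours=24)

def windows_to_review(chart_events):
    """chart_events: list of (timestamp, event_name) for one patient."""
    windows = []
    for when, event in chart_events:
        if event in TRIGGERS:
            # Reviewers then examine the chart around each trigger to judge
            # whether a harm actually occurred and whether it was preventable.
            windows.append((event, when - REVIEW_WINDOW, when + REVIEW_WINDOW))
    return windows

chart = [
    (datetime(2015, 5, 1, 9, 30), "acetaminophen_given"),
    (datetime(2015, 5, 2, 14, 0), "naloxone_given"),
]
for trigger, start, end in windows_to_review(chart):
    print(f"Review chart from {start} to {end} (trigger: {trigger})")
```

The automated pass only narrows where to look; judging whether harm occurred, and whether it was preventable, remains a detailed chart review, as the article describes.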

“This tool will allow us to better understand the epidemiology of harm in hospitalized children, as well as give us the capacity to track harms over time to determine if our interventions are making an improvement,” said senior study author Paul Sharek, MD, an associate professor of pediatrics and chief clinical patient safety officer at Lucile Packard Children’s Hospital Stanford and Stanford Children’s Health. He collaborated with scientists from several other institutions on the research.

I talked with Sharek last week about the study’s findings and implications. To start, I asked him to give me an example that would help me understand the difference between preventable and non-preventable harm. A child who receives a medication that provokes an allergic reaction has experienced a non-preventable harm if it’s the first time the child ever got the drug, and there were no clues beforehand that she had the allergy, he told me. But if the drug allergy was already known and the patient got the drug anyway and had an allergic reaction, that is a preventable harm.

The high rate of preventable harms shows that there is a lot of room to make all hospitals safer for kids, Sharek said. One surprise in the data was that nine common healthcare-acquired conditions that have been targeted by national safety efforts – including central line-associated bloodstream infections, ventilator-associated pneumonia and surgical site infections – together accounted for only 4 percent of all harms identified in this study. “If we were able to eliminate every one of these, according to these data, we’d still be left with 96 percent of the harms we identified,” Sharek said.

Big data, In the News, Technology

Vinod Khosla shares thoughts on disrupting health care with data science

Prominent Silicon Valley venture capitalist Vinod Khosla is a strong believer that data science will reinvent health care as we know it – and it’s a position he has reiterated on a number of occasions, including at the 2014 Big Data in Biomedicine conference at Stanford. In a recently published Washington Post Q&A, Khosla expands on his comment that over the next ten years “data science and software will do more for medicine than all of the biological sciences together.”

On the topic of books and papers that have influenced his views, Khosla said:

A lot of what I’ve been thinking about started with articles by Dr. John Ioannidis at Stanford School of Medicine. What he found through decades of meta-research is that half of what’s in medical studies is just plain wrong… His research is focused on why they are wrong and why all sorts of biases are introduced in medical studies and medical practice.

He also explains one of the reasons he believes innovation in data science and software is outpacing the biological sciences:

The pace of innovation in software, across all industries, has consistently been much faster than anything else. Within traditional health-care innovation (which intersects with “biological sciences”) such as the pharma industry, there are a lot of good reasons those cycles of innovation are slow.

It takes 10 to 15 years to develop a drug and actually be in the marketplace, with an incredibly high failure rate. Safety is one big issue, so I don’t blame the process. I think it’s warranted and the [Food and Drug Administration] is appropriately cautious. But because digital health often has fewer safety effects, and iterations can happen in 2- to 3-year cycles, the rate of innovation goes up substantially.

Previously: Countdown to Big Data in Biomedicine: Leveraging big data technology to advance genomics, Countdown to Big Data in Biomedicine: Mining medical records to identify patterns in public health, Collecting buried biomedical treasure – using big data, Big data used to help identify patients at risk of deadly high-cholesterol disorder and Examining the potential of big data to transform health care
Photo of Khosla at the 2014 Big Data in Biomedicine conference by Saul Bromberger

AHCJ15, Applied Biotechnology, Imaging, Mental Health, Neuroscience, Technology

Talking about “mouseheimers,” and a call for new neuroscience technologies

Our ability to technologically assess the brain has room for improvement, according to panelists at the recent Association of Health Care Journalists 2015 conference. Amit Etkin, MD, PhD, a psychiatrist and neuroscientist at Stanford, summed it up when he said, “We need to develop tools to answer questions we want to ask, rather than ask questions we can answer with the tools we have.”

Etkin asserted that there have been no fundamental advances in psychiatry since 1987; all the medications put out now are basically the same, and the treatments work partially, sometimes, and for only some people. Interdisciplinary work combining psychiatry, neuroscience, and radiology is the frontier: Researchers are just getting a sense of how “interventional neuroscience,” such as that pioneered at the interdisciplinary NeuroCircuit initiative at the Stanford Neurosciences Institute, can identify which brain regions control various processes. This involves looking at brain signatures that are common across disorders, instead of dividing and parsing symptoms, which is the approach of the Diagnostic and Statistical Manual of Mental Disorders.

Researchers are searching for an ideal marker for Alzheimer’s: something predictive (will you get the disease?), diagnostic (do you have the disease?) and dynamic (how severe is your disease right now?).

Michael Greicius, MD, MPH, professor of neurology and neurological sciences at Stanford, researches Alzheimer’s and has a bone to pick with media hype about Alzheimer’s research conducted in mice. What the mice have shouldn’t be considered the same condition, he says, so he’s termed it “mouseheimer’s.” Only 2 percent of the Alzheimer’s population has the dominant, inherited, exceedingly potent genetic form, which is the form used in research on rodents. Further, the mice are double or even triple transgenic. We still use these improbable biological hosts because we need an artificial model: Alzheimer’s is really just a human thing; even great apes don’t get it. The next best modeling possibility, he suggested, is flies.
