Published by Stanford Medicine


Big data, Media, Stanford News

Stanford’s Big Data in Biomedicine chronicled in tweets, photos and videos



At this year’s Big Data in Biomedicine conference, a crowd of close to 500 people gathered at Stanford to discuss how advances in computational processing power and interconnectedness are changing medical research and the practice of medicine. Another 1,000 virtual attendees joined in the discussion via the live webcast, and several hundred participated in the conversation on social media.

We’ve captured a selection of the tweets, photos, videos and blog posts about the conference on the School of Medicine’s Storify page. On the page, you’ll find an interview with Philip Bourne, PhD, associate director for data science at the National Institutes of Health, talking about the importance of “data to the biomedicine enterprise,” news stories on how big data holds the potential to improve everything from drug development to personalized medicine, and official conference photos and twitpics from attendees. You’ll also find a conference group photo and a recap of the event written by my colleague Bruce Goldman.

For those of you who missed the event, and for those who want to participate again, our next Big Data in Biomedicine conference has been scheduled for May 20-22, 2015.

Previously: Videos of Big Data in Biomedicine keynotes and panel discussions now available online, Rising to the challenge of harnessing big data to benefit patients and Discussing access and transparency of big data in government
Photo by Saul Bromberger

Big data, Cancer, Research, Science, Stanford News, Videos

Will hypothesis- or data-driven research advance science? A Stanford biochemist weighs in


The 2014 Big Data in Biomedicine conference was held here last month, and interviews with keynote speakers, panelists, moderators and attendees are now available on the Stanford Medicine YouTube channel. To continue the discussion of how big data can be harnessed to benefit human health, we’ll be featuring a selection of the videos this month on Scope.

Julia Salzman, PhD, a Stanford assistant professor of biochemistry, is concerned that a significant amount of data is being thrown in the trash “because the data don’t fit our sense of what they should look like.” At Big Data in Biomedicine 2014, she explained how giving her computers a long leash led her down an unexpected path and the discovery of a new, and probably noteworthy, biological entity. My colleague Bruce Goldman highlighted her findings in a news release:

Using computational pattern-recognition software, her team discovered numerous instances in which pieces of RNA that normally are stitched together in a particular linear sequence were, instead, assembled in the “wrong” order (with what’s normally the final piece in the sequence preceding what’s normally the first piece, for example). The anomaly was resolved with the realization that what Salzman and her group were seeing were breakdown products of circular RNA — a novel conformation of the molecule.

In its circular form, she noted, an RNA molecule is much more impervious to degradation by ubiquitous RNA-snipping enzymes, so it is more likely than its linear RNA counterparts to persist in a person’s blood. Every cell in the body produces circular RNA, she said, but it seems to be produced at greater levels in many human cancer cells. While its detailed functions remain to be revealed, these features of circular RNA may position it as an excellent target for a blood test, she said.
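The scrambled-order pattern described in the excerpt above can be illustrated with a toy sketch. This is not Salzman’s actual pipeline; the gene, exon names and junction data are hypothetical, and real circular-RNA detection works on genome-aligned sequencing reads. The core idea, though, is simple: a junction whose downstream exon precedes its upstream exon in the genomic layout is the signature of a backsplice.

```python
# Toy illustration (not Salzman's software): flag RNA-seq exon junctions
# whose order is "scrambled" relative to the gene's linear layout -- the
# hallmark of a circular-RNA backsplice junction.

# Linear exon order for a hypothetical gene: exon1 -> exon2 -> exon3 -> exon4
LINEAR_ORDER = {"exon1": 1, "exon2": 2, "exon3": 3, "exon4": 4}

def classify_junction(upstream_exon, downstream_exon):
    """A read joining exon A to exon B is 'linear' if A comes before B
    in the genomic layout, and 'scrambled' (backsplice-like) otherwise."""
    if LINEAR_ORDER[upstream_exon] < LINEAR_ORDER[downstream_exon]:
        return "linear"
    return "scrambled"

# Hypothetical junctions observed in sequencing reads; the 4->1 junction
# is the kind of "wrong order" event that revealed circular RNA.
junctions = [("exon1", "exon2"), ("exon3", "exon4"), ("exon4", "exon1")]
calls = [classify_junction(a, b) for a, b in junctions]
print(calls)  # ['linear', 'linear', 'scrambled']
```

In practice the hard part is statistical: distinguishing genuine backsplice junctions from alignment artifacts, which is where the pattern-recognition software earns its keep.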

In the above Behind the Scenes at Big Data video, Salzman discusses her work and addresses a question asked during the Single Cells to Exacycles panel: In this next era of science, will science advance mainly through hypothesis- or data-driven research? She comments, “I think that’s a fundamental question moving forward, whether the scientific method is dead or whether it’s still alive and kicking. I think that’s a really important question for us to answer and deal with as scientists.” Watch the interview to find out the rest of Salzman’s thoughts on the issue.

Previously: Rising to the challenge of harnessing big data to benefit patients, Discussing access and transparency of big data in government and U.S. Chief Technology Officer kicks off Big Data in Biomedicine

Big data, Obesity, Pregnancy, Public Health, Women's Health

Maternal obesity linked to earliest premature births, says Stanford study


Expectant mothers who are obese before they become pregnant are at increased risk of delivering a very premature baby, according to a new study of nearly 1,000,000 California births.

The study, which appears in the July issue of Paediatric and Perinatal Epidemiology, is part of a major research effort by the March of Dimes Prematurity Research Center at Stanford University School of Medicine to understand why 450,000 U.S. babies are being born too early each year. Figuring out what causes preterm birth is the first step in understanding how to prevent it, but in many cases, physicians have no idea why a pregnant woman went into labor early.

The new study focused on preterm deliveries of unknown cause, starting from a database of nearly every California birth between January 2007 and December 2009 to examine singleton pregnancies where the mother did not have any illnesses known to be associated with prematurity.

The researchers found a link between mom’s obesity and the earliest premature births, those that happen before 28 weeks, or about six months, of pregnancy. The obesity-prematurity connection was stronger for first-time moms than for women having their second or later child. Maternal obesity was not linked with preterm deliveries that happen between 28 and 37 weeks of the 40-week gestation period.

From our press release about the research:

“Until now, people have been thinking about preterm birth as one condition, simply by defining it as any birth that happens at least three weeks early,” said Gary Shaw, DrPH, professor of pediatrics and the lead author of the new research. “But it’s not as simple as that. Preterm birth is not one construct; gestational age matters.”

The researchers plan to investigate which aspects of obesity might trigger very early labor. For example, Shaw said, the inflammatory state seen in the body in obesity might be a factor, though more work is needed to confirm this.

Previously: How Stanford researchers are working to understand the complexities of preterm birth, A look at the world’s smallest preterm babies and New research center aims to understand premature birth
Photo by Evelyn

Big data, Global Health, Infectious Disease, Videos

Discussing the importance of harnessing big data for global-health solutions


The 2014 Big Data in Biomedicine conference was held here last month, and interviews with keynote speakers, panelists, moderators and attendees are now available on the Stanford Medicine YouTube channel. To continue the discussion of how big data can be harnessed to benefit human health, we’ll be featuring a selection of the videos this month on Scope.

At this year’s Big Data in Biomedicine conference, Michele Barry, MD, FACP, senior associate dean and director of the Center for Innovation in Global Health at Stanford, moderated a panel on infectious diseases. During the discussion, she raised the point that the lines between infectious disease and non-communicable disease are becoming increasingly blurred.

In the above video, Barry expands on this point and offers her point of view on the role big data can play in advancing global health solutions. “Big Data is clearly important these days to get a larger picture of population health,” she says. “What I’m concerned about, and would love to see happen, is for big data surveillance to happen in developing countries and under-served areas, particularly in Sub-Saharan Africa.” Watch Barry’s interview to understand how harnessing big data to improve preventative care for large populations could benefit all of us.

Previously: Stanford statistician Chiara Sabatti on teaching students to “ride the big data wave”, Using Google Glass to help individuals with autism better understand social cues, Rising to the challenge of harnessing big data to benefit patients and U.S. Chief Technology Officer kicks off Big Data in Biomedicine

Big data, Public Health, Research, Stanford News, Videos

Videos of Big Data in Biomedicine keynotes and panel discussions now available online



Computational processing power and interconnectedness are causing massive, ongoing advances in biomedical research and health care. But, as discussed at the Big Data in Biomedicine conference, large-scale data analysis also holds the potential to be even more disruptive and transform how we diagnose, treat and prevent disease.

Those who weren’t able to attend the event or watch the webcast, as well as others who may want to review the presentations a second time, can now watch videos of a selection of the keynote speeches and panel discussions on the conference website.

Among the videos available is a talk by David Glazer, director of engineering at Google, about how the company is working to foster collaboration between biomedical researchers who need to analyze vast amounts of data and those with the technological tools to do so. In another talk, Taha Kass-Hout, MD, chief health informatics officer at the Food and Drug Administration, outlined the importance of big data to the federal agency’s mission “to protect and promote the public health” and in promoting information-sharing with transparency and protection of privacy. The video above – the final keynote from Vinod Khosla, MBA, founder of Khosla Ventures and a co-founder of Sun Microsystems – is a must-watch. The legendary venture capitalist sparked debate when he shared his perspective that “technology will replace 80 to 90 percent of doctors’ role in the decision-making process.”

Previously: Stanford statistician Chiara Sabatti on teaching students to “ride the big data wave”, Using Google Glass to help individuals with autism better understand social cues, Rising to the challenge of harnessing big data to benefit patients and Discussing access and transparency of big data in government.

Big data, Genetics, Stanford News, Technology, Videos

Anne Wojcicki discusses personalized medicine: “In the next 10 years everyone will have their genome”


The 2014 Big Data in Biomedicine conference was held here last month, and interviews with keynote speakers, panelists, moderators and attendees are now available on the Stanford Medicine YouTube channel. To continue the discussion of how big data can be harnessed to benefit human health, we’ll be featuring a selection of the videos this month on Scope.

Anne Wojcicki, CEO and co-founder of personal-genetics company 23andMe, delivered a keynote speech at Big Data in Biomedicine in 2013 about empowering patients and the importance of owning one’s genetic data. Returning to the conference this year as an attendee, Wojcicki spoke in a Behind the Scenes at Big Data interview about, among other things, her early interest in genes, her belief that genetics is an important part of preventative care, and her desire for a framework in which patient communities can easily participate in, and potentially direct, medical research. She also discussed the status of 23andMe in the U.S. Food and Drug Administration authorization process and sounded a hopeful note about patients’ future access to their genetic information. “I believe that in the next 10 years everyone will have their genome,” she said.

Previously: When it comes to your genetic data, 23andMe’s Anne Wojcicki says: Just own it

Big data, Genetics, Research, Stanford News, Videos

Stanford statistician Chiara Sabatti on teaching students to “ride the big data wave”


The 2014 Big Data in Biomedicine conference was held here last month, and interviews with keynote speakers, panelists, moderators and attendees are now available on the Stanford Medicine YouTube channel. To continue the discussion of how big data can be harnessed to benefit human health, we’ll be featuring a selection of the videos this month on Scope.

During the Big Data in Biomedicine conference, Chiara Sabatti, PhD, an associate professor of health research and policy at Stanford, moderated a panel on statistics and machine learning. In the above video, Sabatti highlights a Stanford undergrad course titled “Riding the Big Data Wave” (she calls it a gentle introduction to statistics) and discusses how students in the class are exploring data sets available on the Internet and what can be learned from them. She also references her work building statistical methods that enable researchers to understand the content in these data sets, and her research examining how the genome influences human phenotypes, or observable characteristics such as height, weight and cholesterol levels.

Previously: Rising to the challenge of harnessing big data to benefit patients, Discussing access and transparency of big data in government and U.S. Chief Technology Officer kicks off Big Data in Biomedicine

Autism, Big data, Stanford News, Technology, Videos

Using Google Glass to help individuals with autism better understand social cues


The 2014 Big Data in Biomedicine conference was held here last month, and interviews with keynote speakers, panelists, moderators and attendees are now available on the Stanford Medicine YouTube channel. To continue the discussion of how big data can be harnessed to benefit human health, we’ll be featuring a selection of the videos this month on Scope.

At the Big Data in Biomedicine 2014 conference, Dennis Wall, PhD, associate professor of pediatrics in systems medicine at Stanford, discussed how he and colleagues are leveraging home videos and a seven-point parent questionnaire to diagnose autism. In a pair of Behind the Scenes at Big Data videos, Wall discusses the research and its potential to speed up the standard diagnosis process, as well as another project aimed at using Google Glass to help autistic individuals better read others’ emotions. Watch the above clip to learn how the wearable technology could be used for a new type of behavioral therapy.

Previously: Rising to the challenge of harnessing big data to benefit patients and Home videos could help diagnose autism, says new Stanford study

Big data, Stanford News, Technology

What computation tells us about how our bodies work


Last week, as the 2014 Big Data in Biomedicine conference came to a close, a related story about the importance of computing across disciplines posted on the Stanford University homepage. The article describes research making use of the new Stanford Research Computing Center, or SRCC (which we blogged about here). We’re now running excerpts from that piece about the role computation, as well as big data, plays in medical advances.

As you sip your morning cup of coffee, the caffeine makes its way to your cells, slots into a receptor site on the cells’ surface, and triggers a series of reactions that jolt you awake. A similar process takes place when Zantac provides relief for stomach ulcers, or when chemical signals produced in the brain travel cell-to-cell through your nervous system to your heart, telling it to beat.

In each of these instances, a drug or natural chemical is activating a cell’s G-protein coupled receptor (GPCR), the cellular target of roughly half of all known drugs, says Vijay Pande, PhD, a professor of chemistry and, by courtesy, of structural biology and computer science at Stanford. This exchange is a complex one, though. In order for caffeine or any other molecule to influence a cell, it must fit snugly into the receptor site, which consists of 4,000 atoms and transforms between an active and inactive configuration. Current imaging technologies are unable to view that transformation, so Pande has been simulating it using his Folding@Home distributed computer network.

So far, Pande’s group has demonstrated a few hundred microseconds of the receptor’s transformation. Although that’s an extraordinarily long chunk of time compared to similar techniques, Pande is looking forward to accessing the SRCC to investigate the basic biophysics of GPCR and other proteins. Greater computing power, he says, will allow his team to simulate larger molecules in greater detail, simulate folding sequences for longer periods of time, and visualize multiple molecules as they interact. It might even lead to atom-level simulations of processes at the scale of an entire cell. All of this knowledge could be applied to computationally design novel drugs and therapies.
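The arithmetic behind distributed simulation helps explain why a few hundred microseconds is remarkable: many volunteer machines each run short, independent trajectories whose lengths add up to aggregate simulated time no single computer could reach. The sketch below is a back-of-the-envelope illustration only; the machine counts and throughput figures are hypothetical, not Folding@Home statistics.

```python
# Back-of-the-envelope sketch of distributed molecular simulation
# (hypothetical numbers, not actual Folding@Home throughput): many
# machines each contribute short trajectories, and their simulated
# times sum into an aggregate total.

def aggregate_simulated_time_us(n_machines, ns_per_day_each, days):
    """Total simulated time, in microseconds, contributed by n_machines
    each producing ns_per_day_each nanoseconds of trajectory per day."""
    return n_machines * ns_per_day_each * days / 1000.0

# e.g. 10,000 volunteer machines each producing 50 ns/day for 30 days
total = aggregate_simulated_time_us(10_000, 50, 30)
print(total)  # 15000.0 microseconds of aggregate trajectory
```

The catch, of course, is that aggregate time is spread across many short trajectories; methods such as Markov state models are what stitch those fragments into a picture of a slow transformation like a receptor changing configuration.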

“Having more computer power can dramatically change every aspect of what we can do in my lab,” says Pande, who is also a Stanford Bio-X affiliate. “Much like having more powerful rockets could radically change NASA, access to greater computing power will let us go way beyond where we can go routinely today.”

Previously: Computing our evolution, Learning how to learn to read, Personal molecular profiling detects diseases earlier, New computing center at Stanford supports big data and Nobel winner Michael Levitt’s work animates biological processes
Photo by Toshiyuki IMIA

Big data, Genetics, Stanford News, Technology

Computing our evolution


Last week, as the 2014 Big Data in Biomedicine conference came to a close, a related story about the importance of computing across disciplines posted on the Stanford University homepage. The article describes research making use of the new Stanford Research Computing Center, or SRCC (which we blogged about here). We’re now running excerpts from that piece about the role computation, as well as big data, plays in medical advances.

The human genome is essentially a gigantic data set. Deep within each person’s 6 billion data points are minute variations that tell the story of human evolution, and provide clues to how scientists can combat modern-day diseases.

To better understand the causes and consequences of these genetic variations, Jonathan Pritchard, PhD, a professor of genetics and of biology, writes computer programs that can investigate those linkages. “Genetic variation affects how cells work, both in healthy variation and in response to disease, which ultimately regulates organism-level phenotypes,” Pritchard says. “How natural selection acts on phenotypes, that’s what causes evolutionary changes.”

Consider, for example, variation in the gene that codes for lactase, an enzyme that allows mammals to digest milk. Most animals don’t express lactase after they’ve been weaned from their mother’s milk. In populations that have historically revolved around dairy farming, however, Pritchard’s algorithms have shown that there has been strong long-term selection for expressing the genes that allow people to process milk. There has been similarly strong selection in non-Africans for skin-pigmentation variants that allow better synthesis of vitamin D in regions where people are exposed to less sunlight.
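One simple signal that selection scans of this kind exploit is a large allele-frequency difference between populations with different histories. The sketch below is a toy illustration of that idea, not Pritchard’s actual algorithms; the variant names and frequencies are hypothetical, and real methods use far more sophisticated statistics over genome-wide data.

```python
# Toy selection-scan sketch (not Pritchard's methods): score variants by
# how different their allele frequencies are between two populations.
# Frequencies and variant names below are hypothetical illustration data.

def frequency_differentiation(freq_pop_a, freq_pop_b):
    """Absolute allele-frequency difference between two populations;
    values near 1 can hint at strong population-specific selection."""
    return abs(freq_pop_a - freq_pop_b)

# Hypothetical allele frequencies in a dairy-farming population (a)
# vs. a non-dairying population (b).
variants = {
    "lactase_persistence": (0.90, 0.10),
    "neutral_marker": (0.48, 0.52),
}
scores = {name: frequency_differentiation(a, b)
          for name, (a, b) in variants.items()}
top_hit = max(scores, key=scores.get)
print(top_hit)  # the lactase-persistence variant stands out
```

Real scans must also rule out genetic drift and population structure as explanations for a frequency difference, which is why the statistical machinery matters as much as the raw comparison.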

The methods used in these types of investigations have the potential to yield powerful medical insights. Studying variations in gene regulation within a population could reveal how and where particular proteins bind to DNA, or which genes are expressed in different cell types – information that could be applied to design novel therapies. These inquiries can generate hundreds of thousands of data sets, which can only be parsed with clever algorithms and machine learning.

Pritchard, who is also a Stanford Bio-X affiliate, is bracing for an even bigger explosion of data; as genome sequencing technologies become less expensive, he expects the number of individual genomes to jump by as much as a hundredfold in the next few years. “There are not a lot of problems that we’re fundamentally unable to handle with computers, but dealing with all of the data and getting results back quickly is a rate-limiting step,” Pritchard says. “Having access to SRCC will make our inquiries go more easily and quickly, and we can move on faster to making the next discovery.”

Previously: Learning how to learn to read, Personal molecular profiling detects diseases earlier and New computing center at Stanford supports big data
