Published by
Stanford Medicine


“Omics” studies need validation, says Stanford’s Ioannidis


Validation is always a good thing, whether in our personal or professional lives: it shows we’re on the right track. Research studies, too, need ways to compare their findings with those of studies using similar methods and to confirm their conclusions. But that can be difficult when the technology or concepts are very new. Stanford researcher John Ioannidis, MD, DSc, chief of the Stanford Prevention Research Center, discusses the problem, and offers some possible solutions, in a perspective (registration required) in today’s Science magazine:

The exponential growth of the “omics” fields (genomics, transcriptomics, proteomics, metabolomics, and others) fuels expectations for a new era of personalized medicine. However, clinically meaningful discoveries are hidden within millions of analyses. Given this immense biological complexity, separating true signals from red herrings is challenging, and validation of proposed discoveries is essential.

Ioannidis co-authored the piece, which appears in a special issue of the magazine on data replication and reproducibility, with researcher Muin Khoury, MD, PhD, of the CDC’s Office of Public Health Genomics. The two researchers propose a multi-step approach to validating large studies: assessing their analytic validity, repeatability, replication, external validation, clinical validity, and clinical utility. The authors conclude:

One may argue that this is not easy because of technical and cost considerations. However, similar arguments were made for fields such as human genome epidemiology, which then saw the cost of DNA sequencing decrease over a billionfold over the past 20 years and the amount of information increase proportionally. Costs could decrease for other technologies as technologies attract the attention of many investigators, especially in large consortia, thereby driving data reproducibility in a field. Funding incentives, reproducibility rewards and/or nonreproducibility penalties, and targeted requirements for repeatability checks may enhance the public availability of useful data and valid analyses.
