Published by
Stanford Medicine


Ethics, In the News, Medicine and Society, Science, Science Policy, Sports, Stanford News

Stanford expert celebrates decision stopping testosterone testing in women’s sports

Female track and field athletes no longer need to have their natural testosterone levels below a certain threshold to compete in international events, the so-called “Supreme Court of sports”, the Court of Arbitration for Sport, ruled Monday.

Katrina Karkazis, PhD, a Stanford senior research scholar who was closely involved with the case, got the news on Friday, while she was in a San Francisco dog park. “What a day!” she said. “I was madly refreshing my email — I thought we were going to lose… I just started screaming and crying.”

Karkazis, who is an expert on ethics in sports and also gender, said she spent a year of her life working on the case.

She served as an advisor to 19-year-old sprinter Dutee Chand, who challenged the regulation that female athletes must have certain testosterone levels or undergo medical interventions to lower their testosterone to be allowed to compete against women in events governed by the International Association of Athletics Federations (IAAF), the international regulatory body of track and field.

The ruling suspends the IAAF’s testing regimen for two years, but Karkazis expects the decision will lead to permanent changes in women’s sports, including a reevaluation by the International Olympic Committee.

“I’m thrilled,” Karkazis said. She said she was also surprised. “I didn’t think it was our time. I thought there were still too many entrenched ideas about testosterone being a ‘male hormone’ and it not belonging in women.”

Karkazis gained international attention after penning an op-ed in The New York Times in 2012 when the IAAF and the International Olympic Committee crafted a new policy banning women with naturally high levels of testosterone from competing.

“You can’t test for sex,” Karkazis said. “It’s impossible. There’s no one trait you can look at to classify people. There are many traits and there are always exceptions.”

She said that now women who have lived and competed their entire lives as women will be eligible to compete, a default policy she believes is sufficient to ensure a level playing field.

Previously: “Drastic, unnecessary and irreversible medical interventions” imposed upon some female athletes, Arguing against sex testing in athletes and Is the International Olympic Committee’s policy governing sex verification fair?
Photo by William Warby

Big data, Ethics, Genetics, Science Policy, Stanford News

Stanford panel: Big issues will loom when everyone has their genomic sequence on a thumb drive

When I was a biology grad student in the early 1980s, we used to joke about people who were getting their PhDs by spending six long years sequencing a single gene. They worked around the clock seven days a week – and seven nights, too, sleeping on their lab benches when they slept at all.

A few years later the Human Genome Project came along and sped things up quite a bit. But it still took 13 years and a billion dollars to fully sequence a single human genome.

It’s a different story now. With a one-day, $1,000 genome sequence in sight, a 20-minute, $100 sequence can’t be far off. It appears that within 15 years or so, the average individual’s genomic sequence will be just another lengthy, standard supplemental addition to that person’s electronic medical record.

That raises a lot of questions. Last Saturday, I had the great privilege of posing a few of them to a panel of three tier-one Stanford experts: Mildred Cho, PhD, associate director of the Stanford Center for Biomedical Ethics; Hank Greely, JD, director of the Center for Law and the Biosciences; and Mike Snyder, PhD, chair of Stanford’s genetics department and director of the Center for Genomics and Personalized Medicine. (I was the moderator.)

The panel, titled “Genetic Privacy: The Right (Not) to Know,” was a lively one, part of a day-long Alumni Day event sponsored by the Stanford Medical Center Alumni Association. Cho, Greely and Snyder have their own well-developed perspectives and policy preferences on the utility of mass genomic-sequence availability, and they articulated those views with passion and aplomb.

The 300 people in the audience, most of them doctors, had plenty of questions of their own. Several were ones I’d hoped to ask but hadn’t had time.

By the time I walked away from this consciousness-raising clash of perspectives, newly aware of just how fast the future is coming at us, I had another question: Once everyone has the equivalent of a thumb drive with their complete genome on it, can you imagine a kind of online matchmaking service in which you upload your genome to a server, which then picks out a date or a mate for you? The selection is guided by what you say you’re looking for: short-term mutual attraction, an enduring monogamous relationship, robust offspring … Is that now thinkable?


AHCJ15, Science, Science Policy, Stem Cells

Stanford stem cell experts highlight “inherent flaw” in drug development system

Academic institutions are in a much better position than pharmaceutical companies to make the best decisions about which therapies deserve further development. That was the underlying message from a pair of Stanford researchers at a panel on stem cell science at last weekend’s Association of Health Care Journalists 2015 conference.

“There’s an inherent flaw in our system,” said Irving Weissman, MD, director of the Stanford Institute for Stem Cell Biology and Regenerative Medicine. “Companies are driven by the desire for profits rather than the desire to find the best therapy, and they often give up on discoveries too early.”

Weissman cited studies that were done long ago at Stanford and proven in mouse models or human clinical trials that pharmaceutical companies have failed to develop. “In mice, transplantation of purified blood stem cells and insulin-producing cells from closely related mice leads to a permanent cure,” Weissman said. “We discovered that 16 years ago, and a therapy is still not available.”

A therapy involving high-dose chemotherapy followed by purified stem cell transplant for stage 4 breast cancer cured a relatively high number of women in a small trial almost 20 years ago, but the pharmaceutical company with the rights to the technology decided not to develop the treatment, Weissman said. A larger trial of this therapy is currently being planned at Stanford.

Maria Grazia Roncarolo, MD, co-director of the institute, spoke about her own experience in an academic environment developing therapies for diseases that pharmaceutical companies deem too rare to merit their attention. Only after she showed that a therapy for severe combined immune deficiency could work did pharmaceutical companies get interested.

“Academic researchers should have the ability to test a therapy, to have control of the design and execution of the clinical trials, and pharmaceutical companies should do the production and marketing,” Roncarolo told the journalists attending the session.

Allowing academic institutions to run clinical trials is “a big effort that will require a team, institutional commitment and robust funding,” Roncarolo said. Comparing the situation in the United States to that in Europe, where she has done much of her research, she notes that “in this country there is little funding for proof of concept trials to bring therapies from the lab bench to the bedside.”

Previously: An inside look at drug development, Stanford’s Irving Weissman on the (lost?) promise of stem cells and The largest stem cell research building in the U.S.

FDA, In the News, otolaryngology, Public Health, Science Policy

With e-cigarettes, tobacco isn’t the only danger

E-cigarettes are far from safe, Robert Jackler, MD, writes in a strongly worded op-ed that appeared over the weekend in the San Jose Mercury News.

Most at risk are teens and tweens, enticed by flavors ranging from cotton candy, gummy bear and root beer to peanut butter cookie. (An aside: Can you imagine a hardened, pack-a-day smoker deciding to curb his or her harmful habit by switching to cotton candy-flavored e-cigarettes?)

Interestingly, these flavors, which are thought to be safe in foods, may be upping the harmfulness of the e-cigs; after all, the lungs process chemicals much differently than the stomach does. (Remember popcorn lung? In the early 2000s, a group of workers at a microwave popcorn plant fell ill after inhaling too much of the flavoring agent, diacetyl, used to give popcorn its buttery taste.)

From Jackler’s piece:

In 2009, to reduce youth smoking, the FDA banned flavors (other than menthol) from traditional cigarettes. Fearing regulatory action, the e-cigarette industry response has echoed the playbook of the tobacco industry. One brand went so far as to commission a study which, not surprisingly, arrived at the improbable finding that flavors do not appeal to the young. The industry argues that flavored e-cigarettes should be allowed because they have not been proven unsafe.

Jackler goes on to warn of the progressive nature of lung damage — e-cigarette smokers may be accumulating harm well before they notice a problem.

Previously: Raising the age for tobacco access would benefit health, says new Institute of Medicine report, How e-cigarettes are sparking a new wave of tobacco marketing, E-cigarettes and the FDA: A conversation with a tobacco-marketing researcher and E-Cigarettes: The explosion of vaping is about to be regulated
Photo by Jonny Williams

Events, Science, Science Policy, Stanford News, Technology

The challenge – and opportunity – of regulating new ideas in science and technology


Innovation in science and technology holds promise to improve our lives. But disruptive business models, do-it-yourself medical devices, and open platforms also introduce corporate and personal risks. How can the public stay safe from unknown consequences as a company’s product or service matures? In a recent panel co-sponsored by Stanford’s Rock Center for Corporate Governance and Center for Law and the Biosciences, experts in law, business, and ethics discussed what happens when science and technology outrun the law.

Talk of drones, app-based car services, and music-sharing technologies teased out key issues currently disrupting legal paradigms. But biomedical science took center stage. “Health is more regulated than any other [area],” said panelist Hank Greely, JD, the Deane F. and Kate Edelman Johnson Professor of Law and director of the Center for Law and the Biosciences. He characterized the FDA’s processes as useful in slowing innovation in the health space but noted that rigorous pre-market regulation “won’t work in most parts of the economy.”

What happens when regulation is beyond reach? Greely noted that even if the FDA could limit an entrepreneurial company, it couldn’t conquer the DIY market. He referenced a procedure known as transcranial direct current stimulation, which, by applying electrodes to the head, can feel like “Adderall through a wire” or alter a person’s mood according to placement. A transmitting device is so simple to make, Greely said, “the hardest part will be finding an open Radio Shack.”

Moderator Dan Siciliano, JD, faculty director of the Rock Center and professor of the practice of law, asked the panelists which under-regulated technologies they found frightening. Vapor cigarettes, answered Eleanor Lacey, JD, for luring youth through fruit flavors and targeting them through advertising channels prohibited for regular cigarettes. (As previously reported on Scope, the FDA announced last spring that it would regulate the sale, but not marketing, of e-cigarettes.)

Lacey, vice president, general counsel and secretary of SurveyMonkey, discussed regulation issues involving health information that is transmitted on the company’s platform, where users own their data. She pointed to instances of users creating surveys on which respondents shared HIPAA-protected information, admitted suicidal thoughts, or confessed to crimes. The company cooperates with law enforcement in a very narrow set of sensitive situations but also upholds the neutrality of the user-owned space and users’ right to control the content: “You don’t want us to be able to shut it down,” Lacey said.


In the News, Medical Education, Research, Science, Science Policy, Stanford News

A conversation with John Ioannidis, “the superhero poised to save” medical research

I always relish a good Q&A. As a writer, I know how hard it is to craft questions that elicit insights into a person — or his or her work. That’s why I jumped at the opportunity to spotlight a recently published Vox interview with John Ioannidis, MD, DSc, director of the Meta-Research Innovation Center at Stanford.

Ioannidis is blunt, and prolific, with his criticisms of science.

Among his concerns: Researchers usually publish only results that show statistical significance, failing to share numerous experiments that didn’t work out, which would also be illustrative. Many studies aren’t reproducible, sometimes due to a lack of data, other times due to shoddy procedures. Researchers “spin” data to please their funders. And in universities, scientists are compelled to publish, a system that favors quantity over quality. Peer review has gaps. And the list goes on and on.

What, then, to do?

Here’s Ioannidis (referred to by the writer as “the superhero poised to save” medical research) in the Q&A:

Maybe what we need is to change the incentive and reward system in a way that would reward the best methods and practices. Currently, we reward the wrong things: people who submit grant proposals and publish papers that make extravagant claims. That’s not what science is about. If we align our incentive and rewards in a way that gives credibility to good methods and science, maybe this is the way to make progress.

One problem is education, he says:

Most scientists in biomedicine and other fields are mostly studying subject matter topics; they learn about the subject matter rather than methods. I think that several institutions are slowly recognizing the need to shift back to methods and how to make a scientist better equipped in study design, understanding biases, in realizing the machinery of research rather than the technical machinery.

There’s much more in the piece, including a glimpse of Ioannidis’ “love numbers” system.

Previously: Shake up research rewards to improve accuracy, says Stanford’s John Ioannidis, John Ioannidis discusses the popularity of his paper examining the reliability of scientific research and “U.S. effect” leads to publication of biased research, says Stanford’s John Ioannidis 
Photo, which originally appeared in STANFORD Magazine, by Robyn Tworney

Cancer, Chronic Disease, Clinical Trials, Science Policy

A look at crowdfunding clinical trials

I’ve been able to watch the crowdfunding phenomenon up close: My husband is a Kickstarter addict, and he, like millions of others, funds projects that speak to his passions and social priorities. In recent years, some non-profits have applied the crowdfunding model to clinical trials (something he hasn’t funded yet), and others may follow suit as federal funding dries up. Last week, Nature Medicine published an article that describes the first few years of those efforts and the questions they bring up.

As outlined in the piece, critics argue that the system unfairly penalizes those who may not have a large online social network to publicize their funding efforts, while proponents say it lets donors connect more directly with the research and increases the transparency of research funding. As one source explains:

“One key thing is tangibility,” says Catherine Ferguson, Innovation Project Lead at Cancer Research UK, “It’s an inherent part of crowdfunding that isn’t inherent in regular funding.” Whether it’s a particular type of cancer or a particular therapy, crowdfunding allows for a “more direct relationship with both the researcher and the research,” she adds, emphasizing that this directed approach is good for maintaining relationships with donors.

Cancer Research UK, which we’ve written about before, was one of the early advocates of clinical trial crowdfunding. It recently concluded its first effort to crowdfund a clinical trial to study a vaccine for Epstein-Barr virus in cancer patients. The group fell far short, raising only six percent of its £40,000 ($61,000) goal on its Indiegogo campaign, so it returned the funds to donors. Again, from the article:

The organization chose a so-called fixed-funding model, in which they chose a goal amount but kept none of the funds that were raised if the goal wasn’t met. “It felt disingenuous to keep some of the money but not make the research happen,” said Ferguson. “We really wanted to emphasize that the money was for a specific project and if the project couldn’t be fully funded, then why keep the money?” Because the campaign wasn’t successful, the funds raised were returned to those who pledged the money, but Ferguson said that many of the donors reached out to make contributions to the organizations anyway.
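The fixed-funding settlement rule described in the excerpt is simple enough to sketch in a few lines of code (a hypothetical illustration; the function and figures below are mine, with the pledge total estimated from the six percent figure):

```python
# Sketch of the "fixed-funding" model described above: a campaign keeps
# pledges only if its goal is met; otherwise every pledge is refunded.
def settle(pledged: int, goal: int) -> tuple[int, int]:
    """Return (kept, refunded) amounts under a fixed-funding rule."""
    if pledged >= goal:
        return pledged, 0
    return 0, pledged

# Roughly the Cancer Research UK numbers: six percent of a 40,000 GBP goal.
goal = 40_000
pledged = round(goal * 0.06)
print(settle(pledged, goal))  # -> (0, 2400): all pledges returned
```

Flexible-funding campaigns, by contrast, keep whatever is raised; the trade-off Ferguson describes is that keeping partial funds for a trial that can’t happen would feel disingenuous.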

Other organizations are using slightly different models, and the coming months, or maybe years, will reveal whether any are able to successfully fund clinical trials through this new avenue.

Previously: New crowdfunding sites apply Kickstarter model to health and medicine, Can crowdfunding boost public support and financing for scientific research? and Crowdsourcing the identification of cancer cells
Photo by Elembis

Global Health, In the News, Public Health, Research, Science Policy

Gates Foundation makes bold moves toward open access publication of grantee research

Last week, the Gates Foundation announced that it will now require all grantees to make the results of their research publicly accessible immediately. Researchers will only be able to publish their research in scientific journals that make the published papers accessible via open access – which rules out publishing in many prominent journals such as Science and Nature.

Inside Higher Education detailed the new policy:

The sweeping open access policy, which signals the foundation’s full-throated approval for the public availability of research, will go into effect Jan. 1, 2015, and cover all new projects made possible with funding from the foundation. The foundation will ease grant recipients into the policy, allowing them to embargo their work for 12 months, but come 2017, “All publications shall be available immediately upon their publication, without any embargo period.”

“We believe that our new open access policy is very much in alignment with the open access movement which has gained momentum in recent years, championed by the NIH, PLoS, Research Councils UK, Wellcome Trust, the U.S. government and most recently the WHO,” a spokeswoman for the foundation said in an email. “The publishing world is changing rapidly as well, with many prestigious peer-reviewed journals adopting services to support open access. We believe that now is the right time to join the leading funding institutions by requiring the open access publication of our funded research.”

But the Gates Foundation policy goes further than those of other funding institutions. Once the papers are publicly available, they must be licensed so that others can reuse them freely, even for commercial purposes. A news article in Nature explains the change:

The Gates Foundation’s policy has a second, more onerous twist which appears to put it directly in conflict with many non-OA journals now, rather than in 2017. Once made open, papers must be published under a license that legally allows unrestricted re-use — including for commercial purposes. This might include ‘mining’ the text with computer software to draw conclusions and mix it with other work, distributing translations of the text, or selling republished versions.  In the parlance of Creative Commons, a non-profit organization based in Mountain View, California, this is the CC-BY licence (where BY indicates that credit must be given to the author of the original work).

This demand goes further than any other funding agency has dared. The UK’s Wellcome Trust, for example, demands a CC-BY license when it is paying for a paper’s publication — but does not require it for the archived version of a manuscript published in a paywalled journal. Indeed, many researchers actively dislike the thought of allowing such liberal re-use of their work, surveys have suggested. But Gates Foundation spokeswoman Amy Enright says that “author-archived articles (even those made available after a 12-month delay) will need to be available after the 12 month period on terms and conditions equivalent to those in a CC-BY license.”

The Gates Foundation has funded approximately $32 billion in research since its inception in 2000 and distributes about $900 million in global health funding annually. That’s a smaller impact than, say, the U.S. National Institutes of Health, which funds about $30 billion in health research. But it does represent nearly 3,000 papers published in 2012 and 2013. Only 30 percent of those were published in open access journals.

Previously: Teen cancer researcher Jack Andraka discusses open access in science, stagnation in medicine, Exploring the “dark side of open access”, White House to highlight Stanford professors as “Champions of Change”, Stanford neurosurgeon launches new open-source medical journal built on a crowdsourcing model, Discussing the benefits of open access in science and How open access publishing benefits patients
Photo of Bill and Melinda Gates by Kjetil Ree

NIH, Research, Science Policy, Stanford News

Shake up research rewards to improve accuracy, says Stanford's John Ioannidis

Lab animals such as mice and rats can be trained to press a particular lever or to exhibit a certain behavior to get a coveted food treat. Ironically, the research scientists who carefully record the animals’ behavior really aren’t all that different. Like mice in a maze, researchers in this country are rewarded for specific achievements, such as authoring highly cited papers in big name journals or overseeing large labs pursuing multiple projects. These rewards come in the form of promotions, government grants and prestige among a researcher’s peers.

Unfortunately, the achievements do little to ensure that the resulting research findings are accurate. Stanford study-design expert John Ioannidis, MD, DSc, has repeatedly pointed out serious flaws in much published research (in 2005 he published what was to become one of the most highly accessed and highly cited papers ever in the biomedical field, “Why most published research findings are false”).

Today, Ioannidis published another paper in PLoS Medicine titled “How to make more published research true.” He explores many topics that could be addressed to improve the reproducibility and accuracy of research. But the section that I found most interesting was one in which he argues for innovative, perhaps even disruptive changes to the scientific reward system. He writes:

The current system does not reward replication — it often even penalizes people who want to rigorously replicate previous work, and it pushes investigators to claim that their work is highly novel and significant. Sharing (data, protocols, analysis codes, etc.) is not incentivized or requested, with some notable exceptions. With lack of supportive resources and with competition (“competitors will steal my data, my ideas, and eventually my funding”), sharing becomes even disincentivized. Other aspects of scientific citizenship, such as high-quality peer review, are not valued.

Instead he proposes a system in which simply publishing a paper has no merit unless the study’s findings are subsequently replicated by other groups. If the results of the paper are successfully translated into clinical applications that benefit patients, additional “currency” units would be awarded. (In the example of the mice in the maze, the currency would be given in the form of yummy food pellets. For researchers, it would be the tangible and intangible benefits accrued by those considered to be successful researchers). In contrast, the publication of a paper that was subsequently refuted or retracted would result in a reduction of currency units for the authors. Peer review and contributions to the training and education of others would also be rewarded.

The concept is really intriguing, and some ideas would really turn the research enterprise in this country on its head. What if a researcher were penalized (fewer pellets for you!) for achieving an administrative position of power… UNLESS he or she also increased the flow of reliable, reproducible research? As described in the manuscript:

[In this case] obtaining grants, awards, or other powers are considered negatively unless one delivers more good-quality science in proportion. Resources and power are seen as opportunities, and researchers need to match their output to the opportunities that they have been offered—the more opportunities, the more the expected (replicated and, hopefully, even translated) output. Academic ranks have no value in this model and may even be eliminated: researchers simply have to maintain a non-negative balance of output versus opportunities. In this deliberately provocative scenario, investigators would be loath to obtain grants or become powerful (in the current sense), because this would be seen as a burden. The potential side effects might be to discourage ambitious grant applications and leadership.
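The reward-and-penalty accounting described above can be caricatured as a simple ledger. The sketch below is a hypothetical illustration: the event names and point values are invented for this post, not taken from Ioannidis’ paper.

```python
# Toy "research currency" ledger: publication alone earns nothing;
# replication, translation and scientific citizenship earn credit, while
# refutation or retraction costs it. Resources and power received count
# as obligations to be matched by output. All point values are invented.
REWARDS = {
    "replicated": 2,    # findings independently reproduced
    "translated": 5,    # findings led to a clinical application
    "refuted": -3,      # findings contradicted by later work
    "retracted": -5,    # paper withdrawn
    "peer_review": 1,   # high-quality review service
    "training": 1,      # mentoring and education contributions
}

def balance(events, opportunities=0):
    """Net currency: summed event credit minus a penalty proportional to
    the grants and positions of power a researcher has accumulated."""
    return sum(REWARDS.get(e, 0) for e in events) - opportunities

# Two replicated findings, one translated therapy, one retraction,
# and one large grant held (one "opportunity" unit to repay):
print(balance(["replicated", "replicated", "translated", "retracted"],
              opportunities=1))  # -> 3
```

In this toy model a researcher must keep the balance non-negative, which is the “non-negative balance of output versus opportunities” constraint in the quoted scenario.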

Ioannidis, who co-directs the new Meta-Research Innovation Center at Stanford, or METRICS, with Steven Goodman, MD, MHS, PhD, is quick to acknowledge that these types of changes would take time, and that the side effects of at least some of them would likely make them impractical or even harmful to the research process. But, he argues, this type of radical thinking might be just what’s needed to shake up the status quo and allow new, useful ideas to rise to the surface.

Previously: Scientists preferentially cite successful studies, new research shows, Re-analyses of clinical trial results rare, but necessary, say Stanford researchers  and John Ioannidis discusses the popularity of his paper examining the reliability of scientific research
Photo by Images Money

Big data, Bioengineering, NIH, Research, Science Policy, Stanford News

$23 million in NIH grants to Stanford for two new big-data-crunching biomedical centers

More than $23 million in grants from the National Institutes of Health – courtesy of the NIH’s Big Data to Knowledge (BD2K) initiative – have launched two Stanford-housed centers of excellence bent on enhancing scientists’ capacity to compare, contrast and combine study results in order to draw more accurate conclusions, develop superior medical therapies and understand human behaviors.

Huge volumes of biomedical data – some of it from carefully controlled laboratory studies, increasing amounts of it in the form of electronic health records, and a building torrent of data from wearable sensors – languish in isolated locations and, even when researchers can get their hands on them, are about as comparable as oranges and orangutans. These gigantic banks of data, all too often, go unused or at least underused.

But maybe not for long. “The proliferation of devices monitoring human activity, including mobile phones and an ever-growing array of wearable sensors, is generating unprecedented quantities of data describing human movement, behaviors and health,” says movement-disorders expert Scott Delp, PhD, director of the new National Center for Mobility Data Integration to Insight, also known as the Mobilize Center. “With the insights gained from subjecting these massive amounts of data to state-of-the-art analytical techniques, we hope to enhance mobility across a broad segment of the population,” Delp told me.

Directing the second grant recipient, the Center for Expanded Data Annotation and Retrieval (or CEDAR), is Stanford’s Mark Musen, MD, PhD, a world-class biomedical-computation authority. As I wrote in an online story:

[CEDAR] will address the need to standardize descriptions of diverse biomedical laboratory studies and create metadata templates for detailing the content and context of those studies. Metadata consists of descriptions of how, when and by whom a particular set of data was collected; what the study was about; how the data are formatted; and what previous or subsequent studies along similar lines have been undertaken.
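As a hypothetical illustration of what such a metadata template might capture, here is a toy record covering the elements described above. The field names and values are my own invention for this sketch, not CEDAR’s actual schema.

```python
# A toy metadata record: how, when and by whom the data were collected;
# what the study was about; how the data are formatted; and related
# studies. All field names and values here are invented examples.
study_metadata = {
    "title": "Wearable-sensor gait study",
    "collected_by": "Example Lab, Stanford",
    "collection_dates": {"start": "2014-01-15", "end": "2014-06-30"},
    "methods": "wrist-worn accelerometer sampled at 100 Hz",
    "subject": "human mobility and movement disorders",
    "data_format": "CSV, one row per sensor reading",
    "related_studies": [],  # DOIs of prior or follow-up work
}

# The payoff of shared templates: tooling can check that a dataset's
# description is complete before it enters a repository.
required = {"title", "collected_by", "methods", "data_format"}
missing = required - study_metadata.keys()
print(sorted(missing))  # -> []
```

With every lab filling in the same template, the “oranges and orangutans” problem shrinks: datasets become discoverable and comparable by their descriptions rather than by their idiosyncratic file layouts.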

The ultimate goal is to concoct a way to translate the banter of oranges and orangutans, artichokes and aardvarks now residing in a global zoo (or is it a garden?) of diverse databases into one big happy family speaking the same universal language, for the benefit of all.

Previously: NIH associate director for data science on the importance of “data to the biomedicine enterprise”, Miniature wireless device aids pain studies and Stanford bioengineers aim to better understand, treat movement disorders
