Have you ever heard the phrase, “That’s so meta!” and wondered what it meant? In a nutshell, it refers to ideas or moments that are self-referential and circular in nature. It’s Julia Roberts playing a character who is forced to pretend to be … Julia Roberts. (Ocean’s Twelve!) It’s Kareem Abdul-Jabbar playing Kareem pretending to be airline pilot Roger Murdock. (Airplane!)
At Stanford University’s METRICS center, they do meta-research. As the name implies, they research the research. Meta-researchers apply a wide variety of methodologies to study how research is done and interpreted, in order to reach a rigorous understanding of what makes research reliable, and how it can be strengthened.
We at the Center for Open Science are committed to the same goal. Our own Tim Errington, senior director of research, recently spoke to the meta-research community at a METRICS international forum seminar, explaining how our earlier metascience work affected the field of cancer biology through “The Reproducibility Project: Cancer Biology.”
Launched in 2013, the Reproducibility Project was an eight-year effort to replicate experiments from high-impact cancer biology papers published between 2010 and 2012 in journals such as Nature, Science, and Cell.
The project was a collaboration between the Center for Open Science and Science Exchange. (All papers published as part of this project are available in a collection at eLife, and all replication data, code, and digital materials for the project are available in a collection on OSF.)
As Errington highlighted in his METRICS presentation, the project ran into a number of obstacles. For instance, only 2% of the 193 experiments we tried to replicate had open data available, and 32% of the original authors were either not helpful or unresponsive to our requests.
Oftentimes, the original researchers had left their institutions and the data was lost or destroyed.
Ultimately, only 50 replication experiments were completed.
Upon replication, we found that effect sizes were, on average, 85% smaller than those reported in the original studies.
In more than half of the replications, the effects failed on more of the evaluation criteria than they succeeded on. Original positive results replicated only 40% of the time, while original null results replicated 80% of the time.
Collectively, this evidence suggests there are opportunities to improve the transparency, sharing, and rigor of preclinical research – and that these open practices will advance the pace of discovery.
“These articles – they’re just advertisements, they’re a postcard for what’s really going on,” Errington said. “We need the data, the materials, the protocols – that’s what we need to share, and I think this project’s challenges highlight [that]. It’s great that we have these publications, but we have to do a lot more.”
That means using repositories for sharing data, code, reagents, protocols, reports, and preregistrations, he said. That means incentivizing open science practices in research communities, from better training on open science practices to implementing institutional and journal policies that reward and insist on transparency.
“It’s a social system; you can’t just ask a researcher to do it, you can’t just ask one actor to do it,” Errington said. “The barriers I had were not [always] due to individuals’ resistance; a lot of times, it was that the system around them wasn’t synced up with what we wanted to do.”