What funders are doing to support transparent and reproducible research.

A Curated Resource Hub for Research Funders

Funders of scientific research are well-positioned to guide scientific discoveries by enabling and incentivizing the most rigorous and transparent methods. This resource hub provides examples of best practices currently employed by funders of biomedical, educational, and social science research. These recommendations and templates offer useful tools so that you, the funder, can learn from others how best to shift norms across the research community.

Data, Materials, and Code

Data, Materials, and Code Transparency

Transparency into the items generated over the course of a study (e.g., the data, questionnaires, stimuli, physical reagents, or analytical code) benefits the entire research community. Consumers of knowledge have more trust in the underlying findings, researchers have more materials to reuse, and the generators of these items have a well-curated set of materials to use in future years. Curating these items after the study is completed can be a time-consuming task, so we encourage the use of tools that combine project management and preservation into the same platform, a principle that underlies the design and development of the OSF.

Data Citation

Citation of articles is routine. Similar expectations can and should be applied to citation of data, code, and materials to recognize and credit these as original intellectual contributions. Funders can encourage this practice (and by extension incentivize the sharing of these items) by clarifying standards for data, materials, and code citations in applications, grant reports, and papers reporting the results of funded research. 

Reporting Guidelines and Checklists for Design and Analysis Transparency

Standards for reporting research designs and analyses should maximize transparency about the research process and minimize the potential for vague or incomplete reporting of the methodology. Such standards are highly discipline-specific, and recommendations can provide guidance on how to identify and use relevant checklists. These checklists should be applied so as to maximize:

  • Clarity into which standards were applied.
  • Precision of reported study protocols and results necessary to replicate or interpret the work.
  • Ease of compliance with the standard.

Recommendations

Good

  • Require that each submission include a Data Management Plan (DMP).
  • Include DMPs as part of the evaluated grant application.
  • Make DMPs publicly available alongside summary lists of funded projects. 
  • Provide guidance and examples of citing existing datasets.
  • Recommend the use of specific reporting guidelines for grant reports and published outcomes.

Better

  • Require that DMPs include a specific plan for making data publicly available. 
  • For data that cannot be made publicly available, provide guidance for sharing subsets of data and resources for sharing through protected access repositories.
  • Include dedicated funds for data curation and archiving. 
  • Require the use of reporting checklists for grant reports and published outcomes, such as the Transparency Checklist recently published in Nature Human Behaviour.

Best

  • All of the above recommendations, plus facilitate independent computational reproducibility of results. Computational reproducibility means using the collected data and the generated analytical code to re-run the analyses and verify the reported results (see the sketch below). Dozens of journals apply these rigorous standards to their publication workflow and are noted as Level 3 for Data and Code on TOP Factor.
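
As one illustration of what an independent reproducibility check can look like in practice, the sketch below re-runs a shared analysis against a shared dataset and compares the recomputed statistics to the values reported in the paper. The file name, column names, and reported values are hypothetical placeholders for whatever the authors actually deposited.

    # A minimal reproducibility check (hypothetical file, columns, and reported values).
    import pandas as pd
    from scipy import stats

    # Data deposited by the authors alongside the paper (hypothetical file name).
    data = pd.read_csv("shared_data.csv")

    # Re-run the reported analysis: a two-sample t-test between groups.
    treated = data.loc[data["group"] == "treatment", "outcome"]
    control = data.loc[data["group"] == "control", "outcome"]
    t_stat, p_value = stats.ttest_ind(treated, control)

    # Values as reported in the paper (hypothetical numbers for illustration).
    reported_t, reported_p = 2.31, 0.023

    # The result "reproduces" if the recomputed values match within rounding tolerance.
    assert abs(t_stat - reported_t) < 0.01, f"t differs: {t_stat:.3f} vs {reported_t}"
    assert abs(p_value - reported_p) < 0.001, f"p differs: {p_value:.4f} vs {reported_p}"
    print("Reported results were reproduced from the shared data and code.")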

Examples of Funders Promoting Open Practices

Preregistration

Why Preregistration?

Researchers are highly motivated to discover unexpected relationships or effects, but they also want to ensure that these findings are robust. Unfortunately, the tension between these two motivations leads to the (mis)application of statistical tools, whereby trends in a dataset are used to generate a hypothesis, which is then tested on the same dataset that was used to create it. This and other questionable practices, such as repeatedly trying slightly different analyses, invalidate the most common statistical tools and corrode trust in scientific findings.
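
The cost of this circularity is easy to demonstrate. The simulation below (a minimal, hypothetical sketch, not drawn from any particular study) generates pure-noise data, "discovers" the predictor that happens to correlate most strongly with the outcome, and then tests that hypothesis either on the same dataset or on an independent sample. The false-positive rate of the circular test far exceeds the nominal 5% level; the independent test does not.

    # Minimal simulation of hypothesizing after results are known (illustrative only).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n_obs, n_predictors, alpha = 2000, 50, 20, 0.05
    false_pos_same = false_pos_new = 0

    for _ in range(n_sims):
        # All predictors are pure noise: there is no true effect to find.
        X = rng.normal(size=(n_obs, n_predictors))
        y = rng.normal(size=n_obs)

        # "Discovery": pick the predictor most correlated with the outcome in this dataset.
        corrs = [abs(stats.pearsonr(X[:, j], y)[0]) for j in range(n_predictors)]
        best = int(np.argmax(corrs))

        # Test the generated hypothesis on the SAME data (circular analysis).
        if stats.pearsonr(X[:, best], y)[1] < alpha:
            false_pos_same += 1

        # Test it on an independent sample instead (the confirmatory test preregistration protects).
        X_new, y_new = rng.normal(size=(n_obs, n_predictors)), rng.normal(size=n_obs)
        if stats.pearsonr(X_new[:, best], y_new)[1] < alpha:
            false_pos_new += 1

    print(f"False-positive rate, same data:        {false_pos_same / n_sims:.2f}")  # well above 0.05
    print(f"False-positive rate, independent data: {false_pos_new / n_sims:.2f}")   # about 0.05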

Preregistration makes a clear distinction between hypothesis-testing (confirmation) and hypothesis-generation (discovery or exploration). 

Preregistration is especially important if you want to support research projects whose purpose is to make an inferential claim from a sampled population to some wider population. This applies to causal inference studies such as randomized controlled trials (RCTs), but is also relevant to other study designs. Observational studies, epidemiological research, and any other method that tests an a priori hypothesis can all benefit from preregistration.

Preregistration may not be needed for purely exploratory research, theory/model development, or purely descriptive research. In these cases, the recommended best practice is to transparently document workflows for preservation and reuse. This can be done with version-controlled repositories such as GitHub, general-purpose project tools such as OSF, or any electronic lab notebook (ELN).


Recommendations

Good

  • Make educational materials about preregistration available to your grantees on dedicated pages. Use or link to materials at cos.io/prereg.
  • Showcase grants that include preregistered analyses on a dedicated page or similar venue. 
  • Ask grantees to disclose whether or not research was preregistered in reports and published articles.

Better

  • Make plans to preregister part of the scored evaluation criteria of grant submissions.
  • Support independent verification of preregistered studies to confirm that work is reported completely.

Best

  • All of the above recommendations, plus require that any inferential or hypothesis-testing study include a preregistration.

Examples


Relevant Literature

"The number NHLBI trials reporting positive results declined after the year 2000. Prospective declaration of outcomes in RCTs, and the adoption of transparent reporting standards, as required by clinicaltrials.gov, may have contributed to the trend toward null findings." 

  • Kaplan, R. M., & Irvin, V. L. (2015). Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time. PLoS ONE, 10(8), e0132382. https://doi.org/10/gc8z6n

"Signs of bias from lack of trial protocol registration were found with non-registered trials reporting more beneficial intervention effects than registered ones."

  • Papageorgiou, S. N., Xavier, G. M., Cobourne, M. T., & Eliades, T. (2018). Registered trials report less beneficial treatment effects than unregistered ones: A meta-epidemiological study in orthodontics. Journal of Clinical Epidemiology. https://doi.org/10/gdzqnb

"Strong results are 40 percentage points more likely to be published than are null results and 60 percentage points more likely to be written up. We provide direct evidence of publication bias and identify the stage of research production at which publication bias occurs: Authors do not write up and submit null findings."

  • Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. https://doi.org/10/f6gnd3

See a curated list of studies on the need for and benefits of other open practices here.

Registered Reports

A Publishing Format to Protect Against Bias

Registered Reports build on the benefits of preregistration by ensuring that preregistered research will be published regardless of outcome. To earn that guarantee, the preregistered proposal is first peer reviewed by the journal to evaluate the importance of the research questions and the ability of the proposed study to address them. High-quality studies are then granted "in-principle acceptance" (IPA) and published whether the results are significant or null.

Funders can support this process by creating incentives for researchers to use it or by partnering with journals to combine grant review with publication review, thus creating a more efficient and less biased research system. 


Recommendations

Good

  • Make educational materials about Registered Reports available to your grantees. Use or link to materials at cos.io/rr.
  • Showcase grants that were published as Registered Reports. 
  • Ask grantees to disclose whether or not research would be appropriate to submit as a Registered Report.
  • Allow more time in grant timelines, before data collection starts, for Stage 1 peer review to occur. Three to six months at the beginning of a project can improve study design and reduce the time spent on data analysis and publication at the end.

Better

  • Require that part of a funded project be submitted as a Registered Report. If the work is not granted IPA by the journal, the research plan should still be preregistered.

Best

  • Establish a Registered Reports journal-funder partnership for a specific call for submissions. 

Examples of Journal and Funder Partnerships


Resources

Replications

What is replication, and why should it be supported?

A replication is a study that seeks to recreate a previously published experiment. In a replication, any outcome is informative: it may increase or decrease confidence in a claim from previously conducted research. Replication is often perceived as the boring, uncreative, confirmatory work of science. This misperception leads to minimal funding and publication of replication studies, and it contributes to the Replication Crisis by devaluing the importance of replications. 

No single study, whether novel or a replication, can provide a definitive answer to any claim. Outcomes from replication studies add to the available information about a claim, which can lead to refinement of theory and the generation of new, testable predictions. In this light, replications are central both to bolstering the evidence for or against a specific claim and to the discovery of new, interesting phenomena. 


Recommendations

Good

  • Fund replication studies as part of a pilot or specific solicitation.

Better

  • Fund replication studies as part of the general funding mechanism.
  • Identify specific studies that will be considered for replication.

Best

  • Partner with a journal to ensure that the replication study undergoes peer review before results are known. This prevents biases from creeping in once results are obtained: replications that confirm a claim may be deemed "too boring" to publish, whereas ones that come to different conclusions may face unwarranted post-hoc critique. 

Examples of Funder Open Practices

Collaborations

Incentivize Team Science for Greater Impact

The outdated image of science being conducted by a lone genius still affects how scientists are perceived by the public today. Awards that highlight a single individual's achievement are one example of how this notion persists. Unfortunately, this image is at odds with how science is largely practiced. Funders can support a more collaborative community that recognizes this reality and supports better research.


Recommendations

Good

  • Specify that particular calls or a proportion of all calls for funding be available to collaborative research teams only.

Best

  • Support calls for adversarial collaborations in which proponents of competing or mutually exclusive theories agree on study design prior to conducting the research. 
  • Support networks of researchers that apply common practices in multi-site replication efforts.

Examples

Templates


To see what other research funders are doing, see this table of TOP Policies and other Open Access or Open Science practices.