Impact of Registered Revisions Within the Journal Peer Review Process

Interested? Get in touch! Email Noah Haber and Macie Daley.

Project Overview

Evidence-based journal policy change through experimentation

This project aims to push the boundaries of evidence-based policy making for science. It is (1) a novel pre-commitment device for peer review called Registered Revisions, (2) the first randomized experiment of journal policies of its type, and (3) the first semi-centralized, study-in-a-kit-style prospective meta-analysis.

Registered Revisions

Publication pre-commitment devices such as Preregistration and Registered Reports may substantially reduce publication biases, prepublication biases (e.g., p-hacking and HARKing), and other questionable research practices. This study explores a related device, Registered Revisions.

Registered Revisions is a pre-commitment device comparable to a miniature Registered Report that occurs during journal peer review. When reviewers ask for additional data and/or analyses, authors can propose a detailed Revision Plan describing how those additional data and revisions will be produced. Reviewers and editors can then agree to In-Principle Accept (IPA) the publication on the basis of this Revision Plan, regardless of what the results turn out to be.

In theory, this style of review may reduce the impact of questionable research practices and publication biases, reduce uncertainty about the peer review process, and could even shorten review timelines by preventing multiple back-and-forth rounds of review.


The Design

The Center for Open Science (COS) is leading a semi-centrally organized set of within-journal randomized experiments on Registered Revisions under one umbrella. COS provides design and support for journals and journal consortia to perform in-journal randomized experiments testing Registered Revisions.

Between journals and journal consortia



Within journals and journal consortia



Getting Involved

COS is currently organizing a pilot study. If you are a journal editor or publisher interested in being a part of this experiment, please email Noah Haber and Macie Daley.

Additional details, including detailed protocols and the data and code repository, will be made available at our OSF page.

This research is funded by the NSF (grant #2152424)

Meta Trial Design


Rather than one study with many journals, COS is fostering a many-studies approach, under the umbrella of a prospective living meta-analysis. This design helps foster:

  • A feasible approach to policy implementation experiments that would otherwise require an unrealistic degree of coordination and editorial homogeneity
  • A more realistic roll-out of the Registered Revisions policy, as each journal or journal group will implement it in its own way
  • Shared experience and guidance across journal editorial teams
  • A guarantee that small trials are part of a larger evidence base, preventing research waste

We are calling it a semi-centralized prospective meta-analysis, built backward. To create a useful evidence base for Registered Revisions, we would ideally want many realistic within-journal experiments. The trick is that COS and partners are collaboratively fostering the creation and administration of the individual studies that will eventually fill in that prospective meta-analysis.
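As a rough sketch of how the individual trial estimates could feed into the living meta-analysis, the snippet below pools per-journal effect estimates with the standard DerSimonian-Laird random-effects estimator. This is illustrative only, assuming each trial reports an effect estimate and variance; it is not the project's actual analysis code.

```python
import math

def random_effects_pool(estimates, variances):
    """DerSimonian-Laird random-effects pooling of per-trial effect estimates.

    estimates: per-trial effect sizes; variances: their squared standard errors.
    Returns (pooled_effect, pooled_se, tau_squared).
    """
    w = [1.0 / v for v in variances]                       # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-trial variance
    w_star = [1.0 / (v + tau2) for v in variances]         # re-weight with tau^2
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2
```

A living meta-analysis would simply rerun this pooling step each time a new within-journal trial reports its estimate.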

To see how the collaboration works, see the next tab.


How the Collaboration Works

COS provides strong study design and implementation support, while the journal (or journal group) editorial team implements its own logistical procedures and its own take on the design.


Provided and supported by COS

Study-in-a-kit design, including:

  • Protocol with choose-your-own-adventure options specific to your own needs
  • Survey materials
  • Data collection procedures
  • Data analysis design and code
  • Descriptive language

Infrastructure and logistics, including:

  • A pre-approved IRB pathway
  • Data collection systems
  • Integration with editorial processes

Support, including:

  • Experience
  • Training materials
  • Centralized communication with others running similar studies
  • Assurance that each small-scale experiment contributes to large-scale, high-quality evidence


Run and owned by the Journals

Journals have full ownership of their own randomized trials. In other words, it's your experiment, your data. Each trial is tailored to the journal's individual needs and preferences for:

  • Specific variation on intervention policy design
  • Schedule / timelines
  • Logistics and procedures
  • Variations on outcomes measured

Each individual study is expected to be its own publication, with journal partners being the main authors.

In addition, the main journal implementation team is expected to be a coauthor on the COS-led meta-analysis.


Individual Trial Design

All trials under this project umbrella follow roughly the same basic trial design, starting with who is eligible.


Standard manuscript submissions that receive a revise-and-resubmit decision requesting new data are eligible. The editorial team identifies eligible submissions before randomization, between the initial editorial decision and the senior/final decision.


Trial Arms

If consent is obtained and eligible peer review comments are identified, the author team is randomized to either the standard procedure or Registered Revisions.

If assigned to the Registered Revisions arm:

  • The author team drafts a Revision Plan describing the procedures for the new data collection, analyses, and outcomes, before collecting new data or performing the planned analyses
  • Editors and/or peer reviewers review the Revision Plan and (if/when approved) issue an In-Principle Acceptance (IPA), accepting the article regardless of the results of the planned data collection and analyses.

If assigned to the Standard Procedures arm:

  • The author team, editors, and peer reviewers proceed through the peer review and revisions process as would be standard for that journal.

For either arm, the trial team tracks key outcomes data using COS-designed data collection systems, as discussed in the next tab.
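The eligibility and randomization steps above can be sketched in a few lines. This is a hypothetical illustration of the flow only; the names and structure here are made up for clarity and do not reflect COS's actual data collection systems.

```python
import random
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Submission:
    """One revise-and-resubmit manuscript tracked by the trial (illustrative)."""
    manuscript_id: str
    requests_new_data: bool          # eligibility: reviewers asked for new data/analysis
    consented: bool = False          # author team consented to participate
    arm: Optional[str] = None
    outcomes: dict = field(default_factory=dict)

def randomize(sub: Submission, rng: random.Random) -> Optional[str]:
    """Assign an eligible, consenting submission to a trial arm; else skip it."""
    if not (sub.requests_new_data and sub.consented):
        return None                  # ineligible or no consent: not randomized
    sub.arm = rng.choice(["standard", "registered_revisions"])
    return sub.arm
```

In a real implementation, the seeded `random.Random` instance would be replaced by the trial's concealed allocation procedure, and key outcome data for both arms would be logged against each `Submission` record.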


Trial Outcomes

The expected outcomes are a combination of process outcomes (e.g., time to final decisions, acceptance/rejection rates), research outcomes (e.g., statistical significance, effect sizes), and satisfaction outcomes (e.g., how researchers and editors feel about the process).


Primary Outcomes

  • Measures of statistical significance and/or uncertainty (e.g. p-values, standard errors, and confidence intervals) for new data/analysis revisions
  • Proportion of articles accepted
  • Timelines from revision to final decisions

Secondary outcomes

  • Journal and author subjective experience with registered revisions
  • Any outcomes of interest to the journal(s)  


Next Steps

Current Phase: Piloting

COS is looking for partner journals to test the idea, help inform design decision-making, and build experience useful for all other journals.

These initial experiments will involve COS more intensively, and could even include embedding COS researchers in the journal editorial team as part of the peer review process.

Journals in the pilot process will be first to publish their findings, will support the longer term efforts, and will have strong influence on future designs and research.

Timeline: COS is aiming to begin initial pilot studies by Q2 2024, begin main journal RCTs by Q4 2024, and complete living review by Q2 2027.


Where does this fit into COS’ agenda?

This project is part of a larger umbrella examining the impact of Registered Reports. This includes randomized trials at the idea phase, data collection phase, and additional journal-level experiments.

This effort is a first of its kind in several ways, including:

  • New framework and method for large scale collaborative study
  • New kind of within-journal policy experimentation
  • A subject of interest that is a modern intervention to improve publication outcomes

We hope that this paves the way for future experimental study.


Pathfinding Pilot

The pilot for this project is a pathfinding project to prepare for the main phase. The ultimate goal of the pilot phase is to successfully implement multiple test versions of the main phase. In this pilot we will be:

  • Generating the infrastructure needed to implement these trials
  • Writing guidance
  • Documenting what worked (and more importantly, what didn't)
  • Exploring different variations of the intervention
  • Charting a path for how to manage this project within editorial submission systems.

By the end of the pilot, we will have built the full kit needed for journals to successfully run their own experiments and gained a large amount of experience to support them. Participating journal editors will be coauthors on at least one published manuscript, plus any additional projects that spin off from this main project.

The main deliverables of the pilot phase are:

  • Full kit of documentation and recommendations for journals to run their own experiments
  • Manuscript describing the process, the experiments run, and (possibly) an early look at results

We are planning on having 5-10 pilot journals involved, and will be updating this page with the journals shortly.

Currently, the following journals are official partners for the pilot:

  • Evidence-Based Toxicology
  • PLOS Biology
  • The Leadership Quarterly
  • Scientific Reports
  • Analyses of Social Issues and Public Policy
  • Science and Medicine in Football
  • more soon....

If you would like to join the pilot group, we would love to have you! Please e-mail Noah Haber and Macie Daley.

Workflows, details, and documentation

Concerned about workflows for implementing the trial and the Registered Revisions policy? So are we! We have been working on developing everything needed to make this work in actual journals.

Workflow Overview Examples

We have developed and are currently testing an array of tools and documentation focused on usability, including:

  • A web-based participant tracking and data logging system that does not rely on editorial management software
  • Template documents for use during the editorial process
  • Step-by-step guidance for journal editors
  • Integrated instructions for participants and reviewers
  • Experience working with these systems that we can share with the community

Most of the current documents are available on our project page, which will be continually updated throughout the project.

Get in Touch!

If you are a journal editor potentially interested in participating in any part of this project, e-mail Noah Haber and Macie Daley. You can also sign up for our e-mail list for updates below.