Impact of Registered Revisions Within the Journal Peer Review Process

Interested? Get in touch! Email Noah Haber and Macie Daley.

Project Overview

Registered Revisions

Publication pre-commitment devices such as Preregistration and Registered Reports may substantially reduce publication biases, prepublication biases (e.g. p-hacking and HARKing), and other questionable research practices. This study explores a related device: Registered Revisions.

Registered Revisions are a pre-commitment device that works like a miniature Registered Report conducted during journal peer review. When reviewers ask for additional data and/or analysis, authors can propose a protocol detailing how those additional data and analyses will be produced. Reviewers and editors can then agree to an In-Principle Acceptance (IPA) of the publication on the basis of this protocol, regardless of what the results turn out to be.

In theory, this style of review may reduce the impact of questionable research practices and publication biases, reduce uncertainty about the peer review process, and shorten review timelines by preventing multiple back-and-forth rounds of review.


The Design

The Center for Open Science (COS) is leading a semi-centrally organized set of within-journal randomized experiments on Registered Revisions under one umbrella. COS provides design and support for journals and journal consortia to perform in-journal randomized experiments testing Registered Revisions.

Between journals and journal consortia



Within journals and journal consortia



Getting Involved

COS is currently organizing a pilot study. If you are a journal editor or publisher interested in being a part of this experiment, please email Noah Haber and Macie Daley.

Additional details, including detailed protocols and the data and code repository, will be made available on our OSF page.

This research is funded by the NSF (grant #2152424)

Umbrella Design


Rather than one study with many journals, COS is fostering a many-studies approach under the umbrella of a prospective living meta-analysis. This design:

  • Offers a feasible approach to policy-implementation experiments that would otherwise require an unrealistic degree of coordination and editorial homogeneity
  • Represents a more realistic roll-out of the Registered Revisions policy, as each journal or journal group implements it in its own way
  • Shares experience and guidance across journal editorial teams
  • Guarantees that small trials are part of a larger evidence base, preventing research waste
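As a rough illustration of how a living meta-analysis lets many small trials add up to one evidence base, the sketch below applies standard DerSimonian-Laird random-effects pooling to hypothetical per-journal effect estimates. The function name, the numbers, and the choice of estimator are assumptions for illustration only, not the project's actual analysis plan.

```python
import math

def pool_random_effects(estimates, variances):
    """DerSimonian-Laird random-effects pooling of per-study estimates.

    Returns (pooled_estimate, standard_error, tau_squared).
    """
    # Inverse-variance (fixed-effect) weights and pooled mean
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sw
    # Cochran's Q heterogeneity statistic and between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # Re-weight each study with tau^2 added to its sampling variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical per-journal estimates (e.g. RR-vs-standard difference
# in acceptance rates) with their sampling variances -- made-up numbers.
estimates = [0.02, 0.20, 0.10]
variances = [0.002, 0.002, 0.002]
pooled, se, tau2 = pool_random_effects(estimates, variances)
```

Each journal's trial contributes one estimate and variance; the pooled estimate and between-study variance can be recomputed as new trials report, which is the "living" part of the design.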



The expected outcomes are a combination of process outcomes (e.g. time to final decisions, acceptance/rejection rates, etc.), research outcomes (e.g. statistical significance, effect sizes, etc.), and satisfaction outcomes (e.g. how researchers and editors feel about the process).


Primary Outcomes

  • Measures of statistical significance and/or uncertainty (e.g. p-values, standard errors, and confidence intervals) for new data/analysis revisions
  • Proportion of articles accepted
  • Timelines from revision to final decisions

Secondary Outcomes

  • Journal and author subjective experience with registered revisions
  • Any outcomes of interest to the journal(s)  


How the Collaboration Works

COS provides strong study design and implementation support, while the journal (or journal group) editorial team implements its own logistical procedures and its own take on the design.


On the Journal

Journals have full ownership of their RCT. Each RCT is tailored to the journal's individual needs and preferences for:

  • Specific intervention policy design
  • Schedule / timelines
  • Outcomes measured 

Each individual study is expected to be its own publication, with journal partners being the main authors.

In addition, the main journal implementation team is expected to be a coauthor on the COS-led meta-analysis.

COS Provides

Boilerplate study design, including:

  • Protocol
  • Survey materials
  • Data collection procedures
  • Data analysis design
  • IRB pathway and materials
  • Centralized communication
  • Implementation, design, and logistical support
  • Assurance that each small-scale experiment contributes to large-scale, high-quality evidence


RCT Study Design


Standard manuscript submissions that receive a revise-and-resubmit decision requesting new data are eligible. The editorial team identifies eligible submissions before randomization, between the initial editorial decision and the senior/final decision.



If consent is obtained, the author team is randomized to either the standard procedure or the Registered Revisions (RR) arm.
If randomized to the RR arm:

  • The author team drafts a protocol describing procedures for the new data collection, analyses, and outcomes before collecting any new data
  • Editors and/or peer reviewers review the protocol and, if approved, issue an In-Principle Acceptance (IPA) that holds regardless of the results
  • The journal team tracks outcomes and implements any data collection
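The eligibility, consent, and randomization steps above can be sketched roughly as follows. The function, the arm labels, and the 1:1 allocation are illustrative assumptions, not the study's actual randomization procedure.

```python
import random

def assign_arm(requests_new_data, consented, rng):
    """Illustrative eligibility -> consent -> randomization flow for one submission."""
    # Eligible: a revise-and-resubmit decision that requests new data
    if not requests_new_data:
        return "ineligible"
    # Non-consenting author teams follow the standard process, unrandomized
    if not consented:
        return "standard (not randomized)"
    # Assumed 1:1 randomization between standard review and Registered Revisions
    return rng.choice(["standard", "registered_revisions"])

rng = random.Random(2024)  # seeded so the allocation sequence is reproducible
arms = [assign_arm(True, True, rng) for _ in range(200)]
```

In practice randomization would happen inside the editorial submission system, but the decision logic reduces to a gate like this.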


Next Steps

Current Phase: Piloting

COS is looking for partner journals to test the idea, help inform design decision-making, and generate experience useful for all other journals.

These initial experiments will involve COS more intensively, and could even include embedding COS researchers in the journal editorial team as part of the peer review process.

Journals in the pilot process will be first to publish their findings, will support the longer term efforts, and will have strong influence on future designs and research.

Timeline: COS is aiming to begin initial pilot studies by Q4 2023, begin the main journal RCTs by Q2 2024, and complete the living review by Q2 2027.


Where does this fit into COS’ agenda?

This project is part of a larger umbrella examining the impact of Registered Reports. This includes randomized trials at the idea phase, data collection phase, and additional journal-level experiments.

This effort is a first of its kind in several ways, including:

  • New framework and method for large scale collaborative study
  • New kind of within-journal policy experimentation
  • Subject of interest is a modern intervention to improve publication outcomes

We hope that this paves the way for future experimental study.



The pilot phase is a pathfinding effort to prepare for the main phase; its ultimate goal is to successfully implement multiple test versions of the main phase. In this pilot we will be:

  • Generating the infrastructure needed to implement these trials
  • Writing guidance
  • Documenting what worked (and more importantly, what didn't)
  • Exploring different variations of the intervention
  • Charting a path for how to manage this project within editorial submission systems.

By the end of the pilot, we will have built the full kit journals need to successfully run their own experiments, and gained substantial experience to support them. Participating journal editors will be coauthors on at least one published manuscript, plus any additional projects that spin off from this main project.

The main deliverables of the pilot phase are:

  • Full kit of documentation and recommendations for journals to run their own experiments
  • Manuscript describing the process, the experiments run, and (possibly) an early look at results


We are planning on having 5-10 pilot journals involved, and will be updating this page with the journals shortly.

If you would like to join the pilot group, we would love to have you! Please e-mail Noah Haber and Macie Daley.

Get in Touch!

If you are a journal editor potentially interested in participating in any part of this project, e-mail Noah Haber and Macie Daley. You can also sign up for our e-mail list for updates below.