Building a Publishing Model for Replication: Q&A with the Senior Editors of Replication Research

Written by Center for Open Science | Feb 27, 2026 1:41:40 PM

Replication is widely recognized as essential to credible science, yet research incentive systems haven't always supported it. In many fields, replication studies remain difficult to publish, undervalued in career advancement, and disconnected from the infrastructure needed for transparent workflows. Meanwhile, conversations about research reform—from open access to new evaluation models—have accelerated across the scholarly ecosystem.

Replication Research (R2) emerged at this intersection. Launched as a community-led Diamond Open Access journal, R2 makes replication studies more discoverable, publishable, and rigorously evaluated—without subscription barriers or author fees. Its model reflects broader efforts to realign publishing with core research values: transparency, methodological rigor, and knowledge as a public good.

Ahead of Love Replications Week (March 2-6), we spoke with Flavio Azevedo, Lukas Wallrich, and Lukas Röseler, senior editors of R2, about the vision behind launching it, how it ensures quality and reproducibility, and where the journal fits in the shifting landscape of research assessment and scholarly communication.


Q: How did R2 come together? What gap(s) in the current publishing landscape was it created to address?

A: At the heart of R2 is our mission to support reproduction and replication studies. We have been collecting replications from many different disciplines in the FORRT Replication Database for years now and saw that lots of them are difficult to discover because they are buried deep in meta-scientific papers or because they are only published as preprints. We also knew that getting them published in journals was difficult. At the same time, publications are most researchers’ currency. Thus it seemed logical that if we wanted to motivate researchers to conduct repetitions, there needed to be more journals that publish them.


Q: What barriers do researchers face when publishing replication work, and how does R2 address them?

A: First, although it shouldn’t, repeating somebody’s work feels a little personal. This sits uneasily with how most fields handle criticism, so researchers may feel uncomfortable criticizing the authors of an original study. Second, repetitions are often assessed by their outcomes: if you fail to replicate the study, the assumption is that you did something wrong, and if you succeed, the result is what everybody expected anyway, so either way there is supposedly no need to report it. Third, researchers are rarely taught how to approach repetitions. Many of their special features are not part of common curricula, such as how and when to deviate from the original study, whether to attempt a reproduction or a replication, or how to formally define replication success.

As a journal but also as a community, we support researchers in overcoming all of these barriers. We invite a mix of original and independent reviewers to engage in open peer review, and we enforce reviewer guidelines that prohibit outcome-based assessments and instead require constructive recommendations about method and rationale. We also encourage submissions of Registered Reports, directly to us or via PCI-RR, and optionally offer results-blind review of reproduction studies. To compensate for the lack of teaching, we conduct annual workshops on reproduction and replication studies and have been writing a free handbook with experts from various fields.


Q: R2 was launched as a community-led, Diamond Open Access journal rather than in partnership with a commercial publisher. What motivated that decision?

A: Like every FORRT project, the entire process of creating this journal was open for everybody to join. We had an online discussion series, a hybrid symposium, and an open call for contributors. Having the community create a journal to its own liking mirrors our concept of research (and scholarship more generally): the knowledge we produce is a public good. Subscription publishing models, hybrid journals, and Gold Open Access do not treat knowledge like a public good, but Diamond Open Access does. Any researcher can submit and read what we publish. They don’t need to be situated in a specific country or at a specific institution. We do not exclude people because they cannot pay for access to research.


Q: How does R2 approach quality, rigor, and reproducibility in its editorial and review processes—particularly for work that may challenge established findings?

A: R2’s quality assurance rests on four pillars. Together, they let authors publishing with us make the strongest possible case that they care about quality and reproducibility:

First, all submissions that fit the journal’s scope undergo an initial editorial assessment. This includes manual checks (e.g., disclosure of potential COIs, data transparency), automated checks by Metacheck (e.g., reference consistency, effect size reporting, URL integrity, citation of retracted studies), and an LLM-based check whose output the editorial office filters after reading the article. We then either ask for revisions or forward a list of issues to be addressed during peer review to the handling editor.
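To make the flavor of these automated checks concrete, here is a minimal sketch of one such check (URL integrity). R2’s actual checks are run by Metacheck; this standalone example only conveys the general idea, and every name in it is invented for illustration.

```python
# Hypothetical sketch of an automated URL-integrity check of the kind an
# editorial assessment might run. Not Metacheck's actual implementation.
import re
import urllib.request

# Rough pattern for http(s) URLs embedded in manuscript text
URL_PATTERN = re.compile(r"https?://[^\s)>\]]+")

def extract_urls(manuscript_text):
    """Collect every URL mentioned in the manuscript."""
    return URL_PATTERN.findall(manuscript_text)

def check_urls(manuscript_text, timeout=5):
    """Return (url, status) pairs; anything non-200 flags a broken link."""
    results = []
    for url in extract_urls(manuscript_text):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results.append((url, resp.status))
        except Exception as exc:  # DNS failure, 404, timeout, ...
            results.append((url, f"unreachable: {exc.__class__.__name__}"))
    return results
```

In practice a tool like this would run alongside reference-consistency and retraction-database lookups, with a human editor reviewing the flagged items rather than acting on them automatically.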

Second, peer review is open by default. That is, names are not anonymized (though early career researchers can request pseudonymization if they fear retaliation from the authors), and the review reports produced by the journal are published after the final decision, with all contributors credited. Unlike most other journals, we also publish reviews for rejected articles so that other journals can reuse them.

Third, after acceptance, we conduct a check for numerical reproducibility that is based on the CODECHECK ecosystem. While we do not review code or check for robustness, we certify that the code worked on a different machine and produced the same results.
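The core of such a reproducibility check is simple: rerun the analysis on another machine and compare the freshly generated statistics with the published ones. The sketch below illustrates only that comparison step; it is not R2’s or CODECHECK’s actual tooling, and the statistic names and tolerance are invented for the example.

```python
# Minimal sketch of the comparison behind a numerical reproducibility
# check: do regenerated statistics match the published ones?
import math

def outputs_reproduced(published, regenerated, rel_tol=1e-9):
    """True if every published statistic was regenerated within tolerance."""
    if published.keys() != regenerated.keys():
        return False
    return all(
        math.isclose(published[k], regenerated[k], rel_tol=rel_tol)
        for k in published
    )

# Statistics as reported in the paper vs. values from a fresh run
published = {"effect_size_d": 0.42, "p_value": 0.013}
regenerated = {"effect_size_d": 0.42, "p_value": 0.013}
print(outputs_reproduced(published, regenerated))  # True -> certify
```

A small relative tolerance is used because rerunning on different hardware or library versions can shift floating-point results in the last digits without undermining the reported findings.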

Fourth, we have an optional Social Responsibility Evaluation (SRE). With R2, we want to offer a place for findings that researchers can trust. At the same time, we anticipate receiving submissions that attempt to replicate findings with high societal relevance, such as on vaccination or social media consumption. In such cases, anybody involved in the review process can trigger the SRE: in addition to reviewers, we invite a social responsibility champion who reviews the manuscript’s section contextualizing the finding, for example by discussing relevant parts of its history and potential implications for marginalized groups.


Q: How does R2 fit into broader changes in how research is shared, evaluated, and sustained?

A: Having relied on shortcuts for decades, we think we have all begun to feel the repercussions: commercial publishers hold our research hostage and keep raising prices, a focus on citation metrics has given rise to paper mills and predatory publishers, and core values such as replicability have been neglected to a point where we seriously risk losing citizens’ trust in research. Many institutions are struggling, and the question arises: should we lay off people, or should we cancel subscriptions to a commercial publisher’s journals? And this does not even touch the problems of fraud and AI slop that are driving us towards more rigorous quality control.

To solve these problems, research assessment and research infrastructure have to change simultaneously. With the Declaration on Research Assessment and the Coalition for Advancing Research Assessment, we are moving towards assessing quality rather than quantity. By providing R2, we are doing our part to offer open research infrastructure that aligns with what we consider good research. We are of course not the first to recognize this: there are already over 50 documented cases of editorial teams resigning, often to create their own Diamond Open Access journals, with more in the works.


Q: What are some ways that members of the research community can get involved with R2 and replication-focused efforts more broadly?

A: R2 is not something that a small group of people can achieve. It is but one piece of a holistic approach to support repetitions, one of many projects of the FORRT Replication Hub, to which over 250 people have contributed so far. To learn about everything that is going on there, you can join the open Slack channel or sign up for Love Replications Week in early March. Take a look at some of our projects, and if you have an idea about something that you would like to see added or improved, just get in touch.

For example, you can browse through over one thousand replications in the FReD Explorer and run your literature list through the Annotator. The latter will tell you which of the studies on the list you enter have been subject to replication attempts. You will certainly notice that most studies have not been replicated (yet!) and may want to do it yourself. For this, the Institute for Replication organizes regular replication games (a great opportunity to meet other researchers interested in repeatability), or you could just conduct an independent replication. Our handbook guides you through the entire process, from choosing a target to publishing. We are not the only journal for replications, but if you choose one, we highly recommend going with a Diamond Open Access journal.