Systematizing Confidence in Open Research and Evidence
There is substantial interest in the extent to which published findings in the social-behavioral sciences are reproducible and whether it is possible to predict which findings will reproduce. Large-scale replication and prediction market projects in some subfields, particularly parts of psychology and economics, have provided initial evidence that reproducibility may be lower than expected or desired, and that expert surveys and prediction markets may be effective at identifying reproducible findings.
There is still much to learn about reproducibility across business, economics, education, political science, psychology, sociology, and other social-behavioral sciences. To better assess and predict the replicability of social-behavioral science findings, the Center for Open Science, in partnership with the Defense Advanced Research Projects Agency (DARPA), is working to advance this understanding.
The project scope is as follows:
Additional teams are currently assessing the 3,000 papers, using humans or machines to generate predictions (scores) of the reproducibility of the primary findings. If successful, the project will provide evidence about methods for rapidly assessing the credibility of findings and will identify features that can improve credibility and reproducibility. Completing this project will require large-scale collaboration of experts across social-behavioral research communities.
These journals are likely to define the population of papers and findings eligible for inclusion in this project. The selection principles were to obtain good representation of journals from a defined set of social-behavioral science domains, achieve diversity of subdisciplines within those domains, prioritize higher-impact journals (citations per article) as defined by the Scimago database, prioritize larger journals likely to have at least 50 eligible papers per year, and prioritize journals likely to publish papers eligible for inclusion in this project, that is, papers reporting empirical research with a statistical inference test corresponding to a research claim.
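As a rough illustration only (not part of the project's actual tooling), the screening logic could look something like the Python sketch below; the field names and numeric thresholds are hypothetical stand-ins for the Scimago-derived impact and size criteria described above.

```python
# Hypothetical sketch of the journal screening described above.
# Field names and thresholds are illustrative assumptions, not the project's actual criteria.

journals = [
    {"name": "Journal A", "domain": "psychology", "citations_per_article": 3.2,
     "eligible_papers_per_year": 120, "reports_statistical_claims": True},
    {"name": "Journal B", "domain": "sociology", "citations_per_article": 0.8,
     "eligible_papers_per_year": 35, "reports_statistical_claims": True},
]

def passes_screen(journal, min_impact=1.0, min_papers=50):
    """Impact, size, and eligibility screens; domain representation and
    subdiscipline diversity would be balanced in a separate step."""
    return (journal["citations_per_article"] >= min_impact
            and journal["eligible_papers_per_year"] >= min_papers
            and journal["reports_statistical_claims"])

candidate_journals = [j for j in journals if passes_screen(j)]
print([j["name"] for j in candidate_journals])
```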
Replication and reproduction teams follow open science best practice by preregistering the project design and analysis plan, preparing and sharing research materials and code, and — to the extent ethically possible — making data openly available. Teams provide a brief final report on the outcome of the replication attempt for integration into the full dataset. The coordinating team provides support and active management throughout the project.
Beyond participation in perhaps the largest collaborative social-behavioral research project ever conducted, participating individuals and teams will:
The core team codes and prepares papers for possible replication or reproduction, which are then matched with appropriate individuals or labs to conduct the research. The papers are randomly selected from the 3,000 being assigned confidence scores by human experts or automated methods.
Individuals or teams that are matched to a paper either conduct a high-powered replication or reanalyze the original data to reproduce the original finding. Individuals or labs interested in joining the project can complete this short interest form, and then sign up for a discussion list for regular updates about the matching and replication process. Matched labs attend a virtual onboarding session to review the process for their study.
Grants are available for individuals or teams that are matched with a study. The grant agreement specifies the amount of funding needed to conduct the research, the timeline of deliverables, and the terms of the agreement. This process can run concurrently with seeking local ethics review and designing the preregistration. Note: to receive DARPA funds, the team must be able to obtain IRB approval from an institutional review board that has an active Federalwide Assurance (FWA). Labs that do not meet this requirement can participate in SCORE for studies that do not involve new data collection or for new data collections that do not require funds for collecting data with human participants.
All replication and reproduction studies are preregistered. To learn more about preregistration, see: https://cos.io/prereg/. Teams will use this template and make explicit the design decisions made to adapt the original study for replication or reproduction of the main claim. The primary goal is to design a fair test of the original claim. Once the preregistration fully specifies the design and analysis plan, it is put through peer review. Teams receive $500 when their preregistration design is complete and ready to be sent for review.
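One design decision a preregistration typically fixes in advance is the sample size needed for a high-powered test of the original effect. The sketch below is purely illustrative; the effect size, alpha, and power values are assumed numbers rather than project requirements, and it uses the power calculations available in statsmodels.

```python
# Illustrative only: a priori sample-size calculation for a two-group replication.
# The effect size, alpha, and power values are assumptions, not SCORE requirements.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
n_per_group = power_analysis.solve_power(
    effect_size=0.4,   # assumed standardized effect (Cohen's d) from the original paper
    alpha=0.05,        # two-sided significance threshold
    power=0.90,        # target power for the replication
    alternative="two-sided",
)
print(f"Participants needed per group: {n_per_group:.0f}")
```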
Every preregistration undergoes peer review to maximize the quality of the replication studies and the clarity and completeness of the preregistrations. Independent reviewers from a reviewer pool and the original author(s) will have access to the design to provide comments and suggestions. Replication teams will work with them in real time to improve the design and resolve any open issues. An Editor monitors the process and facilitates resolution of sticking points if they occur. Editors and reviewers for this process are listed here. Once the preregistration is approved, replication teams are ready to collect data, pending local and U.S. federal ethics review approval if needed.
Replication teams submit the research protocol to their local research ethics committees. If a lab receives study funds from our team, there are specific federal requirements to meet. The coordinating team provides guides to navigate those requirements.
After approval from local ethics review, documentation will be submitted for U.S. federal ethics review. This is needed whenever federal study funds are used for human subjects research. The local ethics review committee must have an active FWA. Labs can check the FWA database but should confirm with their local IRB, as the database is not perfectly reliable. The core team assists with collecting the documentation and submits it to the U.S. federal ethics review office.
While awaiting ethics approval, teams will draft their analysis scripts and final reports in a structured format, with placeholders for the analytic outcomes. The format facilitates reusing as much content from the preregistration as possible to streamline the process and increase consistency between plans and reported outcomes. Preparing the reports during this waiting period maximizes the time available for data collection and minimizes the time needed for analysis and report writing after data collection is complete. This is essential because of the aggressive timeline mandated by the project funding.
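As a minimal sketch of what such a structured draft might look like (the placeholder names and workflow here are assumptions, not the project's actual report template), a report section can be written with named placeholders that remain blank until the preregistered analysis has been run:

```python
# Illustrative sketch of drafting a report section with placeholders before data arrive.
# Placeholder names and values are hypothetical, not the project's report format.
from string import Template

report_template = Template(
    "The replication yielded an effect of $effect_size "
    "(95% CI [$ci_low, $ci_high]), $outcome the preregistered criterion."
)

# Before data collection: safe_substitute leaves unfilled placeholders untouched.
draft = report_template.safe_substitute()
# After the preregistered analysis: fill in the outcomes.
final = report_template.substitute(effect_size="d = 0.31", ci_low="0.12",
                                   ci_high="0.50", outcome="meeting")
print(draft)
print(final)
```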
Chris Chartier
Director, Psychological Science Accelerator
Associate Professor, Psychology, Ashland University
Melissa Kline
Research Scientist, Center for Open Science
Sam Field
Research Scientist, Center for Open Science
Nick Fox
Research Scientist, Center for Open Science
Andrew Tyner
Research Scientist, Center for Open Science
Anna Abatayo
Research Scientist, Center for Open Science
Zachary Loomas
Project Coordinator, Center for Open Science
Olivia Miske
Project Coordinator, Center for Open Science
Bri Luis
Project Coordinator, Center for Open Science
Simon Parsons
Data Manager, Center for Open Science
Titipat Achakulwisut
Graduate Student, University of Pennsylvania
Konrad Kording
Professor, Neuroscience & Bioengineering, University of Pennsylvania
Daniel Acuna
Assistant Professor, Information Studies, Syracuse University
Beatrix Arendt
Program Manager, Center for Open Science
Tim Errington
Director of Research, Center for Open Science
Brian Nosek
Executive Director, Center for Open Science
Professor, Psychology, University of Virginia
Center for Open Science
210 Ridge McIntire Road
Suite 500
Charlottesville, VA 22903-5083
Email: contact@cos.io