
SCORE

Systematizing Confidence in Open Research and Evidence

Project Overview

Assessing and Predicting Replicability of Social-Behavioral Science Findings: Call for Collaborators

There is substantial interest in the extent to which published findings in the social-behavioral sciences are reproducible and whether it is possible to predict which findings will replicate. Large-scale replication and prediction market projects in some subfields — particularly parts of psychology and economics — have provided initial evidence that reproducibility may be lower than expected or desired, and that expert surveys and prediction markets may be effective at predicting which findings are reproducible.

There is still much to learn about reproducibility across business, economics, education, political science, psychology, sociology, and other areas of the social-behavioral sciences. To better assess and predict the replicability of social-behavioral science findings, the Center for Open Science, in partnership with the Defense Advanced Research Projects Agency (DARPA), is working to advance this understanding.

The project scope is as follows:

  • Create a database of about 30,000 papers published between 2009 and 2018 from 60+ journals in the social-behavioral sciences that publish primarily empirical, non-simulated research with human participants.
  • Sample about 3,000 papers from this population and code them using human and automated methods for primary claim, key design features, and key statistics, and merge data from other sources (e.g., altmetrics, citations, open data) to help assess the credibility of the original claims.
  • Conduct replications (new data) or reproductions (reanalysis of original data) of up to 300 of these papers.

Additional teams are currently assessing the 3,000 papers using humans or machines to generate predictions (scores) of the reproducibility of the primary findings. If successful, the project will provide evidence for methods to rapidly assess the credibility of findings and identify features that can improve credibility and reproducibility. Completing this project will require large-scale collaboration of experts across social-behavioral research communities.


Included Journals

These journals are likely to define the population of possible papers and findings eligible for inclusion in this project. The selection principles were to:

  • obtain good representation of journals from a defined set of social-behavioral science domains;
  • achieve diversity of subdisciplines within those domains;
  • prioritize higher-impact journals (citations/article) as defined by the Scimago database;
  • prioritize larger journals that are likely to have at least 50 eligible papers/year; and
  • prioritize journals that are likely to have papers eligible for inclusion in this project, i.e., those reporting empirical research with a statistical inference test corresponding to a research claim.
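For illustration only, the sketch below shows one way criteria like these could be applied to a journal table. The field names, the 50 papers/year threshold applied as a filter, and the per-domain cap are assumptions made for the example; this is not the actual SCORE selection procedure or data.

    # Illustrative sketch only: fields and thresholds are assumptions,
    # not the SCORE team's actual journal-selection code.
    from dataclasses import dataclass

    @dataclass
    class Journal:
        name: str
        domain: str                    # e.g., "psychology", "economics"
        citations_per_article: float   # impact proxy, as in the Scimago database
        eligible_papers_per_year: int  # empirical papers with an inferential test

    def select_journals(journals, min_eligible_per_year=50, per_domain=10):
        """Keep larger, higher-impact journals while preserving domain diversity."""
        selected = []
        for domain in sorted({j.domain for j in journals}):
            candidates = [j for j in journals
                          if j.domain == domain
                          and j.eligible_papers_per_year >= min_eligible_per_year]
            # Within each domain, prioritize impact (citations/article).
            candidates.sort(key=lambda j: j.citations_per_article, reverse=True)
            selected.extend(candidates[:per_domain])
        return selected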


Replication and Reproduction Teams

Replication and reproduction teams follow open science best practice by preregistering the project design and analysis plan, preparing and sharing research materials and code, and — to the extent ethically possible — making data openly available. Teams provide a brief final report on the outcome of the replication attempt for integration into the full dataset. The coordinating team provides support and active management throughout the project.

Incentives for Replication/Reproduction Teams

Beyond taking part in perhaps the largest collaborative social-behavioral research project ever conducted, participating individuals and teams will:

  • Receive training and support for implementing open science best practices when conducting a replication or reproduction of published research.
  • Be free to publish the results of their replication or reproduction study.
  • Be co-authors on aggregate reports and publications of the overall findings across all replication and reproduction studies.

SCORE Project Timeline


(1) Study selection

The core team codes and prepares papers for possible replication or reproduction; these papers are then matched with appropriate individuals or labs to conduct the research. The papers are randomly selected from the 3,000 being assigned confidence scores by human experts or automated methods.
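As an illustration of this random selection step, the following minimal sketch draws a subset of papers from a list of coded paper identifiers. The identifier format, sample size, and seed are assumptions for the example, not project specifications.

    # Minimal sketch of randomly drawing papers for replication/reproduction.
    # The identifiers, sample size, and seed below are illustrative assumptions.
    import random

    def select_for_replication(coded_paper_ids, n=300, seed=2019):
        """Randomly draw up to n papers from the coded set."""
        rng = random.Random(seed)
        return rng.sample(coded_paper_ids, k=min(n, len(coded_paper_ids)))

    papers = [f"paper_{i:04d}" for i in range(3000)]   # placeholder identifiers
    chosen = select_for_replication(papers)            # 300 randomly selected papers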

(2) Matching labs with a study

Individuals or teams that are matched to a paper either conduct a high-powered replication or reanalyze the original data to reproduce the original finding. Individuals or labs interested in joining the project can complete this short interest form, and then sign up for a discussion list for regular updates about the matching and replication process. Matched labs attend a virtual onboarding session to review the process for their study.

(3) Study funds

Grants are available for individuals or teams that are matched with a study. The grant agreement specifies the amount of funding needed to conduct the research, the timeline of deliverables, and the terms of the agreement. This process can be done concurrently with seeking local ethics review and designing the preregistration. Note: To receive DARPA funds, the team must be able to obtain IRB approval from an IRB that holds an active Federalwide Assurance (FWA). Labs that do not meet this requirement can participate in SCORE for studies that do not involve new data collection or for new data collections that do not require funds for collecting data with human participants.

(4) Preregistration Design

All replication and reproduction studies are preregistered. To learn more about preregistration see: https://cos.io/prereg/. Teams will use this template and make explicit the design decisions made to adapt the original study for replication or reproduction of the main claim. The primary goal is to design a fair test of the original claim. Once the preregistration fully specifies the design and analysis plan, it goes through peer review. Teams receive $500 when their preregistration design is complete and ready to be sent for review.

(5) Preregistration Review

Every preregistration undergoes peer review to maximize the quality of the replication studies and the clarity and completeness of the preregistrations. Independent reviewers from a reviewer pool and the original author(s) will have access to the design to provide comments and suggestions. Replication teams will work with them in real time to improve the design and resolve any open issues. An Editor monitors the process and facilitates resolution of sticking points if they occur. Editors and reviewers of this process are listed here. Once the preregistration is approved, replication teams are ready to collect data pending local and U.S. federal ethics review approval if needed.

(6) Local Ethics Review

Replication teams submit the research protocol to their local research ethics committees. If a lab receives study funds from our team, there are specific federal requirements to meet. The coordinating team provides guides to navigate those requirements.

(7) U.S. Federal Ethics Review

After approval from local ethics review, documentation will be submitted for U.S. federal ethics review. This is needed whenever federal study funds are used for human subjects research. The local ethics review committee must have an active FWA. Labs can check the FWA database, but should confirm with their local IRB because the database is not perfectly reliable. The core team assists with collecting the documentation and submits it to the U.S. federal ethics review office.

(8) Beginning the replication/reproduction effort

While awaiting ethics approval, teams will draft their analysis scripts and final reports with placeholders for the analytic outcomes in a structured format. The format facilitates reusing as much of the content from the preregistration as possible to streamline the process and increase consistency between plans and reported outcomes. Preparing the reports during this waiting period maximizes the time available for data collection and minimizes the time needed afterward for analysis and report writing. This is essential because of the demanding timeline mandated by the project funding.
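As a loose illustration of what "placeholders for the analytic outcomes in a structured format" could look like in practice, the sketch below drafts a report sentence from a template before data exist and fills it in once results are available. The report wording, statistical test, and field names are invented for this example; this is not the SCORE reporting template.

    # Hedged sketch: the report sentence, test, and field names are assumptions,
    # not the structured format provided by the project.
    from string import Template

    REPORT_TEMPLATE = Template(
        "Replication result for the primary claim: t($df) = $t_stat, p = $p_value, "
        "d = $effect_size (original d = $original_d)."
    )

    def draft_report():
        """Before data collection: placeholders stay visible in the draft."""
        return REPORT_TEMPLATE.safe_substitute()

    def final_report(results):
        """After analysis: fill the same template with the computed outcomes."""
        return REPORT_TEMPLATE.substitute(results)

    print(draft_report())
    print(final_report({"df": 98, "t_stat": 2.10, "p_value": 0.038,
                        "effect_size": 0.42, "original_d": 0.61}))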


 

Request for Proposals

Human Subjects Research replication and generalizability studies

Posted: Wednesday, May 26, 2021

As part of the DARPA SCORE program, the Center for Open Science is conducting replication and generalizability studies that require research involving the use of human participants. The studies are based on claims from empirical papers published between 2009 and 2018 in approximately 60 journals in the social and behavioral sciences. Claims are traced from statements in the abstract of the article to a statistical inference test contained in the paper.
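For illustration, a traced claim could be represented with a small record like the sketch below. The field names are assumptions made for this example and are simpler than the actual SCORE coding scheme.

    # Illustrative only: field names are assumptions, not the SCORE coding scheme.
    from dataclasses import dataclass

    @dataclass
    class Claim:
        paper_doi: str           # paper published 2009-2018 in one of ~60 journals
        abstract_statement: str  # the claim as stated in the abstract
        inference_test: str      # the statistical test it traces to in the paper
        p_value: float

    example = Claim(
        paper_doi="10.0000/example",
        abstract_statement="Condition X increased outcome Y relative to control.",
        inference_test="t(48) = 2.31",
        p_value=0.025,
    )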

We seek proposals from research teams to run and complete studies that use human subjects data (either newly collected or already collected) and that require ethical review. Such projects could include:

  • A replication of a single identified claim.
  • A replication of multiple identified claims.
  • A generalizability study of an identified claim.
  • A replication using previously collected data that requires ethical review and approval prior to access.

The key criteria for proposed projects are:

  • Experience preparing and seeking ethical approval for human subjects research from a local ethics review board and concurrence by a U.S. federal human research protection office (HRPO).
  • Access to, and familiarity with, data collection techniques and populations common to social-behavioral research claims (e.g., a Qualtrics survey administered on the MTurk platform).
  • Experience conducting replications in a rigorous and open process (e.g., preregistration) and reporting results in an open and reproducible manner (e.g., making data as open as ethically appropriate).
  • The proposed efficiency of the teams (e.g., number of studies that can be conducted over a given period of time) given a target completion date of November 30, 2021.
  • Cost for conducting the work (e.g., total cost, estimated costs per study).

All awarded proposals will require preregistration and local IRB/ethics approval, followed by HRPO concurrence, before the start of data collection. Selected teams will be expected to return a final report and all associated variables, and to adhere to all relevant licensing and intellectual property requirements for use of the database of papers made available for this research project.

A maximum budget of $10,000 is allowed for each individual replication or generalizability study included in the proposal. We expect to make 12 to 22 total awards. Proposals should be no longer than two pages and address the key criteria above. A cost and budget justification should be included (note: F&A/overhead are not allowable expenses). Questions and proposals can be submitted to scorecoordinator@cos.io. Proposals will be evaluated as they are received, and the program will remain open until Wednesday, June 9, 2021.

Selection of contracts will follow federal standards for procurement transactions for competitive bids (2 CFR 200.320).


 

 

Core Team

Replication and Reproduction Sourcing

Chris Chartier
Director, Psychological Science Accelerator 
Associate Professor, Psychology, Ashland University


Replication and Reproduction Preparation and Management

Melissa Kline
Research Scientist, Center for Open Science

Sam Field
Research Scientist, Center for Open Science

Nick Fox
Research Scientist, Center for Open Science

Andrew Tyner
Research Scientist, Center for Open Science

Anna Abatayo
Research Scientist, Center for Open Science

Zachary Loomas
Project Coordinator, Center for Open Science

Olivia Miske
Project Coordinator, Center for Open Science

Bri Luis
Project Coordinator, Center for Open Science


Data Enhancement

Simon Parsons
Data Manager, Center for Open Science

Titipat Achakulwisut
Graduate Student, University of Pennsylvania

Konrad Kording
Professor, Neuroscience & Bioengineering, University of Pennsylvania

Daniel Acuna
Assistant Professor, Information Studies, Syracuse University


Project Leadership

Beatrix Arendt
Program Manager, Center for Open Science

Tim Errington
Director of Research, Center for Open Science

Brian Nosek
Executive Director, Center for Open Science 
Professor, Psychology, University of Virginia