The Reforms Are Working: Evidence That the Credibility of Social-Behavioral Sciences Can Be Improved

November 9, 2023

The landscape of social-behavioral sciences, particularly psychology, underwent a significant transformation in the early 2010s. A series of events spurred a collective soul-searching within the academic community, raising doubts about the credibility of published research. Studies like the Reproducibility Project: Psychology (RP:P) and the Many Labs projects sounded an alarm, highlighting that the literature was not as replicable as once assumed.

In response to this challenge, numerous solutions were proposed to increase the rigor, credibility, and replicability of research findings, including preregistration, larger sample sizes, and more open sharing of research materials. However, a counterargument held that the low replicability of social-behavioral science findings was an inherent consequence of studying complex phenomena sensitive to numerous variables.

Enter a paper published in Nature Human Behaviour this week, suggesting that high replicability, even in complex social systems, is achievable. The paper reports the outcome of a six-year prospective replication project. Four social-behavioral research labs from the University of California, Santa Barbara (UCSB), Stanford University, the University of California, Berkeley (UC Berkeley), and the University of Virginia committed to adopting several rigor-enhancing practices and to replicating each other's novel discoveries. Across 16 novel discoveries, replicability rates were extremely high. On average, independent replication effect sizes were 97% the size of the original lab's confirmatory test, much higher than in prior systematic replication projects, in which replication effect sizes averaged about 50% of the original findings. The paper demonstrates that low replicability of social-behavioral studies is not inevitable; the presumed rigor-enhancing behaviors are indeed associated with high replicability.
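As a rough way to read those figures (a sketch of the comparison; the paper's exact aggregation across studies may differ), the headline number is a ratio of effect sizes:

$$\text{relative effect size} = \frac{ES_{\text{replication}}}{ES_{\text{original}}}$$

On this reading, an average near 0.97 means the replications recovered effects nearly as large as the original confirmatory tests, whereas prior systematic projects averaged closer to 0.5.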

The principal investigators of the four labs were Jonathan Schooler of UCSB's META Lab, Jon Krosnick of Stanford's Political Psychology Research Group, Leif Nelson at UC Berkeley's Haas School of Business, and Brian Nosek, who is affiliated with the University of Virginia and is the Executive Director of the Center for Open Science. John Protzko of UCSB's META Lab and Central Connecticut State University led project management of the entire effort.

The adoption of larger samples, preregistration, and transparent sharing of materials and protocols has been on the rise over the past decade, not only in social-behavioral sciences but across various scholarly domains. This paper serves as a proof-of-concept, enabled by the Open Science Framework (OSF) and advocated by the Center for Open Science (COS), that these solutions can indeed enhance replicability.

Moreover, major research stakeholders like journals, funders, and others are embracing policies and incentives to encourage greater rigor and transparency. This trend further accelerates the adoption of these behaviors, fostering a research environment that prioritizes the integrity and credibility of scientific findings.

The study implemented a "kitchen sink" approach, combining multiple rigor-enhancing behaviors to assess whether high replicability was attainable in principle. As a result, it cannot isolate the contribution of any single behavior, and the extent to which these behaviors yield benefits across a variety of research domains and methodologies remains unclear.

As the metascience community continues to evaluate the benefits of open practices, it will become clearer how each specific behavior contributes and where its limitations lie in enhancing research credibility. Establishing replicability, after all, does not guarantee validity or importance. A highly replicable finding could still be flawed due to confounding variables or misinterpretation of the evidence. Likewise, a finding could be highly replicable yet ultimately trivial, offering little value in advancing research.

Scientists have identified challenges to the credibility of research findings, proposed solutions to address them, and adopted many of those solutions in practice. This paper suggests that these reforms are on the right track. The proposed solutions offer promising avenues for improving the scientific process, but the journey toward better research practices is ongoing, with further exploration and evaluation needed to unlock their full potential and to identify their limitations and any counterproductive impacts. With an active metascience movement interrogating these questions, we are confident that a more robust, transparent, and reliable scientific landscape is within reach.
