The Decline Effect: Perceptions Versus Evidence

November 28, 2023

My first experience attending a conference with Extrasensory Perception (ESP) researchers was in October 2012. Unexpectedly, we had something in common. I was interested in how the system of professional incentives in research that favors positive and novel results might be distorting the evidence base in published findings. The ESP researchers were sharing that initially promising findings demonstrating precognition seemed to disappear upon trying to replicate them. We were brought together at UC Santa Barbara by Jonathan Schooler, who was then writing about the “decline effect” -- how findings across research domains fade away as they are investigated further.

The meeting offered a fascinating and diverse mix of explanations for the decline effect. Jon Krosnick talked about problems with sampling and heterogeneity. Leif Nelson talked about p-hacking. Some ESP researchers offered unconventional explanations suggesting that the act of observation affected the existence of the phenomenon. Schooler, well, he happily entertained all of these possibilities.

Other things were happening at the same time in research. Nelson and colleagues were working on the p-curve to assess the evidential value of a set of findings. Krosnick was replicating key survey research findings under a variety of sampling conditions. Schooler would soon begin participating in the first registered replication report at Perspectives on Psychological Science of his classic verbal overshadowing effect. And, Jeff Spies and I were in discussions with philanthropists and private funders that would lead to launching the non-profit Center for Open Science (COS) in early 2013, to take over two projects from my lab -- the Open Science Framework (OSF) and the Reproducibility Project: Psychology -- and then some.

The meeting at UC Santa Barbara ended with lots of ideas and interest, but not a lot of evidence other than the fact that the decline effect shows up…everywhere.

A basic question was: Is the decline effect inevitable? What would happen if we designed a study to make discoveries and replicate them many times? What if we designed it to rule out all of the conventional explanations of publication bias, p-hacking, low rigor, imprecise estimation, and weak methodology? Could we get rid of the decline effect?

Krosnick, Nelson, Schooler, and I were intrigued -- so were Jan Walleczek and Bruce Fetzer from the Fetzer Franklin Fund, who sponsored the conference. We gathered again in 2013 and designed a prospective replication study. Each of our four labs would do its "typical" discovery work in its domain and then, whenever we believed we had found something interesting, that finding would be put through a systematic process that included everything we could think of to maximize the quality, rigor, and replicability of the finding: a confirmatory study, large sample sizes, replications by each lab, sharing materials across labs, and preregistration of all studies. These were all the types of actions that were gaining interest and early adoption in our fields as possible ways to avoid those conventional explanations for declining replicability.

Now, 10 years later, so much has happened. COS, along with other reform-minded individuals and groups, has enabled transparency and rigor-enhancing practices across research communities, with the strongest adoption so far in the social-behavioral sciences. And, after many years of effort, the outcomes of this prospective replication project were published in Nature Human Behaviour. The bottom-line finding: we observed very high replicability and no decline effect at all. As a proof-of-concept, it seems that the decline effect is not inevitable, and perhaps the accelerating adoption of these practices across the research community will result in similarly notable improvements in the replicability of findings more generally.

The last decade has been a deeply fulfilling experience. An interdisciplinary community of reform-minded researchers and stakeholders is pursuing many innovations to improve the credibility of research. An interdisciplinary community of metascience researchers is assessing the state of current practice and evaluating the effectiveness of the reformers’ innovations. And, an interdisciplinary community of researchers is testing, adapting, improving, and adopting these rigor-enhancing practices into their work. Together, they are fostering a global shift in research norms and standards toward open scholarship.

Looking ahead, it will be crucial over the next 10 years to use the innovation and momentum created by these communities of practice and scale the effective solutions into sustained changes to the research culture. COS has a role to play in scaling culture change by providing open source infrastructure that makes the practices possible, connecting with other tools researchers use to make the practices easy, connecting and catalyzing communities to make the practices normative, working with stakeholders to change incentives to make the practices rewarding, and supporting changes in policies to make the practices required.

COS co-creates, maintains, and makes available open research public goods for the public good. To continue doing so, we rely on the generous support of individual donors who believe in our mission and want to improve and accelerate discovery. Thank you for your help as we head into a second decade of improving openness, integrity, and reproducibility of research.

Consider supporting the Center for Open Science with a tax-deductible donation before December 31.

Donate Now
