Center for Open Science Expands Systematizing Confidence in Open Research and Evidence (SCORE) Program Efforts

Nov. 14, 2023

Charlottesville, VA - The Robert Wood Johnson Foundation (RWJF) has awarded a grant to the Center for Open Science (COS) to enhance the development of automated confidence evaluation of research claims. 

COS and its collaborators will extend their work from the Systematizing Confidence in Open Research and Evidence (SCORE) program, administered by the U.S. Defense Advanced Research Projects Agency (DARPA). The SCORE program demonstrated the potential of using algorithms to evaluate research claims efficiently and at scale. With funding from RWJF, COS enters a research and development phase to revolutionize research claim assessment by providing scalable and valid algorithmic methods.

“The SCORE project is a key part of COS’s mission of improving the rigor, reproducibility, and openness of research by creating scalable tools for assessing research credibility,” said Tim Errington, COS’s Senior Director of Research. “With the support of the Robert Wood Johnson Foundation, new research, development, and testing will extend the technologies that emerged from SCORE. These will be a benefit to all stakeholders – including research communities, policymakers, practitioners, and the broader public. These technologies have great potential to advance the reproducibility of research and to assess the confidence one should have in the claims generated.”

At scale, confidence assessment technology has the potential to empower research producers to identify evidence weaknesses, alert research consumers to potential misinformation that warrants closer scrutiny, help policymakers allocate additional resources for confident decision-making, and assist practitioners in making informed assessments to guide urgent action.

The SCORE program supplements existing research evaluation methods, including human judgment, evidence aggregation, and systematic replication. Under the RWJF’s grant, COS, in partnership with researchers at the University of Melbourne and The Pennsylvania State University, will advance the development of these efforts in four ways:

  • Expanding beyond the core social and behavioral science disciplines covered by SCORE by adding assessment of health research. Extending the program into new disciplines will help establish how generalizable the emerging algorithm solutions can be and will engage more research communities in addressing concerns about research credibility.
  • Increasing the number of algorithms providing credibility assessments through a public competition. SCORE involved four algorithmic approaches that generated independent credibility assessments. Expanding to a broader range of algorithm solutions will strengthen the overall accuracy of predictions and decrease the risk of algorithmic bias by developing many more approaches than the original program generated.
  • Prototyping algorithmic assessment of claims within the research community to learn how researchers perceive automated scoring of research credibility and what could make it most useful to this community.
  • Conducting research and user-testing on the best ways to convey algorithm scores to encourage appropriate use and mitigate risks.

This approach aims to make confidence assessment more scalable and readily deployable. It establishes a robust evidence base to guide its appropriate use and enables automated confidence assessment to complement human processes in evaluating, prioritizing, and applying research findings.

Fiona Fidler, Professor of History and Philosophy of Science and co-director of MetaMelb at the University of Melbourne, noted, “These new SCORE efforts are an exciting opportunity to extend our research methodology and to advance a dataset about confidence assessment that captures the diversity and depth of how humans reason about and judge evidence.”

Sarah Rajtmajer, Assistant Professor of Information Sciences and Technology at The Pennsylvania State University and lead of one of the four AI teams from SCORE, noted, “Over the last four years, we have been developing AI capable of scoring the replicability of claims in published scholarly papers. Our four teams have pushed the state of the art in information retrieval, natural language processing and understanding, knowledge graphs, interpretable machine learning, and artificial markets. But the more we dig in, the clearer it has become that we still have a long way to go before we have an AI that can reliably make these incredibly nuanced assessments. Through this new funding from RWJF, we can continue to develop algorithms to support confidence assessment of scholarly work and expand collaboration and integration with a much broader community of researchers.”

###

About the Center for Open Science
Founded in 2013, COS is a nonprofit culture change organization with a mission to increase openness, integrity, and reproducibility of scientific research. COS pursues this mission by building communities around open science practices, supporting metascience research, and developing and maintaining free, open source software tools, including the Open Science Framework (OSF). Learn more at cos.io.

Contact: Alexis Rice
alexis@cos.io or (434) 207-2971