Media Contact: pr@cos.io
A new paper published in the Proceedings of the National Academy of Sciences (PNAS) proposes a systems-level framework for evaluating the trustworthiness of research findings across methods and approaches.
The paper, “A Framework for Assessing the Trustworthiness of Research Findings,” is authored by a multidisciplinary group of research leaders with expertise in metascience, research integrity and assessment, and science communication: Brian Nosek (Center for Open Science; University of Virginia), David Allison (Baylor College of Medicine), Kathleen Hall Jamieson (University of Pennsylvania), Marcia McNutt and A. Beau Nielsen (National Academies of Sciences, Engineering, and Medicine), and Susan M. Wolf (University of Minnesota). All of the authors serve on the National Academies’ Strategic Council for Research Excellence, Integrity, and Trust; however, the paper is not an official output of the National Academies.
The framework outlines seven distinct components that contribute to trustworthy research findings: whether research is accountable, evaluable, evaluated, and well-formulated; whether it controls bias and reduces error; and whether it is well-calibrated, with claims matching the evidence. These components are organized across three levels of analysis: features of the research itself, the researchers conducting and evaluating the work, and the organizations facilitating and supporting it, with illustrative examples of indicators at each level. Rather than defining trustworthiness through signals like prestige, reputation, or any single standard or metric, the authors emphasize that it emerges from research behaviors and systems that facilitate dialogue, scrutiny, critique, and cumulative knowledge building.
The paper further clarifies that trustworthiness is not synonymous with correctness. Rather, trustworthy research findings are those that contribute productively to scholarly dialogue about evidence and claims: they are produced and evaluated in ways that make errors detectable and correction possible over time. The authors argue that this capacity to detect and correct errors is what allows scientific knowledge to progress, and that when research findings are not trustworthy, progress becomes harder and slower. To support this self-correcting process, the framework provides shared principles at the researcher and organizational levels, offering common language for researchers, institutions, journals, and funders grappling with how to assess research quality and credibility. The paper also notes that stronger indicators could support clearer communication about trustworthiness for audiences beyond the research community, including journalists, policymakers, and the public.
The authors observe that many existing approaches to research assessment rely on proxies such as journal reputation or citation counts, which can obscure meaningful differences in how research is conducted and evaluated. The proposed framework instead foregrounds observable behaviors and the systems that support them, while acknowledging that translating these principles into valid, scalable, and generalizable indicators remains an ongoing challenge for the research community.