At a time when transparency and trust in research are more important than ever, the Open Science Framework (OSF) enables researchers to share the plans, data, materials, code, outcomes, and reports of their research. For example, on OSF Preprints, researchers can share papers prior to peer review to accelerate feedback and foster open exchange of ideas.
With the emergence of generative AI, it has become easy to create content that looks like a genuine research paper but is not genuine research.
This is a problem for journals, and especially for free services, like OSF, that lower barriers to openly sharing research. An increasing tide of AI-generated content could overwhelm these services and make it very difficult for readers to distinguish between legitimate and illegitimate content. While many researchers now use AI tools to improve clarity or summarize data, we are particularly concerned about submissions that appear to fabricate methods, results, or authorship altogether. This not only erodes trust in the reliability of what is openly shared, but also makes it harder for researchers and the public to navigate open platforms and identify credible work.
On the OSF, we’re seeing a noticeable rise in submitted papers that appear to be generated or heavily assisted by AI tools. While OSF does not perform editorial review or assess the quality of content, we do moderate for spam and other clear violations of platform use. Our current systems don’t yet include AI content detection, but this trend is prompting us to evaluate how our infrastructure can better support trustworthy open research by deterring misuse and helping researchers share AI-assisted work in transparent and responsible ways.
We are actively analyzing patterns in AI-generated preprint submissions to better understand the scope of the issue and its potential impact. We are considering a range of options, including new ways to detect unusual user behaviors, adding new steps to the submission process to deter low-quality content, developing new content policies, and adjusting how and when content becomes publicly visible. Each approach comes with tradeoffs in effectiveness, transparency, and the resources needed to implement and sustain it. At the same time, we are engaging with other services and publishers to share insights, learn from existing and emerging mitigation strategies, and explore opportunities to collaborate on shared infrastructure and policy development. Finally, we are exploring opportunities to secure resources for developing innovative solutions that maintain low friction for sharing legitimate content while increasing friction for illegitimate content. Effective technical, workflow, and social innovations could have an enormous positive impact across the community of infrastructures supporting open research. These efforts will help ensure that OSF continues to support trustworthy research practice as new challenges emerge.
AI presents both opportunities and challenges for open science. When used responsibly, it can support open models and data, accelerate discovery, and aid in the evaluation of research. But it can also undermine credibility when used to plagiarize, fabricate findings, or mislead readers.
OSF was designed to make open scholarship possible and easy to adopt, while upholding the values of transparency, credibility, and accountability. As we navigate the evolving role of AI in research, we remain focused on strengthening the infrastructure that supports responsible research sharing and helping communities embed open practices that foster rigor and trust. We’ll continue to share updates as our approach develops and welcome input and collaboration. If you are encountering similar challenges at your organization—or have ideas for responsible AI use in open research—we invite you to share your feedback.