Open science practices are embedded throughout Amélie Godefroidt’s research, which explores public opinion during and after wars, civil conflicts, and terrorist attacks. As a Postdoctoral Researcher and Lecturer at the KU Leuven Centre for Research on Peace and Development in Belgium, she regularly engages with ethically and methodologically complex material and highly sensitive data, and recognizes that such work requires carefully balancing transparency with confidentiality.
Influenced by early positive experiences with pre-analysis plans, Godefroidt is an ardent open science advocate and is committed to sharing how open research practices can create a more robust evidence base, build cumulative knowledge, and support the interpretation of politically consequential findings. As she emphasizes in her teaching, open science—when implemented thoughtfully and responsibly—can strengthen rigor and trust without compromising sensitive information.
At Ghent University, Godefroidt recently designed and led the Open Science in the Social Sciences summer program, which combined hands-on training with candid discussions about the challenges and opportunities of open research practices. In keeping with her open science advocacy, she made the course materials publicly available on OSF.
In this Q&A, Godefroidt shares what inspired her to create the course, the benefits of using the Open Science Framework (OSF) to organize and share research projects, and how open science adoption can strengthen trust and collaboration in the social sciences and beyond.
Q: Can you share a bit about your research focus and interests, and what initially drew you to open science practices?
A: My research focuses on public opinion formation during and after episodes of political violence. I study how people respond—both psychologically and politically—to events such as interstate war, civil conflict, and terrorism, using experiments, panel data, and mixed-method designs. Much of my work is comparative and cross-national, spanning countries such as Azerbaijan, Northern Ireland, Nigeria, Guatemala, Ukraine, and Belgium.
What initially drew me to open science was a strong belief in the fundamental value of science as a public good—and, quite frankly, a growing curiosity during my PhD about why some of my findings didn’t replicate earlier results. Early in my doctoral journey, I took part in (and won!) a pre-analysis plan challenge at an Open Social Science Conference in Europe. That experience, and the conference as a whole, opened my eyes to the transformative potential of open science—not just for improving transparency and reproducibility, but also for strengthening trust in scientific findings and broadening access to knowledge. Since then, I’ve integrated open science principles throughout my research process—from preregistration and transparent reporting to code and data sharing—and I strive to pass these values on by teaching open science practices to the next generation of scholars.
Q: What inspired you to create the Open Science in the Social Sciences course, and how was it structured? What were the key learning outcomes you aimed to achieve?
A: The idea for the course actually started with an enthusiastic PhD student, Lisa Janssen (look her up if you’re interested in excellent work on polarization!), who reached out to ask if I’d be willing to teach a course on open science. Of course, I immediately said yes! I saw it as a great opportunity to support early-career researchers in navigating the often overwhelming landscape of open science, and to help make these practices more concrete and useful in their own work.
The course ran over four days and blended theoretical insights with hands-on application. Each day focused on a different pillar of open science:
Day 1 introduced the replication crisis and unpacked its causes, using real-world case studies and critical readings to illustrate the urgency of more transparent and reproducible research.
Day 2 focused on preregistration and pre-analysis plans, exploring their rationale, key components, and the various formats available—featuring, of course, the OSF. We paid special attention to how preregistration can be adapted to different research designs, including meta-analyses, longitudinal studies, secondary data analyses, and qualitative research—showing that open science is not one-size-fits-all.
Day 3 was about reproducibility and replicability, with a strong focus on how to create good replication packages. We explored a range of tools to support reproducible research workflows, from consistency checks to reproducible reporting tools. A further highlight of Day 3 was our deep dive into power analyses, another key component of reproducibility (a brief illustrative sketch follows this overview).
Day 4 fully embraced the spirit of open science and open education by letting participants choose the topics. They voted for an introduction to multiverse analysis, a hands-on session on R Markdown, a discussion of open data versus privacy concerns, and a primer on open peer review. The multiverse analysis session, in particular, was a hit! (For a more detailed day-by-day recap, check out my Bluesky.)
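To make the power-analysis piece of Day 3 concrete, here is a minimal sketch of the kind of calculation involved. It is not taken from the course materials, and the effect size, significance level, and power target are illustrative assumptions only; it uses Python's statsmodels package to solve for the per-group sample size of a simple two-group experiment.

```python
# Minimal, illustrative power analysis for a two-group experiment,
# assuming a two-sided independent-samples t-test. The effect size,
# alpha, and power values are placeholder assumptions, not figures
# from the course.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for the sample size per group needed to detect a medium
# effect (Cohen's d = 0.5) with 80% power at alpha = .05.
n_per_group = analysis.solve_power(
    effect_size=0.5, alpha=0.05, power=0.8,
    ratio=1.0, alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64
```

The same solve_power call can instead be used to solve for achievable power at a fixed sample size, which is often the more realistic question when studying hard-to-reach populations.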
Throughout the course, I aimed not just to teach the “what” or “why” of open science, but also the “how.” I genuinely hope participants left with concrete tools and confidence to apply open science practices in their own research. More than anything, I wanted them to feel equipped to take their first (or next) steps toward conducting research that is transparent, rigorous, and truly collaborative.
Q: These course materials are now publicly available on OSF. What motivated you to take this approach? What has been the overall response?
A: Haha—that’s an easy one! You can’t teach an Open Science course and then keep your materials behind closed doors, can you?
But more seriously, sharing the course materials openly felt like the natural and necessary thing to do. One of the course’s core messages is that transparency and accessibility strengthen the scientific community. Making the materials publicly available on OSF aligns with that principle and allows others—whether they’re teaching similar courses, organizing workshops, or just getting started with open science—to build on what we’ve done.
The response has been really positive! I received enthusiastic feedback from participants, and I’m now in conversation with FORRT (Framework for Open and Reproducible Research Training) to explore ways to make the course more accessible and visible. I’m also discussing the possibility of offering it again at other institutions.
My overall hope is that others will find the materials useful and feel inspired to adapt the syllabus for their own teaching. It would be incredibly rewarding to see the course contribute to a broader culture of openness—not just among students, but across institutions. So if you’re reading this and thinking about running something similar: please don’t hesitate to reach out! I’m always happy to teach the course at other institutions or to support anyone who wants to use or build on the materials. After all, that’s exactly what open science is about.
Q: How has using the OSF supported your work, both in your research and in developing this course?
A: It has been incredibly useful. OSF has become my central hub for organizing research projects—from preregistration and storing data and code to sharing outputs with collaborators and making my work publicly accessible. I also regularly use OSF as a source of inspiration to improve my pre-analysis plans. Pro tip: look up a preregistration from a colleague whose work you admire. You can learn a lot from seeing how others structure their research transparently and rigorously.
Beyond research, one of the great things about open science is that educational resources on open science are often open, too. As I was preparing this course, I found myself overwhelmed, in the best possible way, by the sheer amount of high-quality material out there.
A special shout-out to FORRT, which has been an amazing source of inspiration. Their curated resources are a goldmine for anyone teaching or learning about open and reproducible science. Finally, I also drew inspiration from several Open Science Days I attended at my institution, KU Leuven, which have been consistently engaging and thought-provoking.
Q: How do you see open science practices benefiting research in fields like yours? Are there unique challenges or opportunities in these domains?
A: Mhmm, that’s an interesting question! These are fields where open science can make a real difference, but also where unique challenges arise. Research on political violence, terrorism, and post-conflict resolution often involves high-stakes, emotionally charged topics, with significant ethical and methodological complexity. Open science practices can help by bringing more transparency to how we design studies, handle sensitive data, and interpret politically consequential findings.
At the same time, we face real challenges: data are often sensitive, access can be limited, and full transparency may endanger participants or field collaborators. I think it’s important to stress here that open science isn’t about sharing everything. It’s about being as transparent as ethically and practically possible. Tools like detailed metadata, synthetic data, and controlled-access repositories can help strike that balance.
Finally, I see huge opportunities to build cumulative knowledge through replication and reanalysis in my field of study—especially replications across national and cultural contexts. Too often, studies in these fields are one-off efforts, and research on terrorism remains largely U.S.-based. Open science practices can help us build more robust, comparative evidence bases that move beyond single cases. This is crucial for understanding what works in complex, high-risk environments.
Q: What advice do you have for other researchers, in your field or otherwise, who may be hesitant about adopting open science practices?
A: First, as I also said to the participants of the summer school: Start small and dare to make mistakes. Open science doesn’t have to be all or nothing. You don’t need to transform your entire workflow overnight. Even a single step—like sharing your code, preregistering the hypotheses of one study, or clearly labeling exploratory analyses in your paper—can already improve the transparency and credibility of your work.
Also, I don’t think I’ve ever written a preregistration without making a mistake, mis-specifying something, or forgetting to include an important detail—and that’s completely normal! No one is perfect. What matters is how you handle it. For example, I always include a “Deviations from the Preregistration” section in my appendix to transparently document any changes.
And yes, yes—relax. You can still have fun and explore your data! Just be open about it. Curiosity isn't a crime, but let’s not pretend our post-hoc findings were part of some grand master plan all along. Again, transparency is the name of the game.
Finally, for researchers in sensitive or complex fields like mine, hesitations are completely understandable. But as I said before, open science is not about sharing everything at all costs. It’s about being as open as you responsibly can. That may mean sharing aggregated data instead of raw data, or posting a detailed description of your sampling and analysis strategy even if you can’t disclose your full dataset.
To me, the most compelling benefit is that open science increases trust both in your findings and in the broader research community. It also makes your work more discoverable and reusable by scholars and policymakers (!). And perhaps most importantly, for me, it makes science feel more like a shared, collaborative endeavor—less about guarding personal ideas and outputs, and more about building knowledge together.