Replication Network Blog

Welcome to the Replication Network Blog, a collection of guest posts, perspectives, and discussions on replication research, reproducibility, and open science practices.

Browse through our archive of articles covering topics such as replication studies, statistical power, many-analysts designs, researcher degrees of freedom, and open science policy.

The blog features contributions from researchers, statisticians, and practitioners who share their insights and experiences with replication research across various disciplines.


Recent Blog Posts

REED: You Can Calculate Power Retrospectively — Just Don’t Use Observed Power

Tags: GUEST BLOGS, Observed Power, Post-hoc Power, Retrospective Power, SE-ES
In this blog, I highlight a valid approach for calculating power after estimation—often called retrospective power. I provide a Shiny App that lets readers explore how the method works and how it avoids the pitfalls of “observed power” — try it out for yourself! I also link to a webpage where readers can enter any estimate, along with its standard error and degrees of freedom, to calculate the corresponding power.
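The post's Shiny App implements the author's specific method, which is not reproduced here. As a generic illustration of how power depends on an estimate's standard error and degrees of freedom, the sketch below computes the power of a two-sided t-test from a *hypothesized* true effect size using the noncentral t distribution. The function name, the α = 0.05 default, and the choice of a two-sided test are illustrative assumptions, not details from the post; the key point it shares with the post is that the analyst supplies a hypothesized effect rather than plugging in the observed estimate.

```python
from scipy import stats

def retrospective_power(delta, se, df, alpha=0.05):
    """Power of a two-sided t-test for a hypothesized true effect `delta`,
    given the standard error and degrees of freedom from an estimated model.

    Note: setting `delta` to the observed estimate would reproduce the
    flawed "observed power" statistic that the post warns against.
    """
    t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical value
    ncp = delta / se                         # noncentrality parameter
    # P(reject H0) = P(T > t_crit) + P(T < -t_crit), T ~ noncentral t
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

# Example: power to detect a hypothesized effect of 2 SEs with 30 df
print(round(retrospective_power(delta=2.0, se=1.0, df=30), 3))
```

When `delta` is zero the function returns α by construction, and power rises toward 1 as the hypothesized effect grows relative to the standard error, which is the behavior the Shiny App lets readers explore interactively.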

ROODMAN: Appeal to Me – First Trial of a “Replication Opinion”

Tags: GUEST BLOGS, Academic incentives, Comments, Economics, Evidence-based policy, Journal policy, Meta-Science, Open Philanthropy, Peer review, Replications, Truth-seeking
[This blog is a repost of a blog that first appeared at davidroodman.com. It is republished here with permission from the author.] My employer, Open Philanthropy, strives to make grants in light of evidence. Of course, many uncertainties in our decision-making are irreducible. No amount of thumbing through peer-reviewed journals will tell us how great a threat AI will pose decades hence, or whether a group we fund will get a vaccine to market or a bill to the governor’s desk.

AoI*: “Introducing Synchronous Robustness Reports” by Bartos et al. (2025)

Tags: GUEST BLOGS, FAIR data principles, Journal policies, Many-Analysts Approach, Methodological diversity, Publication workflow, Robustness in scientific research, TOP (Transparency and Openness Promotion) guidelines
[*AoI = “Articles of Interest” is a feature of TRN where we report abstracts of recent research related to replication and research integrity.] NOTE: The article is behind a paywall. ABSTRACT (taken from the article) “Most empirical research articles feature a single primary analysis that is conducted by the authors. However, different analysis teams usually adopt different analytical approaches and frequently reach varied conclusions.

AoI*: “The Sources of Researcher Variation in Economics” by Huntington-Klein et al. (2025)

Tags: GUEST BLOGS, Causal Inference, Data Cleaning, Many-Analysts Approach, Research design, Researcher degrees of freedom, Researcher Variation
[*AoI = “Articles of Interest” is a feature of TRN where we report abstracts of recent research related to replication and research integrity.] ABSTRACT (taken from the article) “We use a rigorous three-stage many-analysts design to assess how different researcher decisions—specifically data cleaning, research design, and the interpretation of a policy question—affect the variation in estimated treatment effects.

AoI*: “Same data, different analysts: variation in effect sizes due to analytical decisions in ecology and evolutionary biology” by Gould et al. (2025)

Tags: GUEST BLOGS, Ecology and evolutionary biology, Effect size variation, Many-analyst study, Meta-analysis, Replication crisis, Reproducibility
[*AoI = “Articles of Interest” is a feature of TRN where we report abstracts of recent research related to replication and research integrity.] ABSTRACT (taken from the article) “We [implemented] a large-scale empirical exploration of the variation in effect sizes and model predictions generated by the analytical decisions of different researchers in ecology and evolutionary biology.

AoI*: “Decisions, Decisions, Decisions: An Ethnographic Study of Researcher Discretion in Practice” by van Drimmelen et al. (2024)

Tags: GUEST BLOGS, Ethnographic study, Pre-Analysis plans, Research practice, Researcher discretion
[*AoI = “Articles of Interest” is a feature of TRN where we report abstracts of recent research related to replication and research integrity.] ABSTRACT (taken from the article) “This paper is a study of the decisions that researchers take during the execution of a research plan: their researcher discretion. Flexible research methods are generally seen as undesirable, and many methodologists urge to eliminate these so-called ‘researcher degrees of freedom’ from the research practice.

AoI*: “Open minds, tied hands: Awareness, behavior, and reasoning on open science and irresponsible research behavior” by Wiradhany et al. (2025)

Tags: GUEST BLOGS, Irresponsible Research Behavior (IRB), Open Science Practices (OSP)
[*AoI = “Articles of Interest” is a feature of TRN where we report abstracts of recent research related to replication and research integrity.] ABSTRACT (taken from the article) “Knowledge on Open Science Practices (OSP) has been promoted through responsible conduct of research training and the development of open science infrastructure to combat Irresponsible Research Behavior (IRB).

RÖSELER: Replication Research Symposium and Journal

Tags: GUEST BLOGS, Annotator, Educational materials, Explorer, FORRT, Framework for Open and Reproducible Research Training, Journal policies, Replication Research journal, Replication Research Symposium
Efforts to teach, collect, curate, and guide replication research are culminating in the new diamond open access journal Replication Research, which will launch in late 2025. The Framework for Open and Reproducible Research Training (FORRT; forrt.org) and the Münster Center for Open Science have spearheaded several initiatives to bolster replication research across various disciplines.

AoI*: “Comparing Human-Only, AI-Assisted, and AI-Led Teams on Assessing Research Reproducibility in Quantitative Social Science” by Brodeur et al. (2025)

Tags: GUEST BLOGS, AI, AI-assisted research, AI-led analysis, Artificial Intelligence, Human vs AI collaboration, Quantitative social science, Reproducibility assessment
[*AoI = “Articles of Interest” is a feature of TRN where we report abstracts of recent research related to replication and research integrity.] ABSTRACT (taken from the article) “This study evaluates the effectiveness of varying levels of human and artificial intelligence (AI) integration in reproducibility assessments of quantitative social science research.

AoI*: “Do experimental asset market results replicate? High powered preregistered replications of 17 claims” by Huber et al. (2024)

Tags: GUEST BLOGS, Behavioral Economics, Bubbles, Cognitive Skills, Experimental asset markets, Gender, Replication
[*AoI = “Articles of Interest” is a feature of TRN where we report abstracts of recent research related to replication and research integrity.] ABSTRACT (taken from the article) “Experimental asset markets provide a controlled approach to studying financial markets. We attempt to replicate 17 key results from four prominent studies, collecting new data from 166 markets with 1,544 participants.

Help us improve the FORRT website

We would be grateful if you could complete this survey. Your feedback will directly inform improvements to navigation, accessibility, and content structure.
Note: All answers are anonymous and will help us make the website better for everyone!

Take the Survey