In this blog, I highlight a valid approach for calculating power after estimation, often called retrospective power. I provide a Shiny App that lets readers explore how the method works and how it avoids the pitfalls of "observed power"; try it out for yourself! I also link to a webpage where readers can enter any estimate, along with its standard error and degrees of freedom, to calculate the corresponding power.
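To make the calculation concrete, here is a minimal sketch in Python of one standard way to compute two-sided power from an estimate, its standard error, and degrees of freedom, using the noncentral t distribution. The function name, the default significance level, and the example numbers are illustrative assumptions of mine; the Shiny App and the linked calculator may implement the details differently.

```python
# Minimal sketch: power of a two-sided t-test for a hypothesized true effect,
# given the standard error and degrees of freedom of the estimate.
# Illustrative only; not necessarily the exact method used by the app.
from scipy import stats

def power_from_estimate(effect, se, df, alpha=0.05):
    t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical value
    nc = effect / se                         # noncentrality parameter
    # Probability that the t statistic lands in the rejection region
    return stats.nct.sf(t_crit, df, nc) + stats.nct.cdf(-t_crit, df, nc)

# Example: an estimate of 0.8 with standard error 0.5 and 30 degrees of freedom
print(round(power_from_estimate(0.8, 0.5, 30), 3))
```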
[This blog is a repost of a post that first appeared at davidroodman.com. It is republished here with permission from the author.]
My employer, Open Philanthropy, strives to make grants in light of evidence. Of course, many uncertainties in our decision-making are irreducible. No amount of thumbing through peer-reviewed journals will tell us how great a threat AI will pose decades hence, or whether a group we fund will get a vaccine to market or a bill to the governor's desk.
[*AoI = "Articles of Interest" is a feature of TRN where we report abstracts of recent research related to replication and research integrity.]
NOTE: The article is behind a paywall.
ABSTRACT (taken from the article)
"Most empirical research articles feature a single primary analysis that is conducted by the authors. However, different analysis teams usually adopt different analytical approaches and frequently reach varied conclusions.
[*AoI = "Articles of Interest" is a feature of TRN where we report abstracts of recent research related to replication and research integrity.]
ABSTRACT (taken from the article)
"We use a rigorous three-stage many-analysts design to assess how different researcher decisions (specifically data cleaning, research design, and the interpretation of a policy question) affect the variation in estimated treatment effects.
[*AoI = "Articles of Interest" is a feature of TRN where we report abstracts of recent research related to replication and research integrity.]
ABSTRACT (taken from the article)
"We [implemented] a large-scale empirical exploration of the variation in effect sizes and model predictions generated by the analytical decisions of different researchers in ecology and evolutionary biology.
[*AoI = "Articles of Interest" is a feature of TRN where we report abstracts of recent research related to replication and research integrity.]
ABSTRACT (taken from the article)
"This paper is a study of the decisions that researchers take during the execution of a research plan: their researcher discretion. Flexible research methods are generally seen as undesirable, and many methodologists urge to eliminate these so-called "researcher degrees of freedom" from the research practice.
[*AoI = "Articles of Interest" is a feature of TRN where we report abstracts of recent research related to replication and research integrity.]
ABSTRACT (taken from the article)
"Knowledge on Open Science Practices (OSP) has been promoted through responsible conduct of research training and the development of open science infrastructure to combat Irresponsible Research Behavior (IRB).
Efforts to teach, collect, curate, and guide replication research are culminating in the new diamond open access journal Replication Research, which will launch in late 2025. The Framework for Open and Reproducible Research Training (FORRT; forrt.org) and the Münster Center for Open Science have spearheaded several initiatives to bolster replication research across various disciplines.
[*AoI = "Articles of Interest" is a feature of TRN where we report abstracts of recent research related to replication and research integrity.]
ABSTRACT (taken from the article)
"This study evaluates the effectiveness of varying levels of human and artificial intelligence (AI) integration in reproducibility assessments of quantitative social science research.
[*AoI = "Articles of Interest" is a feature of TRN where we report abstracts of recent research related to replication and research integrity.]
ABSTRACT (taken from the article)
"Experimental asset markets provide a controlled approach to studying financial markets. We attempt to replicate 17 key results from four prominent studies, collecting new data from 166 markets with 1,544 participants.