Replication researchers cite inflated effect sizes as a major cause of replication failure, and this turns out to be an inevitable consequence of significance testing. The reason is simple: the p-value from a study depends on the observed effect size, with more extreme observed effects yielding smaller p-values, while the true effect size plays no role. Selecting results for significance therefore selects for large observed effects, so when the true effect is modest, the studies that reach significance systematically overestimate it.
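To see the selection effect in action, here is a minimal Python simulation. It is an illustrative sketch, not from the original text: the true effect of 0.2, the group size of 50, and the 0.05 threshold are all assumed values.

```python
# Sketch: how selecting on p < 0.05 inflates observed effect sizes.
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.2     # true standardized mean difference (assumed)
n_per_group = 50      # observations per arm (assumed)
n_studies = 10_000    # number of simulated studies

all_effects, significant_effects = [], []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    observed = treated.mean() - control.mean()  # pooled sd is ~1 here
    all_effects.append(observed)
    if p < 0.05:
        significant_effects.append(observed)

print(f"true effect:                   {true_effect:.2f}")
print(f"mean observed (all studies):   {np.mean(all_effects):.2f}")
print(f"mean observed (p < 0.05 only): {np.mean(significant_effects):.2f}")
```

With these settings the test is underpowered, so only a minority of simulated studies reach significance, and the mean observed effect among that significant minority comes out well above the true 0.2, which is exactly the inflation the replication literature describes.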
Replication is an important topic in economic research, and in any social science for that matter. The issue matters most when an analysis is undertaken to inform policymakers' decisions. Drawing inferences from null or insignificant findings is particularly problematic because it is often unclear when “not significant” can be interpreted as “no effect.”
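As a sketch of why that inference fails, the following snippet works through made-up summary numbers (an observed effect of 0.25 with standard error 0.18, both assumptions, not figures from the text) and shows a nonsignificant result whose confidence interval is still compatible with a policy-relevant effect:

```python
# Illustrative numbers only: a nonsignificant result whose confidence
# interval covers zero *and* effects large enough to matter for policy.
from scipy import stats

diff = 0.25   # observed effect (assumed)
se = 0.18     # its standard error (assumed)

z = diff / se
p = 2 * (1 - stats.norm.cdf(abs(z)))       # two-sided p-value
ci = (diff - 1.96 * se, diff + 1.96 * se)  # 95% confidence interval

print(f"p = {p:.3f}")                           # ~0.165: not significant
print(f"95% CI: [{ci[0]:.2f}, {ci[1]:.2f}]")    # ~[-0.10, 0.60]
# The interval includes 0 (no effect) but also effects as large as ~0.6,
# so the data do not license the conclusion "no effect".
```

The point of the sketch is that significance testing alone cannot distinguish "evidence of no effect" from "no evidence of an effect"; the interval estimate makes the ambiguity visible.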