[*AoI = “Articles of Interest” is a feature of TRN where we report abstracts of recent research related to replication and research integrity.]
ABSTRACT (taken from the article)
“A scientific discovery in empirical research, e.g., establishing a causal relationship between two variables, is typically based on rejecting a statistical null hypothesis of no relationship.
[Note: This blog is based on our articles “Blinding Us to the Obvious? The Effect of Statistical Training on the Evaluation of Evidence” (Management Science, 2016) and “Statistical Significance and the Dichotomization of Evidence” (Journal of the American Statistical Association, 2017).]

Introduction

The null hypothesis significance testing (NHST) paradigm is the dominant statistical paradigm in the biomedical and social sciences.
[NOTE: This is a repost of a blog that Andrew Gelman wrote for the blog Statistical Modeling, Causal Inference, and Social Science.] Blake McShane and David Gal recently wrote two articles (“Blinding us to the obvious? The effect of statistical training on the evaluation of evidence” and “Statistical significance and the dichotomization of evidence”) on the misunderstandings of p-values that are common even among supposed experts in statistics and applied social research.
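The dichotomization problem the articles describe can be seen with a small numeric sketch (the effect sizes and standard errors below are illustrative assumptions, not figures from the articles): two studies with nearly identical evidence can land on opposite sides of the conventional 0.05 threshold.

```python
import math

def two_sided_p(effect, se):
    """Two-sided p-value for a z-test of H0: effect = 0."""
    z = effect / se
    # P(|Z| > |z|) for a standard normal Z
    return math.erfc(abs(z) / math.sqrt(2))

# Two hypothetical studies with essentially the same estimated effect;
# their evidence differs only trivially, yet NHST labels them differently.
p_a = two_sided_p(0.20, 0.102)  # just under 0.05: "significant"
p_b = two_sided_p(0.20, 0.103)  # just over 0.05: "not significant"
```

Treating p = 0.049 and p = 0.052 as categorically different conclusions, rather than as nearly identical continuous evidence, is exactly the kind of dichotomous reasoning the two articles critique.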
NOTE: This entry is based on the article “There’s More Than One Way to Conduct a Replication Study: Beyond Statistical Significance” (Psychological Methods, 2016, Vol. 21, No. 1, 1-12). Following a large-scale replication project in economics (Chang & Li, 2015) that successfully replicated only a third of 67 studies, a recent headline boldly reads, “The replication crisis has engulfed economics” (Ortman, 2015).