Conservative tests under satisficing models of publication bias.
Abstract
Publication bias leads consumers of research to observe a selected sample of statistical estimates calculated by producers of research. We calculate critical values for statistical significance that could help to adjust after the fact for the distortions created by this selection effect, assuming that the only source of publication bias is file drawer bias. These adjusted critical values are easy to calculate and differ from unadjusted critical values by approximately 50%—rather than rejecting a null hypothesis when the t-ratio exceeds 2, the analysis suggests rejecting a null hypothesis when the t-ratio exceeds 3. Samples of published social science research indicate that on average, across research fields, approximately 30% of published t-statistics fall between the standard and adjusted cutoffs.
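Illustration: the roughly 50% gap between the standard and adjusted cutoffs can be reproduced under the simplest reading of the file drawer assumption, namely that under the null hypothesis only results significant at the nominal level are published. The short Python sketch below is an illustration of that assumption, not the paper's full procedure, and the function name adjusted_cutoff is ours. It solves for the cutoff c* such that P(|Z| > c*) = alpha * P(|Z| > z_{alpha/2}) for a standard normal Z, which restores a conditional (post-selection) rejection rate of alpha.

    # Sketch under the simple file-drawer assumption described above;
    # not the authors' exact derivation.
    from scipy.stats import norm

    def adjusted_cutoff(alpha: float = 0.05) -> float:
        """Two-sided critical value adjusted for file drawer selection."""
        nominal = norm.ppf(1 - alpha / 2)      # e.g. 1.96 for alpha = 0.05
        tail = alpha * 2 * norm.sf(nominal)    # target two-sided tail mass after selection
        return norm.ppf(1 - tail / 2)          # adjusted cutoff c*

    if __name__ == "__main__":
        print(adjusted_cutoff(0.05))           # approximately 3.02

For alpha = 0.05 this gives a cutoff of roughly 3.0, about 50% above the usual 1.96, consistent with the abstract's suggestion of rejecting when the t-ratio exceeds 3 rather than 2.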
Link to resource: https://doi.org/10.1371/journal.pone.0149590
Type of resource(s): Primary Source, Reading, Paper
Education level(s): College / Upper Division (Undergraduates)
Primary user(s): Student
Subject area(s): Applied Science, Social Science
Language(s): English