Significance level and positivity bias as causes for high rate of non-reproducible scientific results?
The high fraction of published results that turn out to be incorrect is a major concern of today's science. This paper contributes to the understanding of this problem in two independent directions. First, Johnson's recent claim that hypothesis testing with a significance level of 0.05 can alone lead to an unacceptably large proportion of false positives among all results is shown to be unfounded. Second, a way to quantify the effect of "positivity bias" (the tendency to consider only positive results as worthwhile) is introduced. We estimate the proportion of false positives among positive results in terms of the significance level used and the positivity ratio. The latter quantity is the fraction of positive results over all results, be they positive or not, published or not. In particular, if one uses a significance level of 0.05, and produces 4 (possibly unpublished) negative results for every positive result, then the proportion of false positives among positive results can climb to a high 21%.
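The abstract's 21% figure can be recovered from a simple worst-case calculation (a sketch consistent with the quoted numbers, not necessarily the paper's exact derivation): assume every negative result comes from a test of a true null hypothesis, so with significance level α and positivity ratio r, the fraction of false positives among positive results is bounded by α(1−r)/((1−α)r).

```python
def false_positive_bound(alpha, positivity_ratio):
    """Worst-case fraction of false positives among positive results.

    Assumption (illustrative, not from the paper's text): all negatives are
    non-rejections of true nulls. With T true-null tests, P positives, and
    N total tests, false positives = alpha*T and (1-alpha)*T <= N - P,
    giving FP/P <= alpha*(1-r) / ((1-alpha)*r), where r = P/N.
    """
    r = positivity_ratio
    return alpha * (1 - r) / ((1 - alpha) * r)

# Significance level 0.05 and four negatives per positive (r = 1/5):
print(round(false_positive_bound(0.05, 0.2), 3))  # → 0.211, the ~21% in the abstract
```

Note how the bound grows as the positivity ratio shrinks: the rarer positive results are among all results produced, the larger the share of them that can be false positives at a fixed significance level.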