Affiliation:
1. University of Catania, IT
Abstract
This paper is an introductory overview of misinformative and fraudulent statistical inference, in light of recent attempts to reform the social sciences. The manuscript focuses on the concept of replicability, that is, the likelihood that a scientific result will be reached independently by two different sources. Replication studies are often ignored, and most scientific attention goes to papers presenting theoretical novelties. As a result, replicability turns out to be uncorrelated with bibliometric performance, which often reflects only the popularity of a theory, not its validity. These topics are illustrated through two case studies of very popular theories. Statistical errors and questionable research practices are then examined, in particular the consequences of omitting inconclusive results from a paper, or 'p-hacking'. Among the remedies, the practice of preregistration is presented, along with attempts to reform peer review around it. Finally, multiversal theory and methods are discussed as a tool for measuring the sensitivity of a scientific theory to misinformation and disinformation.
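The inflation caused by omitting inconclusive results can be illustrated with a minimal simulation sketch, assuming NumPy and SciPy are available; the number of studies, outcomes, and subjects below are arbitrary choices for illustration, not values from the paper. Each simulated study measures several outcomes under a true null, and the "p-hacked" paper reports a finding whenever any one of them crosses p < 0.05.

```python
# Minimal sketch (assumes NumPy and SciPy; all parameters are arbitrary)
# of how reporting only the "significant" outcome among several null
# tests inflates the false-positive rate above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_studies = 10_000   # simulated studies, all with no true effect
n_outcomes = 5       # outcomes measured per study (all null)
n_subjects = 30      # subjects per group
alpha = 0.05

false_positives = 0
for _ in range(n_studies):
    # Two groups drawn from the same distribution: the null is true.
    a = rng.normal(size=(n_outcomes, n_subjects))
    b = rng.normal(size=(n_outcomes, n_subjects))
    pvals = stats.ttest_ind(a, b, axis=1).pvalue
    # The p-hacked paper reports a result if ANY outcome is significant,
    # silently omitting the inconclusive tests.
    if (pvals < alpha).any():
        false_positives += 1

print(f"Nominal rate: {alpha:.0%}")
print(f"Observed rate with selective reporting: {false_positives / n_studies:.1%}")
# With 5 independent null outcomes this approaches 1 - 0.95**5, about 22.6%.
```

Under these assumptions, selective reporting more than quadruples the advertised error rate, which is the consequence of p-hacking that the paper discusses.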
Publisher
Firenze University Press and Genova University Press