Affiliation:
1. Software Institute, USI Università della Svizzera italiana, Lugano, Switzerland
2. University of Gothenburg, Sweden and Stellenbosch Institute for Advanced Study (STIAS), South Africa
3. Chalmers and University of Gothenburg, Göteborg, Sweden
Abstract
Statistical analysis is the tool of choice to turn data into information and then information into empirical knowledge. However, the process that goes from data to knowledge is long, uncertain, and riddled with pitfalls. To be valid, it should be supported by detailed, rigorous guidelines that help ferret out issues with the data or model and lead to qualified results that strike a reasonable balance between generality and practical relevance. Such guidelines are being developed by statisticians to support the latest techniques for Bayesian data analysis. In this article, we frame these guidelines in a way that is apt to empirical research in software engineering.
To demonstrate the guidelines in practice, we apply them to reanalyze a GitHub dataset about code quality in different programming languages. The dataset's original analysis [Ray et al. 55] and a critical reanalysis [Berger et al. 6] have attracted considerable attention, in no small part because they target a topic (the impact of different programming languages) on which strong opinions abound. The goals of our reanalysis are largely orthogonal to this previous work, as we are concerned with demonstrating, on data in an interesting domain, how to build a principled Bayesian data analysis and to showcase its benefits. In the process, we will also shed light on some critical aspects of the analyzed data and of the relationship between programming languages and code quality, such as the impact of project-specific characteristics other than the used programming language.
The high-level conclusions of our exercise are that Bayesian statistical techniques can be applied to analyze software engineering data in a way that is principled and flexible, and that leads to convincing results that inform the state of the art while highlighting the boundaries of its validity. The guidelines can support building solid statistical analyses and connecting their results; thus, they can help buttress continued progress in empirical software engineering research.
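As a toy illustration of the kind of analysis the abstract describes (this is a deliberately simplified sketch, not the paper's actual model, which involves richer regression structure), a Bayesian comparison of bug-fix rates across languages can start from a conjugate Beta-Binomial update; the counts below are hypothetical:

```python
from math import sqrt

def beta_binomial_update(alpha, beta, bug_commits, total_commits):
    """Conjugate update: a Beta(alpha, beta) prior on the bug-fix rate
    combined with a Binomial likelihood for the observed bug-fixing
    commits yields a Beta posterior; return its mean and std. dev."""
    a = alpha + bug_commits
    b = beta + (total_commits - bug_commits)
    mean = a / (a + b)
    sd = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

# Hypothetical per-language commit counts, for illustration only.
for lang, bugs, total in [("C++", 320, 1000), ("Haskell", 240, 1000)]:
    mean, sd = beta_binomial_update(1, 1, bugs, total)
    print(f"{lang}: posterior bug-fix rate {mean:.3f} (sd {sd:.3f})")
```

A full analysis along the guidelines would instead model commit counts with a regression that includes project-specific effects (as the abstract notes), check the priors and the model fit, and report posterior uncertainty rather than point estimates.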
Publisher
Association for Computing Machinery (ACM)
Cited by
8 articles.
1. Not all requirements prioritization criteria are equal at all times: A quantitative analysis;Journal of Systems and Software;2024-03
2. Dynamic Prediction of Delays in Software Projects using Delay Patterns and Bayesian Modeling;Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering;2023-11-30
3. A study of documentation for software architecture;Empirical Software Engineering;2023-09
4. Performance Analysis with Bayesian Inference;2023 IEEE/ACM 45th International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER);2023-05
5. Autonomy Is An Acquired Taste: Exploring Developer Preferences for GitHub Bots;2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE);2023-05