Enhancing Algorithm Selection through Comprehensive Performance Evaluation: Statistical Analysis of Stochastic Algorithms
-
Published:2023-11-16
Issue:11
Volume:11
Page:231
-
ISSN:2079-3197
-
Container-title:Computation
-
language:en
-
Short-container-title:Computation
Author:
Azad Arif Hama Amin 1, Aso M. Aladdin 2, Dler O. Hasan 2, Soran R. Mohammed-Taha 2, Tarik A. Rashid 3
Affiliation:
1. Department of Financial Accounting and Auditing, College of Commerce, University of Sulaimani, Sulaymaniyah 46001, Iraq
2. Computer Science Department, College of Science, Charmo University, Kurdistan Region, Chamchamal 46023, Iraq
3. Computer Science and Engineering Department, University of Kurdistan Hewler, Erbil 44001, Iraq
Abstract
Analyzing stochastic algorithms for comprehensive performance comparison across diverse contexts is essential. By evaluating and adjusting algorithm effectiveness across a wide spectrum of test functions, including both classical benchmarks and the CEC-C06 2019 benchmark functions, distinct performance patterns emerge in specific situations, underscoring the importance of choosing algorithms contextually. Additionally, researchers often encounter a critical issue when they apply a statistical model arbitrarily to determine significance values without first conducting studies to select a model suited to evaluating performance outcomes. To address this concern, this study employs rigorous statistical testing to highlight substantial performance variations between pairs of algorithms, emphasizing the pivotal role of statistical significance in comparative analysis. It also yields valuable insights into the suitability of algorithms for various optimization challenges, providing practitioners with the information needed to make informed decisions. This is achieved by pinpointing algorithm pairs with favorable statistical distributions, facilitating practical algorithm selection. The study encompasses multiple statistical hypothesis tests, including the nonparametric Wilcoxon rank-sum test as well as single-factor (one-way) and two-factor ANOVA tests. This thorough evaluation enhances our grasp of algorithm performance across various evaluation criteria. Notably, the research addresses discrepancies in previous statistical test findings in algorithm comparisons, improving the reliability of results for later research. The results show that significance outcomes differ across tests, as seen in examples such as LEO versus the FDO and the DA versus the WOA. This highlights the need to tailor test models to specific scenarios, as p-value outcomes differ among the various tests applied to the same algorithm pair.
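The abstract names three test families (Wilcoxon rank-sum, single-factor ANOVA, two-factor ANOVA) applied to paired algorithm results. The following is a minimal sketch, not the authors' code: the algorithm labels (LEO, FDO), benchmark labels (F1, F2), and run values are hypothetical placeholders standing in for real experiment data, and the tests are taken from scipy and statsmodels.

```python
# Minimal sketch (hypothetical data): comparing two stochastic optimizers with the
# tests named in the abstract. All sample values below are illustrative only.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(42)

# Placeholder best-fitness values from 30 independent runs of two algorithms
# on the same benchmark function (stand-ins for real experimental results).
leo_runs = rng.normal(loc=1.2e-3, scale=4e-4, size=30)
fdo_runs = rng.normal(loc=2.0e-3, scale=6e-4, size=30)

# Wilcoxon rank-sum test: nonparametric pairwise comparison of the two samples.
w_stat, w_p = stats.ranksums(leo_runs, fdo_runs)
print(f"Wilcoxon rank-sum: statistic={w_stat:.3f}, p={w_p:.4f}")

# Single-factor (one-way) ANOVA over the same two groups.
f_stat, f_p = stats.f_oneway(leo_runs, fdo_runs)
print(f"One-way ANOVA:     F={f_stat:.3f}, p={f_p:.4f}")

# Two-factor ANOVA: algorithm and benchmark function as crossed factors,
# with a second (hypothetical) function's runs stacked into long format.
df = pd.DataFrame({
    "fitness": np.concatenate([leo_runs, fdo_runs,
                               rng.normal(5e-2, 1e-2, 30),
                               rng.normal(6e-2, 1e-2, 30)]),
    "algorithm": ["LEO"] * 30 + ["FDO"] * 30 + ["LEO"] * 30 + ["FDO"] * 30,
    "function": ["F1"] * 60 + ["F2"] * 60,
})
model = ols("fitness ~ C(algorithm) + C(function) + C(algorithm):C(function)",
            data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

Running the three tests on the same pair of samples illustrates the paper's point: each test can return a different p-value, so the choice of statistical model affects whether a difference is declared significant.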
Subject
Applied Mathematics, Modeling and Simulation, General Computer Science, Theoretical Computer Science
Cited by: 2 articles