Sample size calculations for the experimental comparison of multiple algorithms on multiple problem instances

Authors:

Felipe Campelo, Elizabeth F. Wanner

Abstract

This work presents a statistically principled method for estimating the required number of instances in the experimental comparison of multiple algorithms on a given problem class of interest. This approach generalises earlier results by allowing researchers to design experiments based on the desired best, worst, mean or median-case statistical power to detect differences between algorithms larger than a certain threshold. Holm's step-down procedure is used to maintain the overall significance level controlled at desired levels, without resulting in overly conservative experiments. This paper also presents an approach for sampling each algorithm on each instance, based on optimal sample size ratios that minimise the total required number of runs subject to a desired accuracy in the estimation of paired differences. A case study investigating the effect of 21 variants of a custom-tailored Simulated Annealing for a class of scheduling problems is used to illustrate the application of the proposed methods for sample size calculations in the experimental comparison of algorithms.
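The abstract notes that Holm's step-down procedure keeps the overall significance level controlled without the conservatism of a plain Bonferroni correction. As a minimal illustrative sketch (not the authors' implementation), the procedure sorts the p-values from the pairwise comparisons in ascending order and tests each against a progressively more lenient threshold, stopping at the first failure:

```python
def holm_rejections(p_values, alpha=0.05):
    """Holm's step-down procedure.

    Returns the indices of the rejected hypotheses, controlling the
    family-wise error rate at level alpha.
    """
    m = len(p_values)
    # Indices of p-values sorted in ascending order.
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = []
    for rank, idx in enumerate(order):
        # Step-down threshold: alpha/m for the smallest p-value,
        # alpha/(m-1) for the next, and so on.
        if p_values[idx] <= alpha / (m - rank):
            rejected.append(idx)
        else:
            break  # Stop at the first non-rejection; retain the rest.
    return rejected
```

For example, with p-values `[0.01, 0.04, 0.03, 0.005]` at `alpha = 0.05`, the smallest p-value (0.005) is compared against 0.05/4 = 0.0125 and rejected, the next (0.01) against 0.05/3 ≈ 0.0167 and rejected, and the third (0.03) fails against 0.05/2 = 0.025, so testing stops there.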

Funders

Conselho Nacional de Desenvolvimento Científico e Tecnológico

Fundação de Amparo à Pesquisa do Estado de Minas Gerais

Leverhulme Trust

Publisher

Springer Science and Business Media LLC

Subjects

Artificial Intelligence, Management Science and Operations Research, Control and Optimization, Computer Networks and Communications, Information Systems, Software

References (60 articles; first 5 shown)

1. Barr, R.S., Golden, B.L., Kelly, J.P., Resende, M.G.C., Stewart, W.R.: Designing and reporting on computational experiments with heuristic methods. J. Heuristics 1(1), 9–32 (1995)

2. Bartroff, J., Lai, T., Shih, M.C.: Sequential Experimentation in Clinical Trials: Design and Analysis. Springer, New York (2013)

3. Bartz-Beielstein, T.: New Experimentalism Applied to Evolutionary Computation. Ph.D. thesis, Universität Dortmund, Germany (2005)

4. Bartz-Beielstein, T.: Experimental Research in Evolutionary Computation. Springer, New York (2006)

5. Bartz-Beielstein, T.: How to create generalizable results. In: Kacprzyk, J., Pedrycz, W. (eds.) Handbook of Computational Intelligence. Springer, New York (2015)

Cited by 8 articles.

1. A Literature Review and Critical Analysis of Metaheuristics Recently Developed;Archives of Computational Methods in Engineering;2023-07-22

2. Lessons from the Evolutionary Computation Bestiary;Artificial Life;2023

3. Component-wise analysis of automatically designed multiobjective algorithms on constrained problems;Proceedings of the Genetic and Evolutionary Computation Conference;2022-07-08

4. Analyzing the impact of undersampling on the benchmarking and configuration of evolutionary algorithms;Proceedings of the Genetic and Evolutionary Computation Conference;2022-07-08

5. An Improved Intelligent Auction Mechanism for Emergency Material Delivery;Mathematics;2022-06-23
