Transfer Reinforcement Learning for Combinatorial Optimization Problems

Authors:

Gleice Kelly Barbosa Souza 1, Samara Oliveira Silva Santos 2, André Luiz Carvalho Ottoni 1, Marcos Santos Oliveira 3, Daniela Carine Ramires Oliveira 3, Erivelton Geraldo Nepomuceno 4

Affiliation:

1. Technological and Exact Center, Federal University of Recôncavo da Bahia, R. Rui Barbosa, Cruz das Almas 44380-000, Bahia, Brazil

2. Hamilton Institute, Maynooth University, W23 VP22 Maynooth, Co. Kildare, Ireland

3. Department of Mathematics and Statistics, Federal University of São João del-Rei, Praça Frei Orlando, São João del Rei 36309-034, Minas Gerais, Brazil

4. Centre for Ocean Energy Research, Department of Electronic Engineering, Maynooth University, W23 VP22 Maynooth, Co. Kildare, Ireland

Abstract

Reinforcement learning (RL) is an important technique in many fields, particularly within automated machine learning for reinforcement learning (AutoRL). The integration of transfer learning (TL) with AutoRL in combinatorial optimization is an area that requires further research. This paper employs both AutoRL and TL to tackle combinatorial optimization problems, specifically the asymmetric traveling salesman problem (ATSP) and the sequential ordering problem (SOP). A statistical analysis was conducted to assess the impact of TL on these problems. Furthermore, the Auto_TL_RL algorithm is introduced as a novel contribution that combines the AutoRL and TL methodologies. Empirical findings support the effectiveness of this integration: the solutions obtained were significantly more efficient than those of conventional techniques, with an 85.7% improvement in the preliminary analysis, and computational time was reduced in 13 of the 14 simulated instances (92.8%). The TL-integrated model showed superior convergence toward the known optimal benchmarks. The design of the Auto_TL_RL algorithm allows for smooth transitions between the ATSP and SOP domains. In a comprehensive evaluation, Auto_TL_RL significantly outperformed traditional methodologies in 78% of the instances analyzed.
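To make the transfer idea concrete, the sketch below shows a minimal tabular Q-learning setup for a TSP-like tour-construction task in which a Q-table learned on a source instance is reused as a warm start on a target instance. This is only an illustration of the general warm-start mechanism: the function names, hyperparameters, random distance matrices, and the crop/zero-pad transfer rule are assumptions for the example and are not taken from the Auto_TL_RL algorithm itself.

```python
# Minimal sketch of Q-table transfer between two tabular Q-learning runs on
# TSP-like instances. All names and parameters here are illustrative
# assumptions, not the paper's implementation.
import numpy as np

def train_q_learning(dist, q=None, episodes=500, alpha=0.1, gamma=0.9, eps=0.1, rng=None):
    """Tabular Q-learning on a fully connected tour-construction task."""
    rng = rng or np.random.default_rng(0)
    n = dist.shape[0]
    if q is None:                      # cold start: zero-initialized Q-table
        q = np.zeros((n, n))
    best_cost = np.inf
    for _ in range(episodes):
        start = rng.integers(n)
        visited, city, cost = {start}, start, 0.0
        while len(visited) < n:
            candidates = [c for c in range(n) if c not in visited]
            if rng.random() < eps:     # epsilon-greedy exploration
                nxt = rng.choice(candidates)
            else:
                nxt = max(candidates, key=lambda c: q[city, c])
            reward = -dist[city, nxt]  # shorter edges give higher reward
            future = max(q[nxt, c] for c in candidates if c != nxt) if len(candidates) > 1 else 0.0
            q[city, nxt] += alpha * (reward + gamma * future - q[city, nxt])
            cost += dist[city, nxt]
            visited.add(nxt)
            city = nxt
        cost += dist[city, start]      # close the tour
        best_cost = min(best_cost, cost)
    return q, best_cost

def transfer_q_table(q_source, n_target):
    """Reuse a learned Q-table as a warm start for a target instance,
    cropping or zero-padding to match the target size (one assumed way
    of mapping knowledge across instances)."""
    q_target = np.zeros((n_target, n_target))
    k = min(q_source.shape[0], n_target)
    q_target[:k, :k] = q_source[:k, :k]
    return q_target

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    source = rng.uniform(1, 100, (20, 20))   # asymmetric distance matrix (ATSP-like)
    target = rng.uniform(1, 100, (20, 20))
    q_src, _ = train_q_learning(source, rng=rng)
    _, cold = train_q_learning(target, rng=np.random.default_rng(1))
    _, warm = train_q_learning(target, q=transfer_q_table(q_src, 20), rng=np.random.default_rng(1))
    print(f"cold-start best tour cost: {cold:.1f}")
    print(f"warm-start (TL) best tour cost: {warm:.1f}")
```

The cold-start and warm-start runs share the same exploration seed so that any difference in the best tour cost reflects the transferred initialization rather than random chance; in the paper's setting, the same warm-start principle would additionally interact with the AutoRL hyperparameter selection.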

Funder

Science Foundation Ireland

Brazilian Research Agencies: CNPq/INERGE

CNPq

FAPEMIG

Publisher

MDPI AG

