Bandits atop Reinforcement Learning: Tackling Online Inventory Models with Cyclic Demands

Authors:

Gong, Xiao-Yue (1); Simchi-Levi, David (2)

Affiliation:

1. Tepper School of Business, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213;

2. Institute for Data, Systems, and Society, Department of Civil and Environmental Engineering and Operations Research Center, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139

Abstract

Motivated by a long-standing gap between inventory theory and practice, we study online inventory models with unknown cyclic demand distributions. We design provably efficient reinforcement learning (RL) algorithms that exploit the structure of inventory problems to achieve optimal theoretical guarantees surpassing existing results. We adopt the standard performance measure from the online learning literature, regret, defined as the difference between the total expected cost of our policy and that of the clairvoyant optimal policy with full a priori knowledge of the demand distributions. In the presence of unknown cyclic demands, this paper analyzes both the lost-sales model with zero lead time and the multiproduct backlogging model with positive lead times, fixed joint-ordering costs, and order limits. For both models, we first introduce episodic versions in which inventory is discarded at the end of every cycle, and then build on these results to analyze the nondiscarding models. Our RL policies HQL and FQL achieve [Formula: see text] regret for the episodic lost-sales model and the episodic multiproduct backlogging model, matching the regret lower bound that we prove in this paper. For the nondiscarding models, we construct a bandit learning algorithm, Meta-HQL, that governs multiple copies of the previous RL algorithms. Meta-HQL achieves [Formula: see text] regret for the nondiscarding lost-sales model with zero lead time, again matching the regret lower bound. For the nondiscarding multiproduct backlogging model, our policy Mimic-QL achieves [Formula: see text] regret. Our policies remove the regret's dependence on the cardinality of the state-action space for inventory problems, an improvement over existing RL algorithms.

We conducted experiments with a real sales data set from Rossmann, one of the largest drugstore chains in Europe, and with a synthetic data set. In both sets of experiments, our policy converges rapidly to the optimal policy and dramatically outperforms the best policy that models demand as independent and identically distributed rather than cyclic.

This paper was accepted by J. George Shanthikumar, data science.

Funding: X.-Y. Gong was partially supported by an Accenture Fellowship. The work of X.-Y. Gong and D. Simchi-Levi was partially supported by the MIT Data Science Lab.

Supplemental Material: The data and online appendices are available at https://doi.org/10.1287/mnsc.2023.4947.
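The regret notion above can be illustrated with a toy simulation. This sketch is not the paper's HQL/FQL or Meta-HQL algorithms; it only shows, for a hypothetical single-product lost-sales newsvendor with zero lead time and cyclic Poisson demand, how a clairvoyant per-phase critical-fractile policy beats the best fixed order computed from the pooled mean, i.e., a policy that treats demand as i.i.d. All parameters (costs `h`, `p`, the cycle means, the horizon `T`) are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cost parameters: holding cost h, lost-sales penalty p.
h, p = 1.0, 4.0
cycle_means = [5.0, 20.0, 10.0]  # Poisson demand means over a cycle of length 3
T = 3000                         # horizon (number of periods)

def poisson_quantile(mean, q):
    # Smallest k with P(D <= k) >= q, via the Poisson pmf recursion.
    k, pmf = 0, np.exp(-mean)
    cdf = pmf
    while cdf < q:
        k += 1
        pmf *= mean / k
        cdf += pmf
    return k

def cost(order, demand):
    # Per-period cost: holding on leftover stock plus penalty on lost sales.
    return h * max(order - demand, 0) + p * max(demand - order, 0)

frac = p / (p + h)  # newsvendor critical fractile
# Clairvoyant policy: one optimal base order per phase of the cycle.
clair_orders = [poisson_quantile(m, frac) for m in cycle_means]
# "i.i.d." benchmark: a single order computed from the pooled mean, ignoring the cycle.
iid_order = poisson_quantile(sum(cycle_means) / len(cycle_means), frac)

demands = rng.poisson([cycle_means[t % 3] for t in range(T)])
clair_cost = sum(cost(clair_orders[t % 3], d) for t, d in enumerate(demands))
iid_cost = sum(cost(iid_order, d) for t, d in enumerate(demands))

# Regret of the i.i.d. benchmark relative to the clairvoyant cyclic policy.
regret_of_iid = iid_cost - clair_cost
```

Because the clairvoyant policy minimizes expected cost separately in every phase, any fixed order, including the pooled one, accumulates a per-period cost gap in the phases where its order is miscalibrated; over a long horizon this gap (the regret) grows without a learning mechanism, which is what the paper's policies are designed to avoid.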

Publisher

Institute for Operations Research and the Management Sciences (INFORMS)

Subject

Management Science and Operations Research,Strategy and Management

Cited by 1 article:

1. Online Learning for Dual-Index Policies in Dual-Sourcing Systems. Manufacturing & Service Operations Management, 2023-12-12.
