Offline Planning and Online Learning Under Recovering Rewards

Authors:

David Simchi-Levi 1,2,3; Zeyu Zheng 4; Feng Zhu 1

Affiliations:

1. Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139;

2. Department of Civil and Environmental Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139;

3. Operations Research Center, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139;

4. Department of Industrial Engineering and Operations Research, University of California, Berkeley, Berkeley, California 94720

Abstract

Motivated by emerging applications, such as live-streaming e-commerce, promotions, and recommendations, we introduce and solve a general class of nonstationary multi-armed bandit problems that have the following two features: (i) the decision maker can pull and collect rewards from up to K out of N different arms in each time period, and (ii) the expected reward of an arm immediately drops after it is pulled and then nonparametrically recovers as the arm's idle time increases. With the objective of maximizing the expected cumulative reward over T time periods, we design a class of purely periodic policies that jointly set a pulling period for each arm. For the proposed policies, we prove performance guarantees for both the offline and the online problems. For the offline problem, when all model parameters are known, the proposed periodic policy obtains a long-run approximation ratio at the order of $1 - O(1/\sqrt{K})$, which is asymptotically optimal as K grows to infinity. For the online problem, when the model parameters are unknown and need to be dynamically learned, we integrate the offline periodic policy with the upper confidence bound procedure to construct an online policy. The proposed online policy is proved to approximately have $\tilde{O}(N\sqrt{T})$ regret against the offline benchmark. Our framework and policy design may shed light on broader offline planning and online learning applications with nonstationary and recovering rewards. This paper was accepted by J. George Shanthikumar, data science. Supplemental Material: The online appendix and data files are available at https://doi.org/10.1287/mnsc.2021.04202.
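To make the policy class concrete, here is a minimal Python sketch of a purely periodic policy on a simulated recovering-rewards instance. The recovery curves, the per-arm periods, and the truncation to K pulls are illustrative assumptions, not the paper's construction: the paper only assumes nonparametric, nondecreasing recovery functions and chooses the periods jointly so that no time period schedules more than K arms.

import numpy as np

rng = np.random.default_rng(0)

N, K, T = 5, 2, 200            # N arms, at most K pulls per period, horizon T
periods = [2, 3, 4, 6, 8]      # hypothetical per-arm pulling periods

def mean_reward(arm, idle, d_max=8):
    # Hypothetical saturating recovery curve: the expected reward rises with
    # the idle time (time since the arm's last pull) and flattens at d_max.
    return (0.3 + 0.1 * arm) * min(idle, d_max) / d_max

last_pull = [float("-inf")] * N    # arms start fully recovered
total = 0.0
for t in range(T):
    # Purely periodic policy: arm i is scheduled whenever t is a multiple of
    # its period; if more than K arms are due, the extras are simply dropped.
    due = [i for i in range(N) if t % periods[i] == 0]
    for i in due[:K]:
        idle = t - last_pull[i]
        total += mean_reward(i, idle) + rng.normal(0.0, 0.1)  # noisy reward
        last_pull[i] = t

print(f"cumulative reward over {T} periods: {total:.1f}")

An online variant in the spirit of the abstract would replace mean_reward, which is unknown to the learner, with upper confidence bound estimates built from the observed noisy rewards.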

Publisher

Institute for Operations Research and the Management Sciences (INFORMS)

