Policy Learning with Adaptively Collected Data

Authors:

Ruohan Zhan (1), Zhimei Ren (2), Susan Athey (3), Zhengyuan Zhou (4)

Affiliation:

1. Department of Industrial Engineering and Decision Analytics, Hong Kong University of Science and Technology, Hong Kong;

2. Department of Statistics and Data Science, Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania 19104;

3. Graduate School of Business, Stanford University, Stanford, California 94305;

4. Stern School of Business, New York University, New York, New York 10012

Abstract

In a wide variety of applications, including healthcare, bidding in first-price auctions, digital recommendations, and online education, it can be beneficial to learn a policy that assigns treatments to individuals based on their characteristics. The growing policy-learning literature focuses on settings in which policies are learned from historical data in which the treatment assignment rule is fixed throughout the data-collection period. However, adaptive data collection is becoming more common in practice from two primary sources: (1) data collected from adaptive experiments that are designed to improve inferential efficiency and (2) data collected from production systems that progressively evolve an operational policy to improve performance over time (e.g., contextual bandits). Yet adaptivity complicates the problem of learning an optimal policy ex post for two reasons: first, samples are dependent and, second, an adaptive assignment rule may not assign each treatment to each type of individual sufficiently often. In this paper, we address these challenges. We propose an algorithm based on generalized augmented inverse propensity weighted (AIPW) estimators, which nonuniformly reweight the elements of a standard AIPW estimator to control worst-case estimation variance. We establish a finite-sample regret upper bound for our algorithm and complement it with a regret lower bound that quantifies the fundamental difficulty of policy learning with adaptive data. When equipped with the best weighting scheme, our algorithm achieves minimax rate-optimal regret guarantees even with diminishing exploration. Finally, we demonstrate our algorithm's effectiveness using both synthetic data and public benchmark data sets.

This paper was accepted by Hamid Nazerzadeh, data science.

Funding: This work is supported by the National Science Foundation [Grant CCF-2106508]. R. Zhan was supported by Golub Capital and the Michael Yao and Sara Keying Dai AI and Digital Technology Fund. Z. Ren was supported by the Office of Naval Research [Grant N00014-20-1-2337]. S. Athey was supported by the Office of Naval Research [Grant N00014-19-1-2468]. Z. Zhou is generously supported by New York University's 2022-2023 Center for Global Economy and Business faculty research grant and the Digital Twin research grant from Bain & Company.

Supplemental Material: The data files are available at https://doi.org/10.1287/mnsc.2023.4921.
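To make the estimator described in the abstract concrete, the sketch below shows a generalized (nonuniformly weighted) AIPW policy-value estimate in Python. The function name, its signature, and the interfaces for the outcome model and propensities are illustrative assumptions, not the paper's exact construction; the paper derives the specific weighting scheme that controls worst-case variance.

```python
import numpy as np

def generalized_aipw_value(contexts, actions, rewards, policy,
                           propensity, mu_hat, h=None):
    """Weighted (generalized) AIPW estimate of the value of `policy`.

    contexts   : length-T sequence of contexts X_t
    actions    : (T,) array of actions A_t chosen by the adaptive logging policy
    rewards    : (T,) array of observed rewards Y_t
    policy     : callable X -> action; the policy being evaluated
    propensity : callable (X, a, t) -> e_t(X, a), the (known) probability that
                 the logging policy assigns action a at time t
    mu_hat     : callable (X, a, t) -> estimated mean reward; to keep the score
                 valid under adaptivity, fit it only on data observed before t
    h          : (T,) array of nonnegative weights; h = None (all ones)
                 recovers the standard, uniformly weighted AIPW estimator
    """
    T = len(rewards)
    if h is None:
        h = np.ones(T)
    scores = np.empty(T)
    for t in range(T):
        a_pi = policy(contexts[t])             # action the evaluated policy takes
        direct = mu_hat(contexts[t], a_pi, t)  # outcome-model (direct) term
        correction = 0.0
        if actions[t] == a_pi:                 # inverse-propensity correction
            correction = (rewards[t] - direct) / propensity(contexts[t], a_pi, t)
        scores[t] = direct + correction
    # Nonuniform weights h_t downweight high-variance scores; h_t = 1 for all t
    # gives the ordinary AIPW sample mean.
    return np.sum(h * scores) / np.sum(h)
```

As a usage note, one variance-motivated choice studied in the adaptive-weighting literature sets h_t proportional to the square root of the assignment probability of the evaluated action, which damps the large scores that arise when exploration diminishes; such weights depend on the policy under evaluation and must be recomputed per policy. This square-root choice is offered here only as an example of a stabilizing scheme, not as the paper's minimax-optimal weighting.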

Publisher

Institute for Operations Research and the Management Sciences (INFORMS)

Subject

Management Science and Operations Research; Strategy and Management
