Affiliation:
1. UC Berkeley EECS, USA
2. UC Berkeley EECS and Statistics, USA
3. UC Berkeley Statistics, USA
Abstract
Large-scale, two-sided matching platforms must find market outcomes that align with user preferences while simultaneously learning these preferences from data. Classical notions of stability (Gale and Shapley, 1962; Shapley and Shubik, 1971) are, unfortunately, of limited value in the learning setting, given that preferences are inherently uncertain and destabilizing while they are being learned. To bridge this gap, we develop a framework and algorithms for learning stable market outcomes under uncertainty. Our primary setting is matching with transferable utilities, where the platform both matches agents and sets monetary transfers between them. We design an incentive-aware learning objective that captures the distance of a market outcome from equilibrium. Using this objective, we analyze the complexity of learning as a function of preference structure, casting learning as a stochastic multi-armed bandit problem. Algorithmically, we show that “optimism in the face of uncertainty,” the principle underlying many bandit algorithms, applies to a primal-dual formulation of matching with transfers and leads to near-optimal regret bounds. Our work takes a first step toward elucidating when and how stable matchings arise in large, data-driven marketplaces.
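The abstract describes applying "optimism in the face of uncertainty" to matching with transferable utilities. As a rough illustration of that idea (not the authors' algorithm), the sketch below matches agents using upper confidence bounds on unknown match values, updates the estimates from noisy feedback, and recovers candidate transfers from the dual of the assignment LP. All variable names, the noise model, and the confidence-bonus form are illustrative assumptions.

```python
# Minimal sketch: UCB-style optimism for matching with transfers.
# Not the paper's algorithm; an illustrative assumption throughout.
import numpy as np
from scipy.optimize import linear_sum_assignment, linprog

rng = np.random.default_rng(0)
n = 4                                    # agents per side
true_values = rng.uniform(0, 1, (n, n))  # unknown match values V[i, j]
sums = np.zeros((n, n))                  # cumulative observed rewards
counts = np.zeros((n, n))                # times each pair was matched
T = 2000

for t in range(1, T + 1):
    # Upper confidence bound on each pair's match value.
    bonus = np.sqrt(2.0 * np.log(T) / np.maximum(counts, 1))
    ucb = np.where(counts > 0, sums / np.maximum(counts, 1) + bonus, np.inf)
    ucb = np.minimum(ucb, 1.0)           # values are known to lie in [0, 1]

    # Optimistic max-weight matching under the UCB estimates.
    rows, cols = linear_sum_assignment(-ucb)

    # Matched pairs generate noisy utility; update the estimates.
    for i, j in zip(rows, cols):
        reward = true_values[i, j] + 0.1 * rng.standard_normal()
        sums[i, j] += reward
        counts[i, j] += 1

# Candidate transfers: dual prices of the assignment LP at the empirical
# estimates (an illustrative recovery of a stable outcome, not the paper's
# exact construction).
est = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
# Dual LP: minimize sum(u) + sum(p) subject to u_i + p_j >= est[i, j].
c = np.ones(2 * n)
A_ub, b_ub = [], []
for i in range(n):
    for j in range(n):
        row = np.zeros(2 * n)
        row[i] = -1.0
        row[n + j] = -1.0
        A_ub.append(row)
        b_ub.append(-est[i, j])
duals = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
print("side-one utilities:", duals.x[:n].round(3))
print("side-two transfers:", duals.x[n:].round(3))
```

Under the assumptions above, the optimistic matching concentrates on high-value pairs as estimates tighten, and the dual variables give one way to read off per-agent utilities and transfers consistent with the estimated values.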
Funder
NSF Graduate Research Fellowship
Vannevar Bush Faculty Fellowship
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Hardware and Architecture, Information Systems, Control and Systems Engineering, Software
Cited by 1 article.
1. Altruistic Bandit Learning For One-to-Many Matching Markets. Proceedings of the 2024 International Conference on Information Technology for Social Good, 2024-09-04.