Affiliation:
1. Xidian University, Xi'an, Shaanxi, China
2. Guangxi Academy of Science, China, and Tongji University, Shanghai, China
Abstract
Dynamic pricing plays an important role in reducing traffic load, controlling congestion, and improving revenue. Efficient dynamic pricing strategies can increase capacity utilization, the total revenue of service providers, and the satisfaction of both passengers and drivers. Many existing dynamic pricing techniques focus on short-term optimization and scale poorly when modeling long-term goals, owing to limits on solution optimality and prohibitive computational cost. In this article, a deep reinforcement learning framework is proposed to tackle the dynamic pricing problem for ride-hailing platforms, adopting the soft actor-critic (SAC) algorithm. First, the dynamic pricing problem is formulated as a Markov Decision Process (MDP) with a continuous action space, so no discretization of the action space is required. Then, a new reward function is constructed from the order response rate and the KL-divergence between the supply distribution and the demand distribution. Experiments and case studies demonstrate that the proposed method outperforms the baselines in terms of order response rate and total revenue.
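The abstract states only that the reward is obtained from the order response rate and the KL-divergence between the supply and demand distributions. The snippet below is a minimal sketch of one plausible form of such a reward, assuming a simple weighted combination; the weight beta, the linear form, and all function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions over the same set of regions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def reward(order_response_rate, supply_dist, demand_dist, beta=1.0):
    """Illustrative reward: favor a high order response rate while penalizing
    mismatch between the spatial supply and demand distributions."""
    return order_response_rate - beta * kl_divergence(supply_dist, demand_dist)

# Example: three regions, supply slightly misaligned with demand.
print(reward(0.85, supply_dist=[0.5, 0.3, 0.2], demand_dist=[0.4, 0.4, 0.2]))
```

In this sketch, a perfectly matched supply distribution drives the penalty term to zero, so the reward reduces to the order response rate alone.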
Funder
National Natural Science Foundation of China
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Theoretical Computer Science
Cited by
21 articles.