Abstract
In this study, we investigate the container delivery problem and explore ways to optimize the complex system of inland container shipping. Our aim is to fulfill customer demand while maximizing customer service and minimizing logistics costs. To address the challenges posed by an unpredictable and rapidly evolving environment, we examine the potential of reinforcement learning (RL) to automate the decision-making process and produce agile, efficient delivery schedules. Through a comprehensive numerical study, we evaluate the efficacy of this approach by comparing the performance of several high-performing heuristic policies with that of agents trained using reinforcement learning under various problem settings. Our results demonstrate that a reinforcement learning approach is robust and particularly useful for decision makers who must match logistics demand with capacity dynamically and who have multiple objectives.
Publisher
Springer Science and Business Media LLC