Affiliation:
1. Department of Computer Science, National University of Singapore, Singapore
Abstract
The partially observable Markov decision process (POMDP) provides a principled mathematical model for integrating perception and planning, a major challenge in robotics. While there are efficient algorithms for moderately large discrete POMDPs, continuous models are often more natural for robotic tasks, and currently there are no practical algorithms that handle continuous POMDPs at an interesting scale. This paper presents an algorithm for continuous-state, continuous-observation POMDPs. We provide experimental results demonstrating its potential in robot planning and learning under uncertainty and a theoretical analysis of its performance. A direct benefit of the algorithm is to simplify model construction.
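The abstract describes planning over continuous states and continuous observations. As rough background for what this entails computationally, the sketch below shows a generic particle-filter belief update for a continuous-state, continuous-observation model. It is not the paper's algorithm; the 1-D dynamics and Gaussian observation model (`transition`, `obs_likelihood`) are invented placeholders, assumed only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition(state, action):
    """Hypothetical continuous dynamics: noisy 1-D motion (placeholder model)."""
    return state + action + rng.normal(0.0, 0.1)

def obs_likelihood(obs, state):
    """Hypothetical Gaussian observation model p(o | s) (placeholder model)."""
    sigma = 0.5
    return np.exp(-0.5 * ((obs - state) / sigma) ** 2)

def belief_update(particles, action, obs):
    """Particle-filter belief update: propagate, weight by likelihood, resample."""
    # Propagate each particle through the stochastic dynamics.
    propagated = np.array([transition(s, action) for s in particles])
    # Weight each propagated particle by the likelihood of the continuous observation.
    weights = np.array([obs_likelihood(obs, s) for s in propagated])
    weights /= weights.sum()
    # Resample to obtain an unweighted particle set representing the new belief.
    idx = rng.choice(len(propagated), size=len(propagated), p=weights)
    return propagated[idx]

# Example: track a 1-D robot position from noisy observations.
belief = rng.normal(0.0, 1.0, size=500)       # initial belief particles
for action, obs in [(1.0, 1.1), (1.0, 2.0)]:  # (action taken, observation received)
    belief = belief_update(belief, action, obs)
    print(f"belief mean ~ {belief.mean():.2f}, std ~ {belief.std():.2f}")
```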
Subject
Applied Mathematics, Artificial Intelligence, Electrical and Electronic Engineering, Mechanical Engineering, Modelling and Simulation, Software
Cited by
53 articles.
1. Probabilistic Active Loop Closure for Autonomous Exploration;2024 IEEE International Conference on Robotics and Automation (ICRA);2024-05-13
2. Dynamic Game Theoretic Electric Vehicle Decision Making;SAE International Journal of Electrified Vehicles;2024-01-16
3. Uncertainties in Onboard Algorithms for Autonomous Vehicles: Challenges, Mitigation, and Perspectives;IEEE Transactions on Intelligent Transportation Systems;2023-09
4. Adaptive Discretization using Voronoi Trees for Continuous POMDPs;The International Journal of Robotics Research;2023-08-08
5. Plan commitment: Replanning versus plan repair;Engineering Applications of Artificial Intelligence;2023-08