Abstract
Informative path-planning is a well-established approach to visual servoing and active viewpoint selection in robotics, but it typically assumes that a suitable cost function or goal state is known. This work considers the inverse problem, where the goal of a task is unknown and a reward function must be inferred from exploratory example demonstrations provided by a demonstrator, for use in a downstream informative path-planning policy. Unfortunately, many existing reward inference strategies are unsuited to this class of problems because of the exploratory nature of the demonstrations. In this paper, we propose an alternative approach for problems in which these sub-optimal, exploratory demonstrations occur. We hypothesise that, in tasks which require discovery, successive states of any demonstration are progressively more likely to be associated with a higher reward, and we use this hypothesis to generate time-based binary comparison outcomes and infer reward functions that support these rankings under a probabilistic generative model. We formalise this probabilistic temporal ranking approach and show that it improves upon existing approaches to reward inference for autonomous ultrasound scanning, a novel application of learning from demonstration in medical imaging, while also being of value across a broad range of goal-oriented learning-from-demonstration tasks.
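The temporal ranking hypothesis described above can be sketched in code. This is a minimal illustration, not the paper's actual model: it assumes a toy 2-D feature trajectory standing in for demonstration states, a linear reward, and a Bradley-Terry-style pairwise likelihood in place of the full probabilistic generative model; all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy exploratory demonstration: a feature trajectory that drifts
# toward the goal (stands in for, e.g., ultrasound image embeddings).
T = 100
states = np.cumsum(rng.normal(0.05, 0.2, size=(T, 2)), axis=0)

# Temporal ranking hypothesis: for sampled index pairs t1 < t2, the
# later state is assumed more likely to carry the higher reward.
pairs = [tuple(sorted(rng.choice(T, 2, replace=False)))
         for _ in range(500)]

# Linear reward r(s) = w . s, fit by gradient ascent on a
# Bradley-Terry likelihood: P(r(s_t2) > r(s_t1)) = sigmoid(w . (s_t2 - s_t1)).
w = np.zeros(2)
lr = 0.1
for _ in range(200):
    grad = np.zeros(2)
    for t1, t2 in pairs:
        d = states[t2] - states[t1]
        p = 1.0 / (1.0 + np.exp(-w @ d))
        grad += (1.0 - p) * d  # gradient of the log-likelihood
    w += lr * grad / len(pairs)

reward = states @ w
# Later demonstration states should receive higher inferred reward.
print(reward[:10].mean() < reward[-10:].mean())
```

The inferred reward increases along the demonstration, which is exactly the structure a downstream informative path-planner can then exploit as a goal signal.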
Funder
Alan Turing Institute
Royal Society
Engineering and Physical Sciences Research Council
Publisher
Springer Science and Business Media LLC