Funder
National Science Foundation
Publisher
Springer Science and Business Media LLC
Cited by 3 articles.
1. Calibrated Human-Robot Teaching: What People Do When Teaching Norms to Robots*;2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN);2023-08-28
2. Norm Learning with Reward Models from Instructive and Evaluative Feedback;2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN);2022-08-29
3. Learning Reward Functions from a Combination of Demonstration and Evaluative Feedback;2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI);2022-03-07