Authors:
Li Mengxi, Kwon Minae, Sadigh Dorsa
Funders:
National Science Foundation
Qualcomm
Publisher:
Springer Science and Business Media LLC
References: 88 articles.
1. Abbeel, P., & Ng, A. Y. (2004). Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning (p. 1). ACM.
2. Agha-Mohammadi, A. A., Chakravorty, S., & Amato, N. M. (2014). FIRM: Sampling-based feedback motion-planning under motion uncertainty and imperfect measurements. The International Journal of Robotics Research, 33(2), 268–304.
3. Akgun, B., Cakmak, M., Jiang, K., & Thomaz, A. L. (2012). Keyframe-based learning from demonstration. International Journal of Social Robotics, 4(4), 343–355.
4. Albrecht, S. V. (2015). Utilising policy types for effective ad hoc coordination in multiagent systems.
5. Ba, J. L., Kiros, J. R., & Hinton, G. E. (2016). Layer normalization. arXiv preprint arXiv:1607.06450.
Cited by: 8 articles.
1. A Model-Free Leader-Follower Approach with Multi-Level Reference Command Generators. 2024 IEEE International Symposium on Robotic and Sensors Environments (ROSE), 2024-06-20.
2. Performance-based Data-driven Assessment of Trust. 2024 IEEE 4th International Conference on Human-Machine Systems (ICHMS), 2024-05-15.
3. Learning latent representations to co-adapt to humans. Autonomous Robots, 2023-06-17.
4. Towards Robots that Influence Humans over Long-Term Interaction. 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023-05-29.
5. A survey of multi-agent Human–Robot Interaction systems. Robotics and Autonomous Systems, 2023-03.