Proxemic behavior in navigation tasks using reinforcement learning

Author:

Cristian Millán-Arias, Bruno Fernandes, Francisco Cruz

Abstract

Human interaction begins with one person approaching another while respecting their personal space, so as to avoid causing discomfort. This spatial behavior, known as proxemics, defines an acceptable distance at which the interaction process can begin appropriately. In recent decades, human-agent interaction has attracted growing research interest, with the aim of having artificial agents interact naturally with people. New approaches are therefore needed to enable effective communication without making humans feel uncomfortable. Several works address proxemic behavior with cognitive agents, implementing human-robot interaction techniques and machine learning. However, they typically assume that personal space is fixed and known in advance, so the agent is only expected to compute an optimal trajectory toward the person. In this work, we study the behavior of a reinforcement learning agent in a proxemics-based environment. Experiments were carried out in a grid-world problem and in a continuous simulated robotic approaching environment. Both environments assume an issuer agent that provides non-conformity information. Our results suggest that the agent can identify regions where the issuer feels uncomfortable and find the best path to approach the issuer, highlighting the usefulness of reinforcement learning for identifying proxemic regions.
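The grid-world setting described in the abstract can be illustrated with a minimal tabular Q-learning sketch. All names, the grid size, reward values, and the discomfort radius below are illustrative assumptions, not the paper's actual environment or parameters: the "issuer" emits a negative non-conformity reward whenever the agent enters its personal-space radius, and the agent learns to approach the issuer while minimizing time spent in the uncomfortable region.

```python
import random

# Hypothetical 7x7 grid. The issuer sits at a fixed cell and emits a
# non-conformity (discomfort) penalty whenever the agent enters its
# personal-space radius. Rewards and radius are illustrative choices.
SIZE = 7
ISSUER = (3, 6)          # issuer position, which the agent must approach
PERSONAL_RADIUS = 2      # Manhattan radius that triggers discomfort
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    r, c = state
    dr, dc = action
    nr = min(max(r + dr, 0), SIZE - 1)
    nc = min(max(c + dc, 0), SIZE - 1)
    nxt = (nr, nc)
    if nxt == ISSUER:
        return nxt, 10.0, True               # reached the issuer
    dist = abs(nr - ISSUER[0]) + abs(nc - ISSUER[1])
    if dist <= PERSONAL_RADIUS:
        return nxt, -5.0, False              # non-conformity signal
    return nxt, -0.1, False                  # small step cost

def train(episodes=3000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {}  # (state, action_index) -> estimated value
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(100):
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: q.get((state, i), 0.0))
            nxt, reward, done = step(state, ACTIONS[a])
            best_next = 0.0 if done else max(
                q.get((nxt, i), 0.0) for i in range(4))
            old = q.get((state, a), 0.0)
            q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
            if done:
                break
    return q

def greedy_path(q, start=(0, 0), max_steps=100):
    """Roll out the learned greedy policy from a start cell."""
    state, path = start, [start]
    for _ in range(max_steps):
        a = max(range(4), key=lambda i: q.get((state, i), 0.0))
        state, _, done = step(state, ACTIONS[a])
        path.append(state)
        if done:
            break
    return path
```

With these rewards, the learned path skirts the penalty region and only crosses the personal-space radius on the final approach, mirroring the trade-off the abstract describes: reach the issuer while limiting discomfort.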

Funder

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Conselho Nacional de Desenvolvimento Científico e Tecnológico

University of New South Wales

Publisher

Springer Science and Business Media LLC

Subject

Artificial Intelligence, Software

References (50 articles)

1. Hall ET, Birdwhistell RL, Bock B, Bohannan P, Diebold AR Jr, Durbin M, Edmonson MS, Fischer J, Hymes D, Kimball ST et al (1968) Proxemics [and comments and replies]. Curr Anthropol 9(2/3):83–108

2. Zacharaki A, Kostavelis I, Gasteratos A, Dokas I (2020) Safety bounds in human robot interaction: a survey. Saf Sci 127:104667

3. Churamani N, Cruz F, Griffiths S, Barros P (2020) iCub: learning emotion expressions using human reward. arXiv:2003.13483

4. Sutton RS, Barto AG (2018) Reinforcement learning: an introduction. MIT Press, Cambridge

5. Millán C, Fernandes B, Cruz F (2019) Human feedback in continuous actor-critic reinforcement learning. In: Proceedings European symposium on artificial neural networks, computational intelligence and machine learning, Bruges (Belgium), pp 661–666

Cited by 5 articles.

1. Using Proxemics as a Corrective Feedback Signal during Robot Navigation;Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction;2024-03-11

2. Decentralized variable impedance control of modular robot manipulators with physical human–robot interaction using Gaussian process-based motion intention estimation;Neural Computing and Applications;2024-02-16

3. Real-Life Experiment Metrics for Evaluating Human-Robot Collaborative Navigation Tasks;2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN);2023-08-28

4. Social distance control for quadruped robots in a gated spike filter neural network framework;Applied Intelligence;2023-07-17

5. Designing INS/GNSS integrated navigation systems by using IPO algorithms;Neural Computing and Applications;2023-04-12
