Abstract
Human interaction begins with one person approaching another while respecting their personal space to avoid causing discomfort. Spatial behavior, known as proxemics, makes it possible to define an acceptable distance at which the interaction can begin appropriately. In recent decades, human-agent interaction has attracted growing research interest, with the aim of having artificial agents interact naturally with people. New approaches are therefore needed to enable effective communication without making humans feel uncomfortable. Several works address proxemic behavior with cognitive agents, implementing human-robot interaction techniques and machine learning. However, they assume that the personal space is fixed and known in advance, and the agent is only expected to plan an optimal trajectory toward the person. In this work, we study the behavior of a reinforcement learning agent in a proxemics-based environment. Experiments were carried out in a grid-world problem and in a continuous simulated robot-approaching environment. Both environments assume an issuer agent that provides non-conformity information. Our results suggest that the agent can identify regions where the issuer feels uncomfortable and find the best path to approach the issuer. These results highlight the usefulness of reinforcement learning for identifying proxemic regions.
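To make the described setup concrete, the sketch below shows one plausible way such a proxemics-aware grid-world experiment could be set up with tabular Q-learning. It is an illustration only, not the authors' code: the grid size, issuer position, goal cell, discomfort radius, and reward values are assumptions, and the issuer's non-conformity signal is modeled simply as a negative reward whenever the learner enters the issuer's personal space.

```python
# Hypothetical sketch: tabular Q-learning in a small grid world where an
# "issuer" emits a non-conformity (discomfort) penalty whenever the learner
# steps into the issuer's personal space. All constants are assumed values.
import random

GRID = 7                                   # assumed 7x7 grid
ISSUER = (3, 3)                            # assumed issuer position
GOAL = (3, 5)                              # assumed approach point near the issuer
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

# Personal space: cells within Chebyshev distance 1 of the issuer.
UNCOMFORTABLE = {(x, y) for x in range(GRID) for y in range(GRID)
                 if max(abs(x - ISSUER[0]), abs(y - ISSUER[1])) <= 1}

def step(state, action):
    """Move on the grid; reward +10 at the goal, -5 inside the personal
    space (the issuer's non-conformity signal), -1 per ordinary step."""
    x, y = state
    dx, dy = action
    nxt = (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))
    if nxt == GOAL:
        return nxt, 10.0, True
    if nxt in UNCOMFORTABLE:
        return nxt, -5.0, False
    return nxt, -1.0, False

# Q-table over (state, action) pairs and standard Q-learning hyperparameters.
Q = {((x, y), a): 0.0
     for x in range(GRID) for y in range(GRID) for a in range(len(ACTIONS))}
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(2000):
    state, done = (0, 0), False
    for _ in range(200):                   # cap episode length
        if random.random() < eps:          # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])
        nxt, r, done = step(state, ACTIONS[a])
        best_next = max(Q[(nxt, i)] for i in range(len(ACTIONS)))
        Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
        state = nxt
        if done:
            break
```

Under these assumptions, the learned greedy policy routes the agent around the penalized cells and reaches the goal from outside the personal space, which mirrors the behavior described in the abstract: the negative feedback alone is enough for the agent to infer where the uncomfortable region lies.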
Funder
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Conselho Nacional de Desenvolvimento Científico e Tecnológico
University of New South Wales
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Software
Cited by
5 articles.