Affiliation:
1. Department of Artificial Intelligence, Sungkyunkwan University, Suwon 16419, Republic of Korea
2. Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
Abstract
Successful human–robot collaboration depends on establishing and sustaining high-quality interaction between humans and robots, which makes effective facilitation of human–robot interaction (HRI) essential. The evolution of robot intelligence now enables robots to take a proactive role in initiating and sustaining HRI, allowing humans to concentrate more on their primary tasks. In this paper, we introduce the Robot-Facilitated Interaction System (RFIS), in which mobile robots perform identification, tracking, re-identification, and gesture recognition in an integrated framework to ensure anytime readiness for HRI. We implemented the RFIS on an autonomous mobile robot used for transporting a patient to demonstrate proactive, real-time, and user-friendly interaction with a caretaker engaged in monitoring and nursing the patient. In the implementation, we focused on the efficient and robust integration of the various interaction facilitation modules within a real-time HRI system operating in an edge computing environment. Experimental results show that the RFIS, as a comprehensive system integrating caretaker recognition, tracking, re-identification, and gesture recognition, provides overall high-quality HRI facilitation, with average accuracies exceeding 90% during real-time operation at 5 FPS.
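As a rough illustration of the integrated framework described above, the sketch below chains the four facilitation modules (identification, tracking, re-identification, gesture recognition) in a single per-frame loop. All class names, interfaces, and the stubbed module behavior are hypothetical stand-ins, not the paper's implementation; a real system would wrap trained perception models behind each stage.

```python
# Minimal sketch of an RFIS-style per-frame pipeline.
# Every module here is a hypothetical stub: the real modules would run
# learned detection, tracking, re-ID, and gesture models on camera frames.

class Identifier:
    def __call__(self, frame):
        # Detect people and return their identity labels (stubbed).
        return [0]

class Tracker:
    def __init__(self):
        self.active = set()  # identities currently being tracked

    def update(self, ids):
        self.active.update(ids)
        return sorted(self.active)

class ReIdentifier:
    def recover(self, tracked, known):
        # Re-associate tracked identities with known caretakers
        # (stub: keep only identities already enrolled).
        return [i for i in tracked if i in known]

class GestureRecognizer:
    def __call__(self, frame, person_id):
        # Classify the person's gesture (stubbed as a fixed label).
        return "call"

class RFISPipeline:
    """Toy integration of the four facilitation modules in one loop."""

    def __init__(self, known_ids):
        self.identify = Identifier()
        self.track = Tracker()
        self.reid = ReIdentifier()
        self.gesture = GestureRecognizer()
        self.known = known_ids

    def step(self, frame):
        ids = self.identify(frame)                 # who is visible
        tracked = self.track.update(ids)           # maintain tracks
        confirmed = self.reid.recover(tracked, self.known)  # re-ID
        return {pid: self.gesture(frame, pid) for pid in confirmed}

pipeline = RFISPipeline(known_ids={0})
result = pipeline.step(frame=None)  # one frame of the loop
```

Running the stages in one `step` per frame mirrors the edge-computing constraint in the abstract: the whole chain must complete within the frame budget (200 ms at 5 FPS).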
Funder
Ministry of Science and ICT
Korea Evaluation Institute of Industrial Technology
References (47 articles)
1. Lee, I. (2021). Service robots: A systematic literature review. Electronics, 10.
2. Lee, S., Lee, S., Kim, S., and Kim, A. (2023, January 4–7). Robot-Facilitated Human–Robot Interaction with Integrated Tracking, Re-identification and Gesture Recognition. Proceedings of the International Conference on Intelligent Autonomous Systems, Suwon, Republic of Korea.
3. Sanjeewa (2020). Visual attention model for mobile robot navigation in domestic environment. GSJ.
4. Zhao, X., Naguib, A.M., and Lee, S. (2014, January 25–29). Kinect based calling gesture recognition for taking order service of elderly care robot. Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK.
5. Liu, C., and Szirányi, T. (2021). Real-time human detection and gesture recognition for on-board UAV rescue. Sensors, 21.