Affiliation:
1. University of Science and Technology of Oran Mohamed Boudiaf
Abstract
This paper presents a facial expression recognition system for commanding both a mobile robot and a robot arm. The proposed system consists of two main modules: facial expression recognition and robot command. The first module extracts the regions of interest (ROI: mouth, eyes, eyebrows) using Gradient Vector Flow (GVF) snake segmentation and computes Euclidean distances compatible with the MPEG-4 description of the six universal emotions. To preserve the temporal aspect of processing the FEEDTUM database (video files), a Time Delay Neural Network (TDNN) is used to classify the universal facial expressions: happiness, sadness, surprise, anger, fear, disgust, and neutral. The second module analyzes the recognized facial expressions and translates them into a command language, establishing a control law for the robots.
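To illustrate the pipeline the abstract describes (Euclidean distance features over facial ROIs, a time-delay classifier over the frame sequence, and an expression-to-command mapping), the following is a minimal NumPy sketch. The feature-point names, the choice of distance pairs, the network sizes, and the command table are illustrative assumptions, not the paper's actual implementation, and the weights are random rather than trained.

```python
# Minimal sketch of the described pipeline: per-frame Euclidean distance
# features, a toy time-delay (windowed) classifier, and a command lookup.
# All names, sizes, and the command table below are assumptions.
import numpy as np

EXPRESSIONS = ["happiness", "sadness", "surprise", "anger", "fear", "disgust", "neutral"]

def distance_features(points):
    """Euclidean distances between selected facial feature points (one frame).
    `points` maps an (assumed) MPEG-4-style point name to its (x, y) coordinates."""
    pairs = [("mouth_top", "mouth_bottom"), ("mouth_left", "mouth_right"),
             ("left_brow", "left_eye"), ("right_brow", "right_eye")]
    return np.array([np.linalg.norm(np.subtract(points[a], points[b])) for a, b in pairs])

def tdnn_forward(sequence, conv_w, conv_b, out_w, out_b):
    """Forward pass of a toy TDNN: sliding temporal windows (time delays),
    a ReLU hidden layer, average pooling over time, then a linear output layer.
    `sequence` has shape (frames, features)."""
    delay = conv_w.shape[0]  # temporal receptive field
    windows = np.stack([sequence[t:t + delay].ravel()
                        for t in range(len(sequence) - delay + 1)])
    hidden = np.maximum(0.0, windows @ conv_w.reshape(-1, conv_w.shape[-1]) + conv_b)
    pooled = hidden.mean(axis=0)          # average over time
    return pooled @ out_w + out_b         # class scores

# Hypothetical expression-to-command table for the second module.
COMMANDS = {"happiness": "arm: wave", "surprise": "mobile: stop",
            "anger": "mobile: reverse", "neutral": "mobile: forward"}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 20 frames of random feature points stand in for GVF-snake ROI tracking.
    frames = [{name: rng.random(2) for name in
               ("mouth_top", "mouth_bottom", "mouth_left", "mouth_right",
                "left_brow", "left_eye", "right_brow", "right_eye")}
              for _ in range(20)]
    seq = np.stack([distance_features(f) for f in frames])   # shape (20, 4)

    delay, n_feat, n_hidden = 5, seq.shape[1], 16
    conv_w = rng.standard_normal((delay, n_feat, n_hidden)) * 0.1
    conv_b = np.zeros(n_hidden)
    out_w = rng.standard_normal((n_hidden, len(EXPRESSIONS))) * 0.1
    out_b = np.zeros(len(EXPRESSIONS))

    scores = tdnn_forward(seq, conv_w, conv_b, out_w, out_b)
    label = EXPRESSIONS[int(np.argmax(scores))]
    print(label, "->", COMMANDS.get(label, "mobile: hold position"))
```

In this sketch the time-delay behavior comes from the sliding window over frames, which plays the role of the TDNN's delayed inputs; a trained system would replace the random weights with parameters learned on the FEEDTUM sequences.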
Publisher
Trans Tech Publications, Ltd.
Cited by: 3 articles.