Author:
Tasnim Nusrat, Baek Joong-Hwan
Abstract
Human action recognition (HAR) has gained much attention among computer vision researchers because it enables accessible, intelligent, and efficient applications such as the Internet of Things, rehabilitation, autonomous driving, virtual games, and healthcare. Several methods have already been proposed to achieve effective and efficient action recognition from different perspectives, including data modalities, feature design, network configuration, and application domains. In this article, we design a new deep learning model that integrates criss-cross attention and edge convolution to extract discriminative features from skeleton sequences for action recognition. The attention mechanism is applied in the spatial and temporal directions to capture intra- and inter-frame relationships. Several edge convolutional layers then explore the geometric relationships among neighboring joints in the human body. The graph is dynamically recomputed after each layer on the basis of the k-nearest joints, allowing the model to learn both local and global information in action sequences. We evaluated the proposed method on publicly available benchmark skeleton datasets, UTD-MHAD (University of Texas at Dallas multimodal human action dataset) and MSR-Action3D (Microsoft action 3D). We also investigated different network configurations to verify the method's effectiveness and robustness. The proposed method achieved average accuracies of 99.53% and 95.64% on the UTD-MHAD and MSR-Action3D datasets, respectively, outperforming state-of-the-art methods.
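The edge convolution with per-layer graph recomputation described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the feature sizes, the single-layer ReLU "MLP", and the joint count are illustrative assumptions. It shows the core idea — for each joint, build a k-nearest-neighbor graph, form edge features from each joint and its neighbor offsets, apply a shared transformation, and max-aggregate; the graph is rebuilt from the current features before every layer, so deeper layers connect joints that are close in feature space rather than only in coordinate space.

```python
import numpy as np

def knn_graph(x, k):
    """Indices of the k nearest neighbors of each joint (self excluded).
    x: (N, F) array of N joint feature vectors."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)  # (N, N) pairwise distances
    np.fill_diagonal(d, np.inf)                                 # no self-loops
    return np.argsort(d, axis=1)[:, :k]                         # (N, k) neighbor indices

def edge_conv(x, weight, k):
    """One edge convolution layer: for joint i and neighbor j, transform the
    edge feature [x_i, x_j - x_i] with a shared layer, then max-pool over j."""
    idx = knn_graph(x, k)                           # graph recomputed from current features
    neighbors = x[idx]                              # (N, k, F)
    center = np.repeat(x[:, None, :], k, axis=1)    # (N, k, F)
    edge_feat = np.concatenate([center, neighbors - center], axis=-1)  # (N, k, 2F)
    h = np.maximum(edge_feat @ weight, 0.0)         # shared ReLU layer, (N, k, F_out)
    return h.max(axis=1)                            # max aggregation, (N, F_out)

rng = np.random.default_rng(0)
joints = rng.normal(size=(20, 3))   # 20 skeleton joints with 3-D coordinates (illustrative)
w1 = rng.normal(size=(6, 16))       # maps 2*3 edge features to 16 channels
w2 = rng.normal(size=(32, 32))      # maps 2*16 edge features to 32 channels
h1 = edge_conv(joints, w1, k=4)     # first layer: graph built in coordinate space
h2 = edge_conv(h1, w2, k=4)         # second layer: graph rebuilt in feature space
print(h1.shape, h2.shape)           # (20, 16) (20, 32)
```

Recomputing `knn_graph` inside `edge_conv` is what makes the graph dynamic: after the first layer, two joints that move similarly can become neighbors even if they are far apart on the skeleton.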
Funder
GRRC program of Gyeonggi province
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
References (48 articles)
1. Chu, X., Ouyang, W., Li, H., and Wang, X. (2016, June 26–July 1). Structured feature learning for pose estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.
2. Liao et al. (2020). A deep learning framework for assessing physical rehabilitation exercises. IEEE Trans. Neural Syst. Rehabil. Eng.
3. Chaaraoui et al. (2014). A vision-based system for intelligent monitoring: Human behaviour analysis and privacy by context. Sensors.
4. Wen, R., Nguyen, B.P., Chng, C.B., and Chui, C.K. (2013, January 5–6). In situ spatial AR surgical planning using a projector-Kinect system. Proceedings of the Fourth Symposium on Information and Communication Technology, Da Nang, Vietnam.
5. Azuma (1997). A survey of augmented reality. Presence: Teleoperators and Virtual Environments.
Cited by: 12 articles.