Evaluating the Performance of Mobile-Convolutional Neural Networks for Spatial and Temporal Human Action Recognition Analysis
Authors:
Moutsis Stavros N. 1, Tsintotas Konstantinos A. 1, Kansizoglou Ioannis 1, Gasteratos Antonios 1
Affiliation:
1. Department of Production and Management Engineering, Democritus University of Thrace, 12 Vas. Sophias, GR-671 32 Xanthi, Greece
Abstract
Human action recognition is a computer vision task that identifies how a person or a group acts in a video sequence. Various methods relying on deep-learning techniques, such as two- or three-dimensional convolutional neural networks (2D-CNNs, 3D-CNNs), recurrent neural networks (RNNs), and vision transformers (ViT), have been proposed to address this problem over the years. Motivated by the high complexity of most CNNs used for human action recognition and by the need for implementations on mobile platforms with restricted computational resources, in this article we conduct an extensive evaluation of the performance of five lightweight architectures. In particular, we examine how these mobile-oriented CNNs (viz., ShuffleNet-v2, EfficientNet-b0, MobileNet-v3, and GhostNet) perform in spatial analysis compared with a recent tiny ViT, namely EVA-02-Ti, and a computationally heavier model, ResNet-50. Our models, previously trained on ImageNet and BU101, are measured for their classification accuracy on HMDB51, UCF101, and six classes of the NTU dataset. The average and max scores, as well as voting-based predictions, are generated from three and fifteen RGB frames of each video, while two different rates for the dropout layers are assessed during training. Last, a temporal analysis via multiple types of RNNs that employ features extracted by the trained networks is examined. Our results reveal that EfficientNet-b0 and EVA-02-Ti surpass the other mobile CNNs, achieving performance comparable or superior to ResNet-50.
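As an illustration of the evaluation pipeline described in the abstract, the following minimal sketch (not the authors' code) shows how per-frame predictions of a mobile CNN can be fused by averaging, max-pooling, or majority voting, and how per-frame features can be passed to an RNN for temporal analysis. The MobileNet-v3-Small backbone, the 51-class output (HMDB51), the 15 sampled frames, and the GRU hidden size are assumptions made for this example only; the paper's exact training configuration (ImageNet and BU101 pretraining, dropout rates) is not reproduced here.

```python
# Illustrative sketch only: spatial fusion (average, max, voting) of per-frame
# scores from a mobile CNN, plus a GRU over per-frame features for temporal analysis.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 51        # e.g., HMDB51 (assumption for the example)
FRAMES_PER_VIDEO = 15   # the abstract evaluates 3 and 15 sampled RGB frames

# Mobile backbone; in practice it would carry ImageNet/BU101 pretrained weights.
backbone = models.mobilenet_v3_small(weights=None)
backbone.classifier[-1] = nn.Linear(backbone.classifier[-1].in_features, NUM_CLASSES)
backbone.eval()

# Sampled RGB frames of one video (dummy data here).
frames = torch.rand(FRAMES_PER_VIDEO, 3, 224, 224)

with torch.no_grad():
    logits = backbone(frames)          # (frames, classes) per-frame scores
    probs = logits.softmax(dim=1)

# Spatial fusion schemes mentioned in the abstract.
avg_pred = probs.mean(dim=0).argmax().item()           # average of frame scores
max_pred = probs.max(dim=0).values.argmax().item()     # max score per class
vote_pred = probs.argmax(dim=1).mode().values.item()   # majority vote of frame labels

# Temporal analysis: penultimate-layer features of each frame fed to an RNN.
feature_extractor = nn.Sequential(backbone.features, backbone.avgpool, nn.Flatten())
with torch.no_grad():
    feats = feature_extractor(frames)                  # (frames, feature_dim)
gru = nn.GRU(input_size=feats.shape[1], hidden_size=256, batch_first=True)
head = nn.Linear(256, NUM_CLASSES)
_, h_n = gru(feats.unsqueeze(0))                        # treat frames as a sequence
temporal_pred = head(h_n.squeeze(0)).argmax(dim=1).item()

print(avg_pred, max_pred, vote_pred, temporal_pred)
```

The same fusion and feature-to-RNN pattern applies unchanged if the backbone is swapped for ShuffleNet-v2, EfficientNet-b0, GhostNet, EVA-02-Ti, or ResNet-50, which is what makes the comparison across architectures straightforward.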
Funder
European Union and Greek national funds
Subject
Artificial Intelligence, Control and Optimization, Mechanical Engineering