Video Action Recognition by Combining Spatial-Temporal Cues with Graph Convolutional Networks
Published: 2023-08
Issue: 10
Volume: 37
Page:
ISSN: 0218-0014
Container-title: International Journal of Pattern Recognition and Artificial Intelligence
Language: en
Short-container-title: Int. J. Patt. Recogn. Artif. Intell.
Author:
Li Tao (1),
Xiong Wenjun (2),
Zhang Zheng (2),
Pei Lishen (3)
Affiliation:
1. Department of Information Engineering, The Open University of Henan, Zhengzhou 450046, P. R. China
2. Resource Construction and Management Center, The Open University of Henan, Zhengzhou 450046, P. R. China
3. Department of Information Engineering, Henan University of Economics and Law, Zhengzhou 450046, P. R. China
Abstract
Video action recognition depends heavily on how spatial and temporal cues are combined to enhance recognition accuracy. One way to address this issue is to explicitly model interactions among objects within or across videos, for example with graph neural networks, which have been shown to model and represent complicated spatial-temporal object relations accurately for video action classification. However, the visual objects in a video are diverse, whereas the nodes in a graph are fixed; this can result in information overload or loss when the visual objects are too redundant or insufficient for graph construction. Segment-level graph convolutional networks (SLGCNs) are proposed as a method for recognizing actions in videos. An SLGCN consists of a segment-level spatial graph and a segment-level temporal graph, both of which process spatial and temporal information simultaneously. Specifically, the two graphs are constructed from appearance and motion features extracted from video segments with 2D and 3D CNNs, respectively. Graph convolutions are then applied to obtain informative segment-level spatial-temporal features. Our method is evaluated on a variety of challenging video datasets: EPIC-Kitchens, FCVID, HMDB51, and UCF101. Experiments demonstrate that the SLGCN achieves performance comparable to state-of-the-art models in obtaining spatial-temporal features.
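The abstract describes the pipeline only at a high level, so the sketch below is an illustrative reconstruction, not the authors' implementation: per-segment appearance features (from a 2D CNN) and motion features (from a 3D CNN) become graph nodes, a segment affinity matrix is formed by dot-product similarity (an assumed, common choice), and graph convolutions aggregate information across segments before classification. All class and parameter names (SegmentGraphConv, SLGCNSketch, app_dim, etc.) are hypothetical, and the two graph branches are simplified to identical layers even though the paper distinguishes spatial from temporal graph construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentGraphConv(nn.Module):
    """One graph-convolution layer over video-segment nodes.

    Hypothetical sketch: nodes are per-segment features; edges are
    dot-product affinities between segments (an assumed choice,
    not taken from the paper).
    """
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        # x: (batch, num_segments, in_dim) -- one node per video segment
        affinity = torch.matmul(x, x.transpose(1, 2))   # (B, N, N) pairwise similarities
        adj = F.softmax(affinity, dim=-1)               # row-normalized adjacency
        return F.relu(self.proj(torch.matmul(adj, x)))  # aggregate neighbors, then transform

class SLGCNSketch(nn.Module):
    """Toy stand-in for the SLGCN head: fuse appearance (2D CNN) and
    motion (3D CNN) segment features, run two graph-convolution
    branches, and classify. The CNN feature extractors are omitted,
    and both branches use the same layer type for simplicity."""
    def __init__(self, app_dim=2048, mot_dim=1024, hidden=512, num_classes=101):
        super().__init__()
        self.fuse = nn.Linear(app_dim + mot_dim, hidden)
        self.spatial_gcn = SegmentGraphConv(hidden, hidden)
        self.temporal_gcn = SegmentGraphConv(hidden, hidden)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, appearance_feats, motion_feats):
        # appearance_feats: (B, N, app_dim) per-segment 2D CNN features
        # motion_feats:     (B, N, mot_dim) per-segment 3D CNN features
        x = self.fuse(torch.cat([appearance_feats, motion_feats], dim=-1))
        x = self.spatial_gcn(x) + self.temporal_gcn(x)  # additive fusion (assumed)
        return self.classifier(x.mean(dim=1))           # pool segment nodes -> video logits

# Usage with random tensors: 2 videos, 8 segments each
model = SLGCNSketch()
logits = model(torch.randn(2, 8, 2048), torch.randn(2, 8, 1024))
print(logits.shape)  # torch.Size([2, 101])
```

Row-normalizing the affinity with a softmax keeps the aggregation weights bounded and differentiable; the actual SLGCN may construct its adjacency matrices differently for the spatial and temporal graphs.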
Funder
National Natural Science Foundation of China
Research Programs of the Henan Science and Technology Department
Henan Province Higher Education Teaching Reform Research Project
Key Scientific Research Projects of Colleges and Universities in Henan Province
Publisher
World Scientific Publishing Co Pte Ltd
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Software