Authors:
Chen Shuo, Xu Ke, Jiang Xinghao, Sun Tanfeng
Abstract
Although graph convolutional networks (GCNs) have demonstrated their ability in skeleton-based action recognition, both the spatial and the temporal connections rely too heavily on the predefined skeleton graph, which imposes a fixed prior on the aggregation of high-level semantic information via graph-based convolution. Some previous GCN-based works introduced dynamic topology (vertex connection relationships) to capture flexible spatial correlations across different actions, so that local relationships in both the spatial and temporal domains could be captured by diverse GCNs. This paper introduces a simpler and more effective backbone, the pyramid spatial-temporal graph transformer (PGT), which obtains the spatial-temporal correlation between skeleton joints through a local-global alternation pyramid architecture for skeleton-based action recognition. The PGT consists of four stages with similar architecture but different scales, each comprising graph embedding and transformer blocks. We introduce two kinds of transformer blocks: the spatial-temporal transformer block and the joint transformer block. In the former, spatial-temporal separated attention (STSA) is proposed to compute the connections among the global nodes of the graph; this block allows self-attention to be performed on skeleton graphs with long-range temporal and large-scale spatial aggregation. The joint transformer block flattens the tokens across both the spatial and temporal domains to jointly capture the overall spatial-temporal correlations. The PGT is evaluated on three public skeleton datasets: NTU RGB+D 60, NTU RGB+D 120, and NW-UCLA. Performance better than or comparable to the state of the art (SOTA) demonstrates the effectiveness of our work.
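The abstract's key mechanism, spatial-temporal separated attention (STSA), factorizes self-attention over a skeleton sequence: attention is first applied across joints within each frame (spatial), then across frames for each joint (temporal), instead of attending over all T×V tokens at once. The sketch below illustrates this factorization only; it is a simplified assumption-laden illustration, not the authors' implementation (learned Q/K/V projections, multi-head attention, and the pyramid stages are omitted).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Scaled dot-product self-attention over the second-to-last axis.
    # x: (..., N, C); here Q = K = V = x (no learned projections).
    c = x.shape[-1]
    scores = x @ np.swapaxes(x, -1, -2) / np.sqrt(c)   # (..., N, N)
    return softmax(scores, axis=-1) @ x                # (..., N, C)

def stsa(x):
    """Spatial-temporal separated attention (sketch).

    x: (T, V, C) skeleton sequence: T frames, V joints, C channels.
    Spatial attention mixes the V joints within each frame; temporal
    attention then mixes the T frames for each joint.
    """
    x = self_attention(x)        # spatial: attend over V, per frame
    x = np.swapaxes(x, 0, 1)     # (V, T, C)
    x = self_attention(x)        # temporal: attend over T, per joint
    return np.swapaxes(x, 0, 1)  # back to (T, V, C)

# Example: 4 frames, 25 joints (as in NTU RGB+D), 8 channels.
y = stsa(np.random.default_rng(0).normal(size=(4, 25, 8)))
```

Compared with full joint attention over T·V tokens (as in the joint transformer block), this factorization reduces the attention cost from O((TV)^2) to O(T·V^2 + V·T^2) per channel.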
Funder
National Natural Science Foundation of China
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by 7 articles.