Affiliation:
1. Guangdong University of Technology
Abstract
In recent years, the Transformer has achieved remarkable results on a variety of computer vision tasks. However, Transformer-based methods are limited in their ability to learn multi-scale features of skeleton data, even though multi-scale spatial-temporal features carry both global and local information that is crucial for skeleton-based action recognition. In this work, we explore multi-scale feature representations of skeleton sequences in both the spatial and temporal dimensions and propose an efficient cross-attention mechanism for cross-scale feature fusion. Building on this, we propose the Multi-scale Feature Extraction and Fusion Transformer (MFEF-Former), which comes in two variants: (1) MFEF-SFormer for spatial modeling, which captures inter-joint and inter-part correlations with self-attention, then performs multi-scale spatial feature fusion with cross-attention to model the correlations between joints and body parts; (2) MFEF-TFormer for temporal modeling, which captures multi-scale temporal features with self-attention and fuses them with cross-attention. The two components are combined in a two-stream network, which is evaluated on two large-scale datasets, NTU RGB+D and NTU RGB+D 120. Experiments show that the proposed method outperforms other Transformer-based methods on skeleton-based action recognition and achieves state-of-the-art performance.
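As a rough illustration of the cross-scale fusion idea described in the abstract (a minimal sketch, not the authors' implementation), the following PyTorch snippet fuses a joint-scale token sequence with a part-scale token sequence using cross-attention. All names, tensor shapes, and dimensions here are assumptions for illustration only.

import torch
import torch.nn as nn

class CrossScaleFusion(nn.Module):
    """Hypothetical cross-attention fusion between two skeleton scales."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        # Queries from one scale attend to keys/values from the other scale.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, joint_tokens, part_tokens):
        # joint_tokens: (B, J, C) joint-scale features
        # part_tokens:  (B, P, C) body-part-scale features
        fused, _ = self.cross_attn(query=joint_tokens,
                                   key=part_tokens,
                                   value=part_tokens)
        # Residual connection plus layer norm, as is typical in Transformers.
        return self.norm(joint_tokens + fused)

# Example: 25 joints fused with 5 assumed body-part tokens, channel dim 64.
x_joint = torch.randn(2, 25, 64)
x_part = torch.randn(2, 5, 64)
out = CrossScaleFusion()(x_joint, x_part)  # shape (2, 25, 64)

The same pattern would apply along the temporal axis, with tokens drawn from temporal windows of different lengths instead of joints and parts.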
Publisher
Research Square Platform LLC