Affiliation:
1. College of Computer Science and Software Engineering, Hohai University, Nanjing 211100, China
2. College of Information Science and Technology and College of Artificial Intelligence, Nanjing Forestry University, Nanjing 210037, China
Abstract
Human action recognition underpins artificial intelligence devices and services that focus on human activities. The field has advanced with the introduction of 3D point clouds captured by depth cameras or radar. However, human behavior is intricate, and the resulting point clouds are vast, disordered, and complex, posing challenges for 3D action recognition. To address these problems, we propose a Symmetric Fine-coarse Neural Network (SFCNet) that simultaneously analyzes both the overall appearance and the fine details of human actions. First, the point cloud sequences are transformed and voxelized into structured 3D voxel sets. These sets are then augmented with an interval-frequency descriptor to produce 6D features that capture spatiotemporal dynamics. By thresholding voxel space occupancy, we effectively identify the key body parts. All voxels with 6D features are then directed to a global coarse stream, while the voxels within the key parts are routed to a local fine stream. These two streams extract global appearance features and features of critical body parts using symmetric PointNet++ backbones. Finally, attention-based feature fusion adaptively captures more discriminative motion patterns. Experiments on the public benchmark datasets NTU RGB+D 60 and NTU RGB+D 120 validate the effectiveness and superiority of SFCNet for 3D action recognition.
Funder
Postgraduate Research & Practice Innovation Program of Jiangsu Province
Fundamental Research Funds for the Central Universities
Key Research and Development Program of China
Key Research and Development Program of China, Yunnan Province
14th Five-Year Plan for Educational Science of Jiangsu Province
Jiangsu Higher Education Reform Research Project