Author:
Hu Zhengping, Mao Jianzeng, Yao Jianxin, Bi Shuai
Abstract
Modern action recognition techniques frequently employ two networks: a spatial stream, which takes RGB frames as input, and a temporal stream, which takes optical flow as input. Recent research uses 3D convolutional neural networks that apply spatiotemporal filters on both streams. Although combining flow with RGB improves performance, accurate optical flow computation is expensive and adds latency to action recognition. In this study, we present a method for training a 3D CNN on RGB frames that mimics the motion stream and therefore requires no flow computation at test time. First, in contrast to the SE block, we propose a channel excitation (CE) module. Experiments show that the CE module improves the feature extraction capability of a 3D network and outperforms the SE block. Second, for action recognition training, we adopt a linear combination of a knowledge-distillation loss and the standard cross-entropy loss to effectively leverage both appearance and motion information. We call the stream trained with this combined loss the Intensified Motion RGB Stream (IMRS). IMRS surpasses either RGB or flow as a single stream; on HMDB51, for example, IMRS achieves 73.5% accuracy, while the RGB and flow streams achieve 65.6% and 69.1%, respectively. Extensive experiments confirm the effectiveness of the proposed method, and comparisons with other models show that it is competitive in action recognition.
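The abstract describes the training objective as a linear combination of a knowledge-distillation term and the standard cross-entropy loss. A minimal sketch of that combination is below; the function names, the MSE form of the distillation term (matching RGB-stream features to pre-computed flow-stream features, as in MARS-style training), and the weight `alpha` are illustrative assumptions, not the paper's exact definitions.

```python
import math

def cross_entropy(probs, label):
    """Standard cross-entropy for one sample: -log p(true class)."""
    return -math.log(probs[label])

def distill_mse(feat_rgb, feat_flow):
    """Distillation term: mean squared error between the RGB stream's
    features and the (frozen) flow stream's features."""
    return sum((a - b) ** 2 for a, b in zip(feat_rgb, feat_flow)) / len(feat_rgb)

def imrs_loss(probs, label, feat_rgb, feat_flow, alpha=0.5):
    """Linear combination of the two terms: L = CE + alpha * distillation.
    alpha is a hypothetical trade-off weight."""
    return cross_entropy(probs, label) + alpha * distill_mse(feat_rgb, feat_flow)

# Toy example: one 3-class prediction with 2-dimensional features.
loss = imrs_loss([0.7, 0.2, 0.1], 0, [1.0, 2.0], [1.0, 2.5], alpha=0.5)
```

At test time only the RGB branch is evaluated, so the distillation term (and hence the flow computation) is needed only during training.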
Funder
National Natural Science Foundation of China
Subject
Artificial Intelligence, Biomedical Engineering
References: 45 articles.
1. “Quo vadis, action recognition? A new model and the kinetics dataset,”;Carreira,2017
2. “Two-stream video classification with cross-modality attention,”;Chi,2019
3. “MARS: motion-augmented RGB stream for action recognition,”;Crasto,2019
4. Forecasting action through contact representations from first-person video;Dessalene;IEEE Transactions on Pattern Analysis and Machine Intelligence,2021
5. “Spatio-temporal channel correlation networks for action classification,”;Diba,2018
Cited by: 1 article.
1. DILS: depth incremental learning strategy;Frontiers in Neurorobotics;2024-01-08