DANet: Temporal Action Localization with Double Attention
Published: 2023-06-15
Issue: 12
Volume: 13
Page: 7176
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Authors:
Sun Jianing 1, Wu Xuan 1, Xiao Yubin 1, Wu Chunguo 1, Liang Yanchun 2, Liang Yi 3, Wang Liupu 1, Zhou You 1
Affiliations:
1. Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, College of Computer Science and Technology, Jilin University, Changchun 130012, China
2. School of Computer Science, Zhuhai College of Science and Technology, Zhuhai 519041, China
3. College of Business and Administration, Jilin University, Changchun 130012, China
Abstract
Temporal action localization (TAL) aims to predict the categories of action instances in videos and to identify their start and end times. However, existing Transformer-based backbones focus only on global or local features, resulting in information loss. In addition, both global and local self-attention mechanisms tend to average embeddings, thereby weakening the preservation of critical features. To address these two problems, we propose two attention mechanisms, namely multi-headed local self-attention (MLSA) and max-average pooling attention (MA), to extract local and global features simultaneously. In MA, max-pooling selects the most critical information from local clip embeddings instead of averaging them, while average-pooling aggregates global features. MLSA is used to model local temporal context. Furthermore, to enhance collaboration between MA and MLSA, we propose the double attention block (DABlock), comprising MA and MLSA. Finally, we propose the double attention network (DANet), composed of DABlocks and other advanced blocks. To evaluate DANet’s performance, we conduct extensive experiments on the TAL task. Experimental results demonstrate that DANet outperforms other state-of-the-art models on all datasets, and ablation studies confirm the effectiveness of the proposed MLSA and MA. Compared with backbones based on convolution and on global Transformers, the DABlock consisting of MLSA and MA achieves superior performance, improving the overall average mAP by 8% and 0.5%, respectively.
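The abstract's core idea, max-pooling over local clip embeddings to keep salient features plus average-pooling for global context, can be sketched as follows. This is a minimal illustrative sketch only: the window size and the sigmoid-gated fusion are assumptions for demonstration, not the paper's exact MA formulation.

```python
import numpy as np

def max_avg_pooling_attention(x, window=3):
    """Illustrative sketch of a max-average pooling attention (MA) step.

    x: (T, D) array of clip embeddings. Max-pooling over a local temporal
    window keeps the most salient value per dimension (instead of averaging
    it away), while average-pooling over all T clips supplies global context.
    The sigmoid gate fusing the two streams is a hypothetical choice here.
    """
    T, _ = x.shape
    pad = window // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    # Local max-pooling: most critical value within each temporal window.
    local_max = np.stack([xp[t:t + window].max(axis=0) for t in range(T)])
    # Global average-pooling: one summary vector broadcast to every clip.
    global_avg = x.mean(axis=0, keepdims=True)
    # Assumed fusion: gate local features by their agreement with global context.
    gate = 1.0 / (1.0 + np.exp(-(local_max * global_avg)))
    return gate * local_max + (1.0 - gate) * global_avg
```

Note how the max-pooled branch preserves per-window peaks that a purely averaging attention would smooth out, which is the motivation the abstract gives for MA.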
Funder:
National Key Research and Development Program of China; Jilin Provincial Department of Science and Technology Project; National Natural Science Foundation of China; Guangdong Universities’ Innovation Team Project; Key Disciplines
Subject:
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
References (56 articles):
1. Huang, D.A., Ramanathan, V., Mahajan, D., Torresani, L., Paluri, M., Fei-Fei, L., and Niebles, J.C. (2018, June 18–23). What makes a video a video: Analyzing temporal information in video understanding models and datasets. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.
2. Wu, C.Y., Feichtenhofer, C., Fan, H., He, K., Krahenbuhl, P., and Girshick, R. (2019, June 15–20). Long-term feature banks for detailed video understanding. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.
3. Lin, J., Gan, C., and Han, S. (2019, October 27–November 2). TSM: Temporal shift module for efficient video understanding. Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, South Korea.
4. Lin, T., Zhao, X., Su, H., Wang, C., and Yang, M. (2018, September 8–14). BSN: Boundary sensitive network for temporal action proposal generation. Proceedings of the European Conference on Computer Vision, Munich, Germany.
5. Lin, C., Xu, C., Luo, D., Wang, Y., Tai, Y., Wang, C., Li, J., Huang, F., and Fu, Y. (2021, June 19–25). Learning salient boundary feature for anchor-free temporal action localization. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.