Affiliation:
1. Harbin Institute of Technology, Weihai, China
2. Institute of Computing Technology, Chinese Academy of Sciences and Peng Cheng Laboratory, China
3. SmartMore, China
4. University of Chinese Academy of Sciences, China
5. University of Chinese Academy of Sciences, Institute of Computing Technology, Chinese Academy of Sciences and Peng Cheng Laboratory, China
Abstract
Recently, with the rapid development of deep learning and multimedia technology, intelligent urban computing has received increasing attention from both academia and industry. Unfortunately, most related technologies are black-box paradigms that lack interpretability. Among them, video event recognition is a fundamental technology. An event contains multiple concepts and their rich interactions, which can help us construct explainable event recognition methods. However, the concepts crucial for recognizing events exhibit various temporal patterns, and the relationship between events and the temporal characteristics of concepts has not been fully exploited. This poses great challenges for concept-based event categorization. To address these issues, we introduce the temporal concept receptive field, i.e., the length of the temporal window required to capture key concepts in concept-based event recognition methods. Accordingly, we propose the temporal dynamic convolution (TDC) to model the temporal concept receptive field dynamically according to different events. Its core idea is to combine the outputs of multiple convolution layers using coefficients learned from two complementary perspectives. These convolution layers have a variety of kernel sizes, providing temporal concept receptive fields of different lengths. Similarly, we propose the cross-domain temporal dynamic convolution (CrTDC), which exploits the rich relationships between different concepts. The learned coefficients help capture suitable temporal concept receptive field sizes and highlight crucial concepts, yielding accurate and complete concept representations for event analysis. Based on TDC and CrTDC, we introduce the temporal dynamic concept modeling network (TDCMN) for explainable video event recognition. We evaluate TDCMN on the large-scale and challenging datasets FCVID, ActivityNet, and CCV.
Experimental results show that TDCMN significantly improves the event recognition performance of concept-based methods, and the explainability of our method inspires us to construct more explainable models from the perspective of the temporal concept receptive field.
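The core idea of TDC described in the abstract — running several temporal convolution branches with different kernel sizes and mixing their outputs with learned coefficients — can be sketched as follows. This is an illustrative, pure-Python sketch under our own assumptions (moving averages stand in for learned convolution kernels, and the mixing logits are supplied rather than learned); the function names `temporal_conv` and `tdc` are hypothetical and not from the paper.

```python
import math

def softmax(xs):
    """Normalize logits into mixing coefficients that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def temporal_conv(seq, kernel_size):
    """One branch: a same-length moving average over concept scores,
    standing in for a temporal convolution with a fixed receptive field."""
    half = kernel_size // 2
    out = []
    for t in range(len(seq)):
        window = seq[max(0, t - half): t + half + 1]
        out.append(sum(window) / len(window))
    return out

def tdc(seq, kernel_sizes=(1, 3, 5), coeff_logits=(0.0, 0.0, 0.0)):
    """Combine branches of different kernel sizes with (here: given,
    in the paper: learned) coefficients, so the effective temporal
    concept receptive field adapts to the event."""
    coeffs = softmax(list(coeff_logits))
    branches = [temporal_conv(seq, k) for k in kernel_sizes]
    return [sum(c * b[t] for c, b in zip(coeffs, branches))
            for t in range(len(seq))]
```

For example, pushing all mixing weight onto the kernel-size-1 branch reduces `tdc` to the identity on the input sequence, while weight on larger kernels smooths the concept scores over longer temporal windows.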
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Hardware and Architecture