A Short Video Classification Framework Based on Cross-Modal Fusion
Author:
Pang Nuo 1, Guo Songlin 2, Yan Ming 2, Chan Chien Aun 3,4
Affiliation:
1. School of Design, Dalian University of Science and Technology, Dalian 116052, China
2. School of Information and Communications Engineering, Communication University of China, Beijing 100024, China
3. Insta-Wireless, Notting Hill, VIC 3168, Australia
4. Department of Electrical and Electronic Engineering, The University of Melbourne, Parkville, VIC 3010, Australia
Abstract
The explosive growth of online short videos has created great challenges for the efficient management of video content classification, retrieval, and recommendation. Video features for video management can be extracted from video image frames by various algorithms, and they have proven effective for video classification in sensor systems. However, frame-by-frame processing of video image frames demands enormous computing power, and classification algorithms based on a single modality of video features cannot meet the accuracy requirements of specific scenarios. To address these concerns, we introduce a short video classification framework based on cross-modal fusion for visual sensor systems, which jointly utilizes video features and text features to classify short videos and avoids processing a large number of image frames during classification. First, the image space is extended to three-dimensional space–time by a self-attention mechanism, and a series of patches is extracted from each image frame; each patch is linearly mapped into the embedding layer of the TimeSformer network and augmented with positional information to extract video features. Second, the text features of subtitles are extracted with the Bidirectional Encoder Representations from Transformers (BERT) pre-trained model. Finally, cross-modal fusion is performed on the extracted video and text features, improving the accuracy of short video classification tasks. Our experimental results show that the proposed classification framework substantially outperforms baseline video classification methods and can be applied to video classification in sensor systems.
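To make the video branch concrete, below is a minimal PyTorch sketch of TimeSformer-style patch embedding: each frame is split into fixed-size patches, each patch is linearly mapped into the embedding space, and learned spatial and temporal position embeddings are added. The patch size (16), embedding width (768), and clip length (8 frames) are illustrative assumptions, not values reported in the paper.

```python
# Sketch of the video branch: extract patches, linearly embed them, and
# add positional information, in the spirit of TimeSformer.
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3,
                 embed_dim=768, num_frames=8):  # all sizes are assumptions
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution performs "extract patches and linearly map"
        # in a single step.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        # Learned spatial and temporal position embeddings.
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches, embed_dim))
        self.time_embed = nn.Parameter(torch.zeros(1, num_frames, embed_dim))

    def forward(self, video):                    # video: (B, T, C, H, W)
        b, t, _, _, _ = video.shape
        x = self.proj(video.flatten(0, 1))       # (B*T, D, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)         # (B*T, N, D)
        x = x + self.pos_embed                   # spatial positions
        x = x.reshape(b, t, self.num_patches, -1)
        x = x + self.time_embed[:, :, None, :]   # temporal positions
        return x.flatten(1, 2)                   # (B, T*N, D) token sequence

# Usage: tokens = PatchEmbedding()(torch.randn(2, 8, 3, 224, 224))
```

The resulting token sequence would then be fed to the space–time self-attention layers of the TimeSformer backbone to produce the video feature.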
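The text branch can be sketched with the Hugging Face `transformers` library: subtitles are tokenized, passed through a pretrained BERT encoder, and the [CLS] vector is taken as the sentence-level text feature. The checkpoint name `bert-base-chinese` is an assumption; the abstract only states that a BERT pre-trained model is used.

```python
# Sketch of the text branch: BERT features for subtitle strings.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")  # assumed checkpoint
bert = BertModel.from_pretrained("bert-base-chinese")

def subtitle_features(subtitles):                # list[str] -> (B, 768)
    batch = tokenizer(subtitles, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    with torch.no_grad():
        out = bert(**batch)
    return out.last_hidden_state[:, 0]           # [CLS] token embedding
```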
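The abstract does not specify the fusion operator, so the sketch below shows one plausible baseline rather than the paper's method: project the pooled video feature and the text feature to a common width, concatenate them, and classify. All layer sizes and the class count are hypothetical.

```python
# Illustrative cross-modal fusion head (concatenate-and-classify baseline).
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, video_dim=768, text_dim=768,
                 hidden=512, num_classes=10):    # hypothetical sizes
        super().__init__()
        self.video_proj = nn.Linear(video_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        self.head = nn.Sequential(nn.ReLU(),
                                  nn.Linear(2 * hidden, num_classes))

    def forward(self, video_feat, text_feat):    # (B, Dv), (B, Dt)
        fused = torch.cat([self.video_proj(video_feat),
                           self.text_proj(text_feat)], dim=-1)
        return self.head(fused)                  # (B, num_classes) logits
```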
Funder
Fundamental Research Funds for the Central Universities
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
2 articles.