Affiliation:
1. School of Electronic Information, Dongguan Polytechnic, Dongguan, China
2. School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, China
Abstract
Recently, video summarization (VS) has emerged as one of the most effective tools for rapidly understanding video big data. Dictionary selection based on self-representation and sparse regularization is consistent with the requirements of VS, which aims to represent the original video with little reconstruction error using a small number of video frames. However, one crucial issue is that existing methods mainly use a single-view feature, which is not sufficient to capture the full pictorial detail and degrades the quality of the produced video summary. Although a few methods use more than one feature, they simply concatenate the features directly and thus do not exploit the relationships among different features. Considering the complementarity of shallow and deep features, multiview feature co-factorization based dictionary selection for VS is proposed in this paper, which uses the information shared by both view features for VS. Specifically, two view features are used to fully exploit the pictorial information of video frames; the common information of the two views is then extracted through coupled matrix factorization to perform dictionary selection for VS. Experiments have been carried out on two benchmark datasets, and the results demonstrate the effectiveness and superiority of the proposed method.
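The following is a minimal sketch, not the authors' implementation, of the pipeline the abstract describes: two feature views of the same video frames are coupled through a shared latent representation, and frames are then scored for the summary via row-sparse self-representation of that shared code. All names (X_shallow, X_deep, U1, U2, V, W), the rank k, the regularization weight lam, and the solvers (alternating least squares, proximal gradient with an L2,1 penalty) are illustrative assumptions.

import numpy as np

def coupled_factorization(X1, X2, k=32, n_iter=100, eps=1e-8):
    """Factorize X1 ~ U1 @ V and X2 ~ U2 @ V with a shared code V (k x n)."""
    d1, n = X1.shape
    d2, _ = X2.shape
    rng = np.random.default_rng(0)
    U1 = rng.standard_normal((d1, k))
    U2 = rng.standard_normal((d2, k))
    V = rng.standard_normal((k, n))
    for _ in range(n_iter):
        # Alternating least squares: update each factor with the others fixed.
        U1 = X1 @ V.T @ np.linalg.inv(V @ V.T + eps * np.eye(k))
        U2 = X2 @ V.T @ np.linalg.inv(V @ V.T + eps * np.eye(k))
        A = U1.T @ U1 + U2.T @ U2 + eps * np.eye(k)
        V = np.linalg.solve(A, U1.T @ X1 + U2.T @ X2)
    return V  # common representation shared by both views

def select_dictionary(V, lam=0.5, n_iter=200):
    """Row-sparse self-representation V ~ V @ W; rows of W with large norms
    mark the frames kept as the dictionary (proximal gradient, L2,1 penalty)."""
    n = V.shape[1]
    W = np.zeros((n, n))
    step = 1.0 / (np.linalg.norm(V, 2) ** 2 + 1e-8)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = V.T @ (V @ W - V)            # gradient of 0.5 * ||V - V W||_F^2
        W = W - step * grad
        # Proximal operator of lam * ||W||_{2,1}: shrink each row's norm.
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        W = W * np.maximum(0.0, 1.0 - step * lam / (norms + 1e-12))
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1]         # frame indices ranked by importance

# Toy usage: 200 frames with a 512-d shallow view and a 1024-d deep view.
rng = np.random.default_rng(1)
X_shallow = rng.standard_normal((512, 200))
X_deep = rng.standard_normal((1024, 200))
V = coupled_factorization(X_shallow, X_deep, k=32)
keyframes = select_dictionary(V, lam=0.5)[:10]
print("Selected frame indices:", keyframes)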
Funder
National Natural Science Foundation of China
Publisher
Institution of Engineering and Technology (IET)
Subject
Electrical and Electronic Engineering, Computer Vision and Pattern Recognition, Signal Processing, Software