Affiliation:
1. University of Pittsburgh, Pittsburgh, USA
2. Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China and College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
3. Zhejiang University, Hangzhou, China
Abstract
With the development of embedded systems and deep learning, it has become feasible to combine the two to offer a variety of convenient human-centered services, many of which depend on high-quality (HQ) video. However, due to limited video transmission bandwidth and unavoidable noise, the visual quality of frames captured by an edge camera may degrade significantly, harming overall video and service quality. To counter this degradation, video quality enhancement (QE), which aims to recover HQ videos from their distorted low-quality (LQ) counterparts, has attracted increasing attention in recent years. The key challenge for video QE lies in how to effectively aggregate complementary information from multiple frames (i.e., temporal fusion). To handle the diverse motion present in videos, existing methods commonly apply motion compensation before temporal fusion. However, the motion field estimated from a distorted LQ video tends to be inaccurate and unreliable, resulting in ineffective fusion and restoration. In addition, motion estimation for consecutive frames is generally conducted in a pairwise manner, which leads to expensive and inefficient computation. In this article, we propose a fast yet effective temporal fusion scheme for video QE that incorporates a novel Spatio-Temporal Deformable Convolution (STDC) to simultaneously compensate motion and aggregate temporal information. Specifically, the proposed scheme takes a target frame along with its adjacent reference frames as input and jointly estimates an offset field that deforms the spatio-temporal sampling positions of the convolution. As a result, complementary information from multiple frames is fused within the STDC operation in a single forward pass. Extensive experimental results on three benchmark datasets show that our method performs favorably against state-of-the-art approaches in terms of both accuracy and efficiency.
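To make the fusion idea concrete, below is a minimal sketch of a spatio-temporal deformable fusion layer in PyTorch, not the authors' released code. It assumes torchvision's deform_conv2d operator; the offset head, channel sizes, frame radius, and kernel size are illustrative choices rather than the paper's exact configuration. The key point it demonstrates is that a single deformable convolution over channel-stacked frames can both deform per-frame sampling positions (motion compensation) and aggregate across frames (temporal fusion) in one forward pass.

```python
# Minimal sketch of STDC-style temporal fusion (illustrative, not the paper's code).
# Assumes: PyTorch with torchvision >= 0.9 providing ops.deform_conv2d.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class STDCFusion(nn.Module):
    """Fuse 2*radius+1 frames with one deformable convolution.

    A lightweight conv head predicts, per input frame, a 2D offset for
    every kernel tap; deform_conv2d then samples all frames at the
    deformed positions and aggregates them in a single forward pass.
    """

    def __init__(self, channels=64, radius=1, kernel_size=3, out_channels=64):
        super().__init__()
        self.t = 2 * radius + 1                 # number of input frames
        self.k = kernel_size
        in_ch = self.t * channels               # frames stacked on channel dim
        # One (dy, dx) pair per frame (offset group) per kernel tap.
        self.offset_head = nn.Conv2d(
            in_ch, 2 * self.t * self.k * self.k, kernel_size=3, padding=1)
        # Shared fusion kernel applied over all frames' deformed samples.
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_ch, self.k, self.k) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, frames):
        # frames: (N, T, C, H, W); the center frame is the restoration target.
        n, t, c, h, w = frames.shape
        x = frames.reshape(n, t * c, h, w)
        offsets = self.offset_head(x)            # per-frame spatial offsets
        # One deformable conv compensates motion and fuses frames jointly.
        return deform_conv2d(
            x, offsets, self.weight, self.bias, padding=self.k // 2)


if __name__ == "__main__":
    fusion = STDCFusion(channels=64, radius=1)
    clip = torch.randn(2, 3, 64, 32, 32)         # N=2, T=3 frames of 64 channels
    print(fusion(clip).shape)                    # torch.Size([2, 64, 32, 32])
```

Because the offsets are predicted jointly from all frames, motion compensation does not rely on a separately estimated (and potentially unreliable) pairwise motion field, which is the efficiency and robustness argument the abstract makes.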
Funder
Zhejiang Provincial Key R&D
Publisher
Association for Computing Machinery (ACM)