ABDGAN: Arbitrary Time Blur Decomposition Using Critic-Guided TripleGAN
Author:
Lee Tae Bok 1, Heo Yong Seok 1,2
Affiliation:
1. Department of Artificial Intelligence, Ajou University, Suwon 16499, Republic of Korea
2. Department of Electrical and Computer Engineering, Ajou University, Suwon 16499, Republic of Korea
Abstract
Recent studies have proposed methods for extracting latent sharp frames from a single blurred image. However, these methods still struggle to restore satisfactory images. In addition, most existing methods can only decompose a blurred image into sharp frames at a fixed frame rate. To address these problems, we present an Arbitrary Time Blur Decomposition Triple Generative Adversarial Network (ABDGAN) that restores sharp frames at flexible frame rates. Our framework plays a min–max game among a generator, a discriminator, and a time-code predictor. The generator serves as a time-conditional deblurring network, while the discriminator and the time-code predictor provide feedback to the generator on producing images that are realistic and sharp for the given time code. To provide adequate feedback to the generator, we propose a critic-guided (CG) loss based on the collaboration of the discriminator and the time-code predictor. We also propose a pairwise order-consistency (POC) loss to ensure that each pixel in a predicted image consistently corresponds to the same ground-truth frame. Extensive experiments show that our method outperforms previously reported methods in both qualitative and quantitative evaluations. Compared to the best competing method, the proposed ABDGAN improves PSNR, SSIM, and LPIPS on the GoPro test set by 16.67%, 9.16%, and 36.61%, respectively. On the B-Aist++ test set, our method improves PSNR, SSIM, and LPIPS by 6.99%, 2.38%, and 17.05%, respectively, over the best competing method.
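To make the three-player setup described in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a time-conditional generator, a discriminator, and a time-code predictor trained jointly. The network bodies, loss weights, and the simple reconstruction term are placeholders; the paper's critic-guided (CG) and pairwise order-consistency (POC) losses are not reproduced here.

```python
import torch
import torch.nn as nn

class TimeConditionalGenerator(nn.Module):
    """Deblurring network conditioned on a scalar time code t in [0, 1]."""
    def __init__(self, ch=32):
        super().__init__()
        # Input: blurred RGB image concatenated with a broadcast time-code channel.
        self.net = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, blurred, t):
        # t selects which latent sharp frame of the exposure to reconstruct.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *blurred.shape[-2:])
        return self.net(torch.cat([blurred, t_map], dim=1))

class Discriminator(nn.Module):
    """Scores how realistic and sharp an image looks."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # scalar realism score per image

class TimeCodePredictor(nn.Module):
    """Regresses the normalized time code of the frame it is shown."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x).mean(dim=(1, 2, 3)))

# Illustrative generator update on random tensors (shapes and weights are arbitrary).
G, D, T = TimeConditionalGenerator(), Discriminator(), TimeCodePredictor()
blurred = torch.rand(2, 3, 64, 64)
sharp_gt = torch.rand(2, 3, 64, 64)          # ground-truth sharp frame at time t
t = torch.rand(2)                             # arbitrary time codes in [0, 1]

fake = G(blurred, t)
adv_loss = -D(fake).mean()                    # push generated frames toward "realistic"
time_loss = ((T(fake) - t) ** 2).mean()       # feedback that the output matches the time code
recon_loss = (fake - sharp_gt).abs().mean()   # placeholder pixel-wise reconstruction term
g_loss = recon_loss + 0.01 * adv_loss + 0.01 * time_loss
g_loss.backward()
```

Because the time code is a continuous input rather than a fixed frame index, the same generator can be queried at any t, which is what allows decomposition at arbitrary frame rates.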
Funder
National Research Foundation of Korea