Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks
Authors:
Xu Dan (1), Fan Xiaopeng (1,2), Gao Wen (2,3)
Affiliations:
1. School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
2. Pengcheng Laboratory, Shenzhen 518052, China
3. School of Electronic Engineering and Computer Science, Peking University, Beijing 100871, China
Abstract
Color images have long been used as important supplementary information to guide depth map super-resolution. However, how to quantitatively measure the guiding effect of color images on depth maps has been a largely neglected issue. To address this, and inspired by the excellent results recently achieved by generative adversarial networks in color image super-resolution, we propose a depth map super-resolution framework based on generative adversarial networks with multiscale attention fusion. Fusing color and depth features at the same scale through a hierarchical fusion attention module effectively measures the guiding effect of the color image on the depth map, while fusing the joint color–depth features across scales balances the influence of features at different scales on the super-resolved depth map. A generator loss composed of content loss, adversarial loss, and edge loss helps restore sharper edges in the depth map. Experimental results on several types of benchmark depth map datasets show that the proposed multiscale attention fusion framework achieves significant subjective and objective improvements over state-of-the-art algorithms, verifying the validity and generalization ability of the model.
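The two ideas the abstract describes — gating the color guidance against the depth features, and combining content, adversarial, and edge terms in the generator loss — can be illustrated with a minimal NumPy sketch. Everything here (the sigmoid gate, the finite-difference edge operator, the loss weights) is an illustrative assumption, not the paper's actual module or hyperparameters:

```python
import numpy as np

def attention_fuse(color_feat, depth_feat):
    """Fuse same-scale color and depth features with a per-pixel gate
    (hypothetical stand-in for the hierarchical fusion attention module).
    The gate in [0, 1] measures how strongly the color feature should
    guide the depth feature at each location."""
    gate = 1.0 / (1.0 + np.exp(-(color_feat * depth_feat)))  # sigmoid
    return gate * color_feat + (1.0 - gate) * depth_feat

def edge_map(x):
    """Finite-difference edge magnitude, a simple proxy for the edge
    operator inside the edge loss."""
    gx = np.abs(np.diff(x, axis=1))[:-1, :]  # horizontal gradient, (H-1, W-1)
    gy = np.abs(np.diff(x, axis=0))[:, :-1]  # vertical gradient,   (H-1, W-1)
    return gx + gy

def generator_loss(sr, hr, adv_score, w_adv=1e-3, w_edge=1e-1):
    """Content + adversarial + edge terms, as the abstract describes.
    sr: super-resolved depth map, hr: ground-truth depth map,
    adv_score: discriminator output D(sr) in (0, 1).
    The weights w_adv and w_edge are illustrative assumptions."""
    content = np.mean(np.abs(sr - hr))                       # L1 content loss
    adversarial = -np.log(adv_score + 1e-8)                  # generator wants D(sr) -> 1
    edge = np.mean(np.abs(edge_map(sr) - edge_map(hr)))      # edge consistency
    return content + w_adv * adversarial + w_edge * edge
```

With a perfect reconstruction (sr == hr) and a fully fooled discriminator (adv_score near 1), all three terms vanish, so the loss approaches zero; the weights then trade off fidelity against sharper, more realistic edges.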
Funders:
National High Technology Research and Development Program of China
National Natural Science Foundation of China
Subject
General Physics and Astronomy
Cited by 1 article.