Abstract
Depth information captured by affordable depth sensors suffers from low spatial resolution, which limits potential applications. Several methods based on convolutional neural networks have recently been proposed for guided super-resolution of depth maps to overcome this limitation. In a guided super-resolution scheme, high-resolution depth maps are inferred from low-resolution ones with the additional guidance of a corresponding high-resolution intensity image. However, these methods remain prone to texture-copying artifacts caused by improper guidance from the intensity image. We propose a multi-scale residual deep network for depth map super-resolution, in which a cascaded transformer module incorporates high-resolution structural information from the intensity image into the depth upsampling process. The proposed cascaded transformer module achieves linear complexity in image resolution, making it applicable to high-resolution images. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art techniques for guided depth super-resolution.
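The abstract does not specify how the transformer attains linear complexity in image resolution. One common way to achieve this is kernelized linear attention, which replaces the softmax with a positive feature map so the key–value product can be computed once and reused for every query; the sketch below is an illustrative assumption, not the paper's actual module, and all names (`linear_attention`, `phi`) are hypothetical.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """O(n) attention sketch: cost is linear in the token count n
    (e.g. the number of pixels), versus O(n^2) for softmax attention."""
    # Positive feature map phi(x) = elu(x) + 1, a common choice.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qp, Kp = phi(Q), phi(K)        # (n, d) each
    kv = Kp.T @ V                  # (d, d_v), computed once for all queries
    z = Qp @ Kp.sum(axis=0)        # (n,) per-query normalizer
    return (Qp @ kv) / (z[:, None] + eps)

# Toy input: n tokens of dimension d (in guided SR, tokens could be
# flattened depth/intensity features -- an assumption for illustration).
rng = np.random.default_rng(0)
n, d = 64, 8
Q, K, V = rng.standard_normal((3, n, d))
out = linear_attention(Q, K, V)    # shape (64, 8)
```

Because `kv` and the normalizer are shared across queries, memory and compute grow linearly with the number of pixels, which is what makes such modules usable at high resolution.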
Cited by 2 articles.