Multi-Scale Aggregation Stereo Matching Network Based on Dense Grouping Atrous Convolution
Published: 2023-06-11
Issue: 12
Volume: 13
Page: 7033
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Author:
Zou Qijie 1, Zhang Jie 1, Chen Shuang 1, Gao Bing 1, Qin Jing 1, Dong Aotian 1
Affiliation:
1. Information Engineering Faculty, Dalian University, Dalian 116622, China
Abstract
The key to image depth estimation is to accurately find corresponding points between the left and right images. A binocular camera can estimate depth directly from the left and right views, which avoids the dependence on target recognition accuracy that limits monocular depth estimation. However, binocular stereo matching struggles to accurately segment objects and find matching points in the ill-posed areas of the left and right images (weak texture, deformation, object edges, etc.). In semantic segmentation, atrous convolution is used to resolve the trade-off between receptive field size and segmentation accuracy, and this research focused on balancing the impact of the holes it introduces on the segmentation task. In addition, to address the issue that matching points in ill-posed regions of the left and right images are affected by noise, we used 3D convolution to aggregate the cost volume for better accuracy; however, 3D convolution alone is prone to mismatching in these ill-posed areas. To tackle the problems above, we proposed a dense grouping atrous convolution spatial pyramid pooling (DenseGASPP) method, whose key feature is the dense connections between the grouped atrous convolutions, which fully integrate feature information. This design expands the receptive field while balancing the effect of the holes on the segmentation task. Moreover, we introduced multi-scale cost aggregation, which repeatedly exchanges information between cost volumes of different scales to obtain rich contextual information and reduce mismatching. To evaluate the performance of our method, we conducted several groups of comparative experiments against typical algorithms on the Scene Flow and KITTI 2015 benchmark datasets. The results show that our model achieves better performance, reducing the EPE from 1.09 to 0.67 and improving the robustness of the binocular depth estimation algorithm to mismatching in ill-posed regions.
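The DenseGASPP block described in the abstract can be pictured as a stack of grouped atrous convolutions in which each branch receives the concatenation of the block input and all earlier branch outputs. The following is a minimal PyTorch sketch of that idea only; the module name, channel counts, dilation rates, and group number are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch of a DenseGASPP-style block (assumed PyTorch implementation).
    # All hyperparameters below (channels, dilations, groups) are illustrative guesses.
    import torch
    import torch.nn as nn

    class DenseGASPP(nn.Module):
        """Grouped atrous convolutions with dense connections between branches."""
        def __init__(self, in_ch=128, branch_ch=32, dilations=(3, 6, 12, 18), groups=4):
            super().__init__()
            self.branches = nn.ModuleList()
            ch = in_ch
            for d in dilations:
                self.branches.append(nn.Sequential(
                    nn.Conv2d(ch, branch_ch, kernel_size=3, padding=d,
                              dilation=d, groups=groups, bias=False),
                    nn.BatchNorm2d(branch_ch),
                    nn.ReLU(inplace=True),
                ))
                ch += branch_ch  # dense connection: the next branch sees all previous outputs
            # 1x1 convolution fuses the input and every branch output back to in_ch channels
            self.fuse = nn.Conv2d(ch, in_ch, kernel_size=1, bias=False)

        def forward(self, x):
            feats = [x]
            for branch in self.branches:
                feats.append(branch(torch.cat(feats, dim=1)))
            return self.fuse(torch.cat(feats, dim=1))

For example, DenseGASPP()(torch.randn(1, 128, 64, 128)) returns a tensor of the same shape, in which features from several dilation rates have been densely mixed before the cost volume is built.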
Funder
National Natural Science Foundation of China Liaoning Province
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science