Deep Learning-Based 6-DoF Object Pose Estimation Considering Synthetic Dataset
Author:
Zheng Tianyu1, Zhang Chunyan1, Zhang Shengwen1, Wang Yanyan1
Affiliation:
1. School of Mechanical Engineering, Jiangsu University of Science and Technology, Zhenjiang 212100, China
Abstract
Because 6-Degree-of-Freedom (6-DoF) object pose estimation datasets are difficult to generate, and because domain gaps exist between synthetic and real data, existing pose estimation methods struggle to improve accuracy and generalization. This paper proposes a methodology that combines higher-quality datasets with deep learning-based methods to narrow the domain gap between synthetic and real data and enhance pose estimation accuracy. A high-quality dataset is generated with BlenderProc and processed with bilateral filtering to reduce the gap. A novel attention-based Mask Region-based Convolutional Neural Network (Mask R-CNN) is proposed to reduce the computation cost and improve detection accuracy. An improved feature pyramid network (iFPN) is obtained by adding a bottom-up path that propagates the localization features of the lower layers. In addition, a convolutional block attention module–convolutional denoising autoencoder (CBAM–CDAE) network is proposed, introducing channel and spatial attention mechanisms to improve the autoencoder's ability to extract image features. Finally, an accurate 6-DoF object pose is obtained through pose refinement. The proposed approach is compared with other models on the T-LESS and LineMOD datasets, and the results demonstrate that it outperforms the other estimation models.
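The abstract's dataset step applies bilateral filtering to BlenderProc renders so that rendering noise is smoothed while object edges are preserved, reducing the appearance gap to real sensor images. The sketch below is a minimal brute-force bilateral filter in NumPy for a single-channel float image; the window radius and the two Gaussian bandwidths (`sigma_s` for space, `sigma_r` for intensity range) are illustrative defaults, not the paper's settings.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a 2-D float image in [0, 1].

    Each output pixel is a weighted mean over a (2*radius+1)^2 window,
    where the weight combines a spatial Gaussian with a range Gaussian
    that down-weights neighbors whose intensity differs from the center,
    so edges are preserved while flat regions are smoothed.
    """
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    # Spatial kernel is fixed for all pixels, so precompute it once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2.0 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: near-zero weight across strong intensity edges.
            rng = np.exp(-((window - img[i, j]) ** 2) / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```

With a small `sigma_r`, a sharp step edge passes through almost unchanged (the range kernel suppresses cross-edge neighbors), whereas a plain Gaussian blur would smear it; that edge-preserving behavior is what makes the filter suitable for cleaning synthetic renders without destroying object contours.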
Funder
Postgraduate Research & Practice Innovation Program of Jiangsu Province
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry