6D Object Pose Estimation Based on Cross-Modality Feature Fusion
Authors:
Jiang Meng1, Zhang Liming1, Wang Xiaohua1, Li Shuang1, Jiao Yijie1
Affiliation:
1. School of Electronic Information, Xi’an Polytechnic University, Xi’an 710048, China
Abstract
6D pose estimation from RGBD images plays a pivotal role in robotics applications. At present, after obtaining the RGB and depth modality information, most methods directly concatenate the two without considering information interactions, which leads to low 6D pose estimation accuracy under occlusion and illumination changes. To solve this problem, we propose a new method for fusing RGB and depth modality features. Our method effectively uses the individual information contained within each RGBD image modality and fully integrates cross-modality interactive information. Specifically, we transform depth images into point clouds and apply the PointNet++ network to extract point cloud features; RGB image features are extracted by CNNs, with attention mechanisms added to obtain context information within the single modality. We then propose a cross-modality feature fusion module (CFFM) to obtain cross-modality information, and introduce a feature contribution weight training module (CWTM) to allocate the contributions of the two modalities to the target task. Finally, the 6D object pose is estimated from the final cross-modality fused feature. By enabling information interactions within and between modalities, the integration of the two modalities is maximized; moreover, weighting the contribution of each modality enhances the overall robustness of the model. Our experiments indicate that our method reaches an average accuracy of 96.9% on the LineMOD dataset using the ADD(-S) metric, and on the YCB-Video dataset reaches 94.7% using the ADD-S AUC metric and 96.5% using the ADD-S score (<2 cm) metric.
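The fusion pipeline the abstract describes (cross-modality interaction followed by learned per-modality contribution weights) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names are hypothetical, random vectors stand in for CNN/PointNet++ features, scaled dot-product attention stands in for the CFFM, and a softmax over two scalars stands in for the CWTM.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modality_fuse(rgb_feat, pc_feat, w_rgb=0.0, w_pc=0.0):
    """Toy stand-in for CFFM + CWTM (names hypothetical).

    rgb_feat, pc_feat: (N, C) per-point features from the two modalities.
    w_rgb, w_pc: scalar contribution logits (learned in the real model).
    """
    d = np.sqrt(rgb_feat.shape[1])
    # Cross-modality interaction: each modality attends to the other
    attn_r2p = softmax(rgb_feat @ pc_feat.T / d, axis=-1)
    attn_p2r = softmax(pc_feat @ rgb_feat.T / d, axis=-1)
    rgb_enh = rgb_feat + attn_r2p @ pc_feat   # RGB enriched with geometry
    pc_enh = pc_feat + attn_p2r @ rgb_feat    # points enriched with appearance
    # Contribution weighting: softmax over the two modality logits
    a = softmax(np.array([w_rgb, w_pc]))
    return a[0] * rgb_enh + a[1] * pc_enh

rgb = np.random.rand(8, 32)  # 8 sampled points, 32-dim appearance features
pc = np.random.rand(8, 32)   # matching 32-dim geometry features
fused = cross_modality_fuse(rgb, pc)
print(fused.shape)  # (8, 32)
```

With equal logits the two enhanced feature maps are averaged; during training the contribution logits would shift the balance toward whichever modality is more reliable (e.g. toward depth under strong illumination changes).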
Funder
Natural Science Basic Research Program of Shaanxi; Key Research and Development Plan of Shaanxi Province, China; Graduate Scientific Innovation Fund for Xi’an Polytechnic University; Key Research and Development Program of Shaanxi Province; Xi’an Beilin District Science and Technology Project
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry