A Pose Estimation Algorithm for Multimodal Data Fusion
Authors:
Chen Ning, Wu Shaopeng, Chen Yupeng, Wang Zhanghua, Zhang Ziqian
Abstract
Previous pose detection systems perform poorly under severe occlusion or uneven illumination, so this paper addresses pose estimation through multimodal information fusion. The main contribution is a multimodal data fusion pose estimation algorithm for complex scenes with low-texture targets and poor lighting conditions. The network takes images and point clouds as input, extracts local color and spatial features of the target object with improved DenseNet and PointNet++ networks, and combines them with an iterative pose refinement network to achieve end-to-end pose estimation. The algorithm attains strong pose estimation accuracy on two benchmark datasets, LineMOD (97.8%) and YCB-Video (95.3%). It can recover accurate poses of target objects in complex scenes, providing accurate, real-time, and robust relative poses for tracking moving objects and for wave compensation.
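To make the fusion idea in the abstract concrete, the following is a minimal sketch, assuming a PyTorch implementation: a DenseNet-style image branch and a simplified per-point encoder (a stand-in for the paper's improved PointNet++) produce features that are concatenated per point and regressed to a quaternion-plus-translation pose. All class names (ColorBranch, PointBranch, FusionPoseNet), dimensions, and the pose head are illustrative assumptions, not the authors' actual network.

```python
# Sketch of dense color/geometry feature fusion for pose regression (assumed
# structure, not the paper's exact architecture).
import torch
import torch.nn as nn
import torchvision.models as models


class ColorBranch(nn.Module):
    """Per-pixel color features from a DenseNet backbone (stand-in for the
    paper's improved DenseNet)."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.backbone = models.densenet121(weights=None).features
        self.reduce = nn.Conv2d(1024, out_dim, kernel_size=1)
        self.up = nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False)

    def forward(self, img):                      # img: (B, 3, H, W)
        feat = self.reduce(self.backbone(img))   # (B, C, H/32, W/32)
        return self.up(feat)                     # (B, C, H, W)


class PointBranch(nn.Module):
    """Simplified per-point geometric encoder (stand-in for PointNet++)."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, out_dim, 1), nn.ReLU(),
        )

    def forward(self, pts):                      # pts: (B, 3, N)
        return self.mlp(pts)                     # (B, C, N)


class FusionPoseNet(nn.Module):
    """Concatenates color and geometric features per point, then regresses a
    quaternion + translation pose."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.color = ColorBranch(feat_dim)
        self.point = PointBranch(feat_dim)
        self.head = nn.Sequential(
            nn.Conv1d(2 * feat_dim, 256, 1), nn.ReLU(),
            nn.Conv1d(256, 7, 1),                # 4 quaternion + 3 translation
        )

    def forward(self, img, pts, pix_idx):
        # pix_idx: (B, N) flattened pixel index of each 3-D point in the image
        cmap = self.color(img).flatten(2)        # (B, C, H*W)
        cfeat = torch.gather(
            cmap, 2, pix_idx.unsqueeze(1).expand(-1, cmap.size(1), -1))
        gfeat = self.point(pts)                  # (B, C, N)
        fused = torch.cat([cfeat, gfeat], dim=1) # (B, 2C, N)
        out = self.head(fused)                   # (B, 7, N)
        quat = nn.functional.normalize(out[:, :4].mean(-1), dim=1)
        trans = out[:, 4:].mean(-1)
        return quat, trans                       # (B, 4), (B, 3)
```

In the paper's end-to-end setting, the regressed pose would then be refined by the iterative pose network described in the abstract; the averaging over points here is only a placeholder for that refinement stage.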
Funder
Fujian Province Natural Science Foundation
Jimei University National Natural Science Foundation Incubation Program
Publisher
International Information and Engineering Technology Association
Subject
Electrical and Electronic Engineering