FusionVision: A Comprehensive Approach of 3D Object Reconstruction and Segmentation from RGB-D Cameras Using YOLO and Fast Segment Anything
Author:
El Ghazouali Safouane1ORCID, Mhirit Youssef2, Oukhrid Ali3, Michelucci Umberto1ORCID, Nouira Hichem4ORCID
Affiliation:
1. TOELT LLC, AI Lab, 8406 Winterthur, Switzerland 2. Independent Researcher, 75000 Paris, France 3. Independent Researcher, 2502 Biel/Bienne, Switzerland 4. LNE Laboratoire National de Métrologie et d'Essais, 75015 Paris, France
Abstract
In the realm of computer vision, integrating advanced techniques into the pre-processing of RGB-D camera inputs poses a significant challenge, given the inherent complexities arising from diverse environmental conditions and varying object appearances. This paper therefore introduces FusionVision, an end-to-end pipeline for the robust 3D segmentation of objects in RGB-D imagery. Traditional computer vision systems, designed primarily for RGB cameras, struggle to simultaneously capture precise object boundaries and achieve high-precision object detection on depth maps. To address this challenge, FusionVision adopts an integrated approach, merging state-of-the-art object detection with advanced instance segmentation. Integrating these components enables a unified analysis of the information obtained from both the color (RGB) and depth (D) channels, facilitating the extraction of comprehensive and accurate object information that benefits downstream tasks such as 6D object pose estimation, Simultaneous Localization and Mapping (SLAM), and accurate 3D dataset extraction. The proposed FusionVision pipeline employs YOLO to identify objects within the RGB image domain. Subsequently, FastSAM, an innovative segmentation model, is applied to delineate object boundaries, yielding refined segmentation masks. The synergy between these components and their integration into 3D scene understanding ensures a cohesive fusion of object detection and segmentation, enhancing overall precision in 3D object segmentation.
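A core step implied by the pipeline is lifting the 2D segmentation masks produced by FastSAM into 3D using the depth channel. The following is a minimal NumPy sketch of that back-projection step, assuming a standard pinhole camera model; the function name `mask_to_pointcloud` and the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are illustrative and not taken from the paper.

```python
import numpy as np

def mask_to_pointcloud(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels to 3D camera coordinates.

    depth : (H, W) array of metric depth values (0 = invalid).
    mask  : (H, W) boolean segmentation mask for one detected object.
    fx, fy, cx, cy : pinhole intrinsics (focal lengths, principal point).
    Returns an (N, 3) array of (x, y, z) points for the object.
    """
    # Select pixels inside the mask that have a valid depth reading.
    v, u = np.nonzero(mask & (depth > 0))  # v = rows, u = columns
    z = depth[v, u]
    # Pinhole inverse projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```

In a full RGB-D pipeline, the intrinsics would come from the camera's calibration (e.g. the RealSense SDK), and the resulting per-object point clouds could then feed pose estimation or SLAM.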