Abstract
Moving object segmentation provides crucial information for various downstream tasks in robotics and autonomous driving. Effectively extracting spatial-temporal information from consecutive frames and coping with the scarcity of annotated data are key challenges for learning-based 3D LiDAR moving object segmentation (LiDAR-MOS). In this work, we propose a novel deep neural network based on vision transformers (ViTs) to tackle this problem, and we first validate the feasibility of transformer networks for this task as an alternative to CNNs. Specifically, we use a dual-branch structure that takes range and residual images as input, extracts spatial-temporal information from consecutive frames, and fuses it with a motion-guided attention mechanism. Furthermore, we employ a ViT backbone whose architecture is kept unchanged from the one used for RGB images. This allows us to leverage models pre-trained on RGB images, which are much cheaper to acquire and annotate than LiDAR point clouds, thereby mitigating the shortage of labeled point cloud data. We validate the effectiveness of our approach on the LiDAR-MOS benchmark of SemanticKITTI and achieve results comparable to those of CNN-based methods operating on range image data. The source code and trained models will be available at https://github.com/mafangniu/MOSViT.git.
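To make the fusion idea concrete, the following is a minimal sketch of how a motion-guided attention module could gate range-image (appearance) features with residual-image (motion) features. It assumes a PyTorch implementation; the module and variable names (MotionGuidedFusion, appearance_feat, motion_feat) are illustrative and not taken from the released MOSViT code.

```python
# Minimal sketch: dual-branch fusion via motion-guided attention (illustrative,
# not the authors' released implementation).
import torch
import torch.nn as nn


class MotionGuidedFusion(nn.Module):
    """Gate appearance (range-image) features with motion (residual-image) features."""

    def __init__(self, channels: int):
        super().__init__()
        # A 1x1 convolution turns motion features into a per-pixel attention map in [0, 1].
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, appearance_feat: torch.Tensor, motion_feat: torch.Tensor) -> torch.Tensor:
        gate = self.attn(motion_feat)                     # (B, 1, H, W) motion-guided attention
        return appearance_feat + appearance_feat * gate   # residual gating keeps static cues


if __name__ == "__main__":
    fuse = MotionGuidedFusion(channels=64)
    range_feat = torch.randn(2, 64, 16, 128)      # features from the range-image branch
    residual_feat = torch.randn(2, 64, 16, 128)   # features from the residual-image branch
    fused = fuse(range_feat, residual_feat)
    print(fused.shape)                            # torch.Size([2, 64, 16, 128])
```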
Funder
Research and Development Program of China