Authors
Ramy Battrawy, René Schuster, Didier Stricker
Abstract
The proposed RMS-FlowNet++ is a novel end-to-end learning-based architecture for accurate and efficient scene flow estimation that can operate on high-density point clouds. For hierarchical scene flow estimation, existing methods rely on expensive Farthest-Point-Sampling (FPS) to sample the scenes, must find large correspondence sets across the consecutive frames, and/or must search for correspondences at full input resolution. While this can improve accuracy, it reduces the overall efficiency of these methods and limits their ability to handle large numbers of points due to memory requirements. In contrast to these methods, our architecture is based on an efficient design for hierarchical prediction of multi-scale scene flow. To this end, we develop a special flow embedding block that has two advantages over current methods: first, a smaller correspondence set is used, and second, the use of Random-Sampling (RS) is possible. In addition, our architecture does not need to search for correspondences at full input resolution. Exhibiting high accuracy, our RMS-FlowNet++ provides faster prediction than state-of-the-art methods, avoids high memory requirements, and enables efficient scene flow on dense point clouds of more than 250K points at once. Our comprehensive experiments verify the accuracy of RMS-FlowNet++ on the established FlyingThings3D data set with different point cloud densities and validate our design choices. Furthermore, we demonstrate that our model has a competitive ability to generalize to the real-world scenes of the KITTI data set without fine-tuning.
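The abstract contrasts expensive Farthest-Point-Sampling with cheap Random-Sampling. A minimal NumPy sketch of both samplers (a textbook illustration, not the authors' implementation) makes the cost difference concrete: FPS must scan all N points once per selected sample, while RS draws indices in time independent of the scene geometry.

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Greedily pick m points, each farthest from those already chosen: O(N*m)."""
    n = points.shape[0]
    chosen = [0]                      # seed with an arbitrary first point
    dist = np.full(n, np.inf)         # distance of each point to the chosen set
    for _ in range(m - 1):
        # update distances against the most recently chosen point
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))
    return points[chosen]

def random_sampling(points, m, seed=None):
    """Uniformly subsample m points without replacement: O(m), geometry-agnostic."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(points.shape[0], size=m, replace=False)
    return points[idx]

cloud = np.random.default_rng(0).random((1000, 3))
fps_subset = farthest_point_sampling(cloud, 32)   # well-spread but costly
rs_subset = random_sampling(cloud, 32, seed=0)    # fast, possibly uneven coverage
```

FPS yields an evenly spread subset at the cost of a full pass per sample, which is why architectures that tolerate RS (as claimed here) scale to much denser clouds.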
Funder
Federal Ministry of Education and Research Germany under Decode Project
Photonics Research Germany under FUMOS project
Publisher
Springer Science and Business Media LLC
References (66 articles)
1. Battrawy, R., Schuster, R., Mahani, M.-N., & Stricker, D. (2022). RMS-FlowNet: Efficient and Robust Multi-Scale Scene Flow Estimation for Large-Scale Point Clouds. In IEEE International Conference on Robotics and Automation (ICRA).
2. Battrawy, R., Schuster, R., Wasenmüller, O., Rao, Q., & Stricker, D. (2019). LiDAR-Flow: Dense Scene Flow Estimation from Sparse LiDAR and Stereo Images. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
3. Behl, A., Paschalidou, D., Donné, S., & Geiger, A. (2019). PointFlowNet: Learning Representations for Rigid Motion Estimation from Point Clouds. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
4. Blanco, J. L., & Rai, P. K. (2014). Nanoflann: A C++ header-only fork of FLANN, a library for nearest neighbor (NN) with kd-trees. https://github.com/jlblancoc/nanoflann
5. Chen, Y., Van Gool, L., Schmid, C., & Sminchisescu, C. (2020). Consistency Guided Scene Flow Estimation. In European Conference on Computer Vision (ECCV).