Affiliations:
1. School of Computer Science, The University of Sydney, Camperdown NSW 2050, Australia.
2. Department of Computer Science and Engineering, Sri Eshwar College of Engineering, Coimbatore, India.
Abstract
Robotic perception systems rely on methods that extract useful features or information from sensor data. These methods commonly apply deep learning approaches, such as convolutional neural networks (CNNs), to image processing, and increasingly incorporate 3D data. Image classification with convolutional networks is well established, but some network architectures are large and demand considerable time and memory. Networks such as FlowNet3D and PointFlowNet, on the other hand, can accurately predict scene flow, that is, estimate the three-dimensional motion of point clouds (PCs) in a dynamic environment. When PCs are used in robotic applications, it is essential to assess how robustly the points belonging to an object can be identified. This article examines robotic perception systems in autonomous vehicles and the inherent difficulties of analyzing and processing information obtained from diverse sensors. The authors propose a late fusion method that combines the outputs of multiple classifiers to improve classification accuracy, together with a weighted fusion technique that incorporates the distance to objects as a significant factor. The results show that the proposed fusion methods outperform both single-modality classification and conventional fusion strategies.
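To make the weighted late fusion idea concrete, below is a minimal sketch in Python of how per-class probabilities from a camera-based classifier and a point-cloud classifier might be combined, with weights that depend on the estimated distance to the object. The function names, the exponential distance weighting, and the 20 m scale are illustrative assumptions, not the specific method evaluated in the article.

import numpy as np

def distance_weight(distance_m, scale_m=20.0):
    # Hypothetical weighting: trust the point-cloud classifier more for
    # nearby objects and the image classifier more for distant ones.
    w_pc = np.exp(-distance_m / scale_m)   # point-cloud weight decays with range
    return w_pc, 1.0 - w_pc                # (point-cloud weight, image weight)

def weighted_late_fusion(p_image, p_pointcloud, distance_m):
    # p_image, p_pointcloud: softmax outputs of shape (n_classes,).
    # distance_m: estimated distance to the detected object in meters.
    w_pc, w_img = distance_weight(distance_m)
    fused = w_img * p_image + w_pc * p_pointcloud
    return fused / fused.sum()             # renormalize to a distribution

# Example: three classes (car, pedestrian, cyclist), object 8 m away.
p_img = np.array([0.70, 0.20, 0.10])       # camera-based CNN output
p_pc  = np.array([0.40, 0.50, 0.10])       # point-cloud classifier output
fused = weighted_late_fusion(p_img, p_pc, distance_m=8.0)
print(fused.argmax(), fused)               # fused class decision

In this toy run the nearby object gives the point-cloud classifier the larger weight, so the fused decision shifts toward its prediction; any plain late fusion (fixed, equal weights) would fall out of the same code by replacing distance_weight with constants.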