Abstract
Autonomous robot navigation has become a crucial component of industrial development, reducing the amount of manual work required. Most existing robot navigation systems rely on the perceived geometric features of the environment, using sensors such as laser scanners, video cameras, and microwave radars to build a structural model of the surroundings. However, scene understanding remains a significant challenge in the development of autonomously controlled robots. A semantic model of the indoor environment gives the robot a representation closer to human perception, which improves navigation tasks and human–robot interaction. In this paper, we propose a low-cost, low-memory framework that provides an improved representation of the environment using semantic information derived from LiDAR sensory data. The result is a reliable classification system for indoor environments, achieving a classification accuracy of 97.21% on the collected dataset.
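The abstract does not detail the classification pipeline itself. Purely as an illustrative assumption, the sketch below shows one way a lightweight semantic classifier could map a single 2D LiDAR range scan to a room label using compact geometric features; the feature set, the RandomForestClassifier choice, and the labels "corridor", "office", and "kitchen" are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: the paper's actual method is not described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def scan_features(ranges):
    """Compact, low-memory features from one 360-degree LiDAR range scan."""
    r = np.asarray(ranges, dtype=float)
    r = r[np.isfinite(r)]                     # drop invalid returns
    return np.array([
        r.mean(), r.std(), r.min(), r.max(),  # coarse room-size statistics
        np.percentile(r, 25), np.percentile(r, 75),
    ])

# Hypothetical training data: one scan per row, with semantic room labels.
rng = np.random.default_rng(0)
X_train = np.vstack([scan_features(rng.uniform(0.2, 8.0, 360)) for _ in range(100)])
y_train = rng.choice(["corridor", "office", "kitchen"], size=100)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Predict the semantic label of a new scan.
print(clf.predict([scan_features(rng.uniform(0.2, 8.0, 360))]))
```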
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering