Visual Navigation Based on Semantic Segmentation Using Only a Monocular Camera as an External Sensor
Published: 2020-12-20
Container-title: Journal of Robotics and Mechatronics
Short-container-title: J. Robot. Mechatron.
Volume: 32
Issue: 6
Pages: 1137-1153
ISSN: 1883-8049
Language: en
Authors:
Miyamoto Ryusuke, Adachi Miho, Ishida Hiroki, Watanabe Takuto, Matsutani Kouchi, Komatsuzaki Hayato, Sakata Shogo, Yokota Raimu, Kobayashi Shingo
Abstract
The most popular external sensor for robots capable of autonomous movement is 3D LiDAR. However, robots that operate in environments where humans live their daily lives are typically also equipped with cameras, so that they can obtain the same information that is presented to humans, even though autonomous movement itself can be performed using only 3D LiDAR. Relatively few studies have addressed autonomous movement for robots using only visual sensors, but this type of approach is effective at reducing the cost of sensing devices per robot. To reduce the number of external sensors required for autonomous movement, this paper proposes a novel visual navigation scheme that uses only a monocular camera as an external sensor. The key concept of the proposed scheme is to select, based on the results of semantic segmentation, a target point in the input image toward which the robot can move, so that road following and obstacle avoidance are performed simultaneously. Additionally, a novel scheme called virtual LiDAR, also based on the results of semantic segmentation, is proposed to estimate the orientation of the robot relative to the current path in a traversable area. Experiments conducted during the Tsukuba Challenge 2019 demonstrated that the robot can operate in a real environment containing several obstacles, such as humans and other robots, provided that correct semantic segmentation results are available.
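The abstract only sketches the virtual LiDAR idea. One plausible reading is to cast a fan of rays over a binary traversable mask (derived from semantic segmentation) and record the distance at which each ray first leaves the traversable region, mimicking a 2D laser scan. The following is a minimal sketch under that assumption; the function name, ray fan, and all parameters are hypothetical and not taken from the paper:

```python
import numpy as np

def virtual_lidar(traversable_mask, origin, n_rays=181, max_range=200.0, step=1.0):
    """Cast a 180-degree fan of rays from `origin` (row, col) over a binary
    mask (True = traversable) and return, per ray, the distance in pixels
    to the first non-traversable cell or image border.
    """
    h, w = traversable_mask.shape
    angles = np.linspace(0.0, np.pi, n_rays)  # fan pointing "up" the image
    ranges = np.full(n_rays, max_range)
    for i, a in enumerate(angles):
        # Image rows grow downward, so the forward direction is -row.
        dr, dc = -np.sin(a), np.cos(a)
        r = step
        while r < max_range:
            row = int(round(origin[0] + dr * r))
            col = int(round(origin[1] + dc * r))
            # Stop at the image border or the first non-traversable pixel.
            if not (0 <= row < h and 0 <= col < w) or not traversable_mask[row, col]:
                ranges[i] = r
                break
            r += step
    return angles, ranges
```

Such pseudo range measurements could then feed a conventional scan-based estimator of the robot's orientation relative to the path boundaries, which is presumably what the paper's scheme does with its segmentation results.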
Publisher
Fuji Technology Press Ltd.
Subject
Electrical and Electronic Engineering, General Computer Science
Cited by
23 articles.