Authors:
Feng Mingchi, Liu Yibo, Jiang Panpan, Wang Jingshu
Abstract
Vision-based environment perception plays an important role in autonomous driving technology. Although visual perception has achieved promising results in recent years, many methods cannot resolve the trade-off between speed and accuracy. In this paper, we propose a system for fast and accurate object detection and localization based on binocular vision. For object detection, a neural network model based on YOLOv3 is proposed; specifically, MobileNet is employed as the backbone of YOLOv3 to speed up feature extraction. Corresponding ORB feature points are then extracted and matched across the continuous stereo image pairs captured by the binocular cameras mounted on the moving car, and the disparity of each matched ORB feature point is calculated. After that, the object detection results are used to screen the ORB feature points, so that the depth of each target in the traffic scene can be estimated. Experiments on the KITTI dataset show the efficiency of our system, as well as the accuracy and robustness of our object localization relative to ground truth and prior works.
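The localization step the abstract describes can be sketched as follows: for a rectified stereo rig, the disparity of a matched point is its horizontal offset between the left and right images, depth follows from Z = f·B/d, and the detection box screens which feature points belong to a target. This is a minimal illustrative sketch, not the paper's implementation; the function names and the focal-length/baseline values are assumptions (the baseline of roughly 0.54 m is merely KITTI-like).

```python
# Minimal sketch of stereo depth estimation with detection-box screening,
# assuming rectified stereo pairs so disparity is a horizontal pixel offset.
# All names and numbers here are illustrative, not from the paper.

def depth_from_disparity(focal_px, baseline_m, x_left, x_right):
    """Depth Z = f * B / d for a rectified stereo rig."""
    d = x_left - x_right  # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: point at infinity or bad match")
    return focal_px * baseline_m / d

def estimate_target_depth(matches, box, focal_px, baseline_m):
    """Screen matched feature points with a detection box (x1, y1, x2, y2)
    in the left image, then take the median depth of the survivors."""
    x1, y1, x2, y2 = box
    depths = sorted(
        depth_from_disparity(focal_px, baseline_m, left[0], right[0])
        for left, right in matches
        if x1 <= left[0] <= x2 and y1 <= left[1] <= y2
    )
    if not depths:
        raise ValueError("no feature points inside the detection box")
    return depths[len(depths) // 2]  # median (upper value for even counts)

# Illustrative numbers: f ~ 700 px, baseline ~ 0.54 m (roughly KITTI-like).
matches = [((110, 50), (100, 50)),   # disparity 10 px -> 37.8 m
           ((150, 70), (130, 70)),   # disparity 20 px -> 18.9 m
           ((120, 60), (105, 60)),   # disparity 15 px -> 25.2 m
           ((300, 60), (290, 60))]   # outside the box below, screened out
print(round(estimate_target_depth(matches, (100, 40, 200, 80), 700.0, 0.54), 2))
# prints 25.2
```

Using the median of the screened points makes the estimate robust to occasional mismatched features, which matters when ORB matches on a target's boundary fall on the background.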
Subject
General Physics and Astronomy
References (10 articles)
1. RT3D: Real-time 3-D vehicle detection in LiDAR point cloud for autonomous driving; Zeng; IEEE Robotics and Automation Letters, 2018
2. On-road vehicle detection and tracking using MMW radar and monovision fusion; Wang; IEEE Transactions on Intelligent Transportation Systems, 2016
3. SSD: Single Shot MultiBox Detector; Liu; 2016
4. Gaussian YOLOv3: An accurate and fast object detector using localization uncertainty for autonomous driving; Choi; 2019
5. You Only Look Once: Unified, real-time object detection; Redmon; 2016
Cited by: 6 articles