Author:
Dai Yingpeng, Wang Junzheng, Li Jiehao, Li Jing
Abstract
Purpose
This paper focuses on the environmental perception of unmanned platforms in complex street scenes. Unmanned platforms impose strict requirements on both accuracy and inference speed, so striking a trade-off between accuracy and inference speed while extracting environmental information becomes a challenge.
Design/methodology/approach
In this paper, a novel multi-scale depth-wise residual (MDR) module is proposed. The module combines depth-wise separable convolution, dilated convolution and one-dimensional (1-D) convolution to extract local and contextual information jointly while remaining small and shallow. Based on the MDR module, a network named multi-scale depth-wise residual network (MDRNet) is then designed for fast semantic segmentation. This network extracts multi-scale information and maintains feature maps with high spatial resolution to handle objects appearing at multiple scales.
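The abstract does not give the exact layer configuration of the MDR module, so the following is only a minimal sketch, assuming PyTorch, of how a residual block might combine factorized (1-D) depth-wise convolutions with a dilated branch; the channel counts, dilation rate and layer ordering are illustrative choices, not the authors' published design.

# Illustrative MDR-style block: depth-wise separable, dilated and 1-D
# convolutions with a residual connection. Hypothetical configuration.
import torch
import torch.nn as nn


class MDRBlock(nn.Module):
    """Residual block mixing a local branch and a dilated context branch,
    both built from factorized (3x1 then 1x3) depth-wise convolutions."""

    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        # Local branch: depth-wise 3x1 then 1x3 convolutions (factorized 3x3).
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, (3, 1), padding=(1, 0),
                      groups=channels, bias=False),
            nn.Conv2d(channels, channels, (1, 3), padding=(0, 1),
                      groups=channels, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Context branch: same factorization, dilated to enlarge the receptive field.
        self.context = nn.Sequential(
            nn.Conv2d(channels, channels, (3, 1), padding=(dilation, 0),
                      dilation=(dilation, 1), groups=channels, bias=False),
            nn.Conv2d(channels, channels, (1, 3), padding=(0, dilation),
                      dilation=(1, dilation), groups=channels, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Point-wise (1x1) convolution fuses the two branches, completing the
        # "separable" part while keeping the parameter count low.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = torch.cat([self.local(x), self.context(x)], dim=1)
        # Residual connection keeps the block easy to train while staying shallow.
        return self.act(x + self.fuse(y))


if __name__ == "__main__":
    block = MDRBlock(channels=64, dilation=2)
    out = block(torch.randn(1, 64, 128, 256))
    print(out.shape)  # torch.Size([1, 64, 128, 256])

Because every spatial convolution is depth-wise and factorized into 1-D kernels, the parameter cost per block stays small, which is consistent with the sub-million parameter budget reported below.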
Findings
Experiments on the CamVid and Cityscapes data sets reveal that the proposed MDRNet produces competitive results in terms of both computational time and accuracy during inference. Specifically, the authors achieve 67.47% and 68.7% Mean Intersection over Union (MIoU) on the CamVid and Cityscapes data sets, respectively, with only 0.84 million parameters and faster inference on a single GTX 1070Ti card.
Originality/value
This research provides a theoretical and engineering basis for environmental perception on unmanned platforms. In addition, it supplies environmental information to support subsequent tasks.
Subject
Industrial and Manufacturing Engineering, Control and Systems Engineering
References
51 articles.
Cited by
26 articles.