Abstract
Although pedestrian detection techniques are improving, the task remains challenging due to target occlusion, small targets, and complex pedestrian backgrounds across different scenes. As a result, the You Only Look Once (YOLO) algorithm exhibits lower detection accuracy. In this paper, the use of multiple dilated convolutions to sample feature maps is proposed to avoid the information loss incurred by repeated sampling, thereby improving the feature extraction and target detection performance of the algorithm. In addition, a lightweight shuffle-based efficient channel attention (SECA) mechanism is introduced to group the channel dimension and process each subfeature map channel in parallel, and a new branch is added to enrich the channel feature information for multiscale feature representation. Finally, a distance intersection over union-based nonmaximum suppression (DIoU-NMS) method is adopted to reduce missed detections of occluded targets by taking the center-point locations of the prediction box and ground truth box into account, without increasing the computational cost over that of standard NMS. Our method is extensively evaluated on several challenging pedestrian detection datasets, achieving mean average precision (mAP) values of 87.73%, 34.7%, 93.96% and 95.23% on PASCAL VOC 2012, MS COCO, Caltech Pedestrian and INRIA Person, respectively. The experimental results demonstrate the effectiveness of the method.
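For concreteness, the following is a minimal NumPy sketch of the DIoU-NMS criterion summarized above; the function name, box format, and threshold value are illustrative assumptions, not the paper's implementation. Relative to standard NMS, the only extra term is a center-distance penalty subtracted from the IoU before the suppression test.

```python
# Illustrative sketch of DIoU-NMS (assumed names and box format, not the paper's code).
import numpy as np

def diou_nms(boxes, scores, thresh=0.5):
    """Keep boxes greedily by score; suppress a neighbor only when its
    DIoU (IoU minus a normalized center-distance penalty) with an
    already-kept box exceeds `thresh`.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns the indices of the kept boxes.
    """
    order = scores.argsort()[::-1]          # highest-scoring box first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        if rest.size == 0:
            break
        # Plain IoU between the kept box and the remaining candidates.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        # Squared distance between box centers, normalized by the squared
        # diagonal of the smallest box enclosing both.
        cxi, cyi = (boxes[i, 0] + boxes[i, 2]) / 2, (boxes[i, 1] + boxes[i, 3]) / 2
        cxr, cyr = (boxes[rest, 0] + boxes[rest, 2]) / 2, (boxes[rest, 1] + boxes[rest, 3]) / 2
        rho2 = (cxi - cxr) ** 2 + (cyi - cyr) ** 2
        ex1 = np.minimum(boxes[i, 0], boxes[rest, 0])
        ey1 = np.minimum(boxes[i, 1], boxes[rest, 1])
        ex2 = np.maximum(boxes[i, 2], boxes[rest, 2])
        ey2 = np.maximum(boxes[i, 3], boxes[rest, 3])
        c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9
        diou = iou - rho2 / c2
        order = rest[diou <= thresh]        # suppress only high-DIoU neighbors
    return keep
```

Because the penalty lowers the effective overlap of boxes whose centers are far apart, two heavily overlapped but distinct pedestrians are less likely to be merged into a single detection, which is the occlusion behavior the abstract describes.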