End-to-End Object Detection with Enhanced Positive Sample Filter
Published: 2023-01-17
Issue: 3
Volume: 13
Page: 1232
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Author: Song Xiaolin, Chen Binghui, Li Pengyu, Wang Biao, Zhang Honggang
Abstract
Discarding Non-Maximum Suppression (NMS) post-processing and realizing fully end-to-end object detection is a recent research focus. Previous works have shown that a one-to-one label assignment strategy makes it possible to eliminate NMS during inference. However, this strategy may still produce multiple high-scoring predictions because of the inconsistency of label assignment during training. Thus, how to adaptively identify only one positive sample as the final prediction for each Ground-Truth instance remains an important problem. In this paper, we propose an Enhanced Positive Sample Filter (EPSF) that retains a single positive sample for each Ground-Truth instance while lowering the confidence of the remaining negative samples. This is mainly achieved with two components: a Dual-stream Feature Enhancement module (DsFE) and a Disentangled Max Pooling Filter (DeMF). DsFE makes full use of representations trained with different targets so as to provide rich information clues for positive sample selection, while DeMF enhances feature discriminability in potential foreground regions with disentangled pooling. With the proposed methods, our end-to-end detector achieves better performance than existing NMS-free object detectors on the COCO, PASCAL VOC, CrowdHuman and Caltech datasets.
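To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of how a max-pooling filter over a dense classification score map can keep only one high-scoring prediction per local neighbourhood, which is the general mechanism that NMS-free positive sample filtering builds on. The function name, tensor shapes, and kernel size are illustrative assumptions; the actual DeMF uses disentangled pooling and learned features, which are not reproduced here.

import torch
import torch.nn.functional as F

def max_pool_positive_filter(scores: torch.Tensor, kernel_size: int = 3) -> torch.Tensor:
    # scores: (B, C, H, W) per-class confidence map from a dense detector head.
    # A location survives only if it equals the maximum in its local window,
    # so at most one candidate per neighbourhood keeps a high score.
    pad = kernel_size // 2
    local_max = F.max_pool2d(scores, kernel_size, stride=1, padding=pad)
    keep = (scores == local_max).float()
    return scores * keep

# Toy usage: two nearby peaks on a 5x5 map; only the stronger one survives.
scores = torch.zeros(1, 1, 5, 5)
scores[0, 0, 2, 2] = 0.9
scores[0, 0, 2, 3] = 0.7
filtered = max_pool_positive_filter(scores)
print(filtered[0, 0])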
Funder
National Natural Science Foundation of China
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
References: 46 articles.
1. Ren, S., He, K., Girshick, R., and Sun, J. (2015, December 7–12). Faster R-CNN: Towards real-time object detection with region proposal networks. Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada.
2. Lin, T.Y., Goyal, P., Girshick, R., He, K., and Dollár, P. (2017, October 22–29). Focal loss for dense object detection. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.
3. Tian, Z., Shen, C., Chen, H., and He, T. (2019, October 27–November 2). FCOS: Fully convolutional one-stage object detection. Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea.
4. Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, June 27–30). You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.
5. Zhao, H., Shi, J., Qi, X., Wang, X., and Jia, J. (2017, July 21–26). Pyramid scene parsing network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.
Cited by: 1 article.
1. Optimal Proposal Learning for Deployable End-to-End Pedestrian Detection; 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2023-06