DyCC-Net: Dynamic Context Collection Network for Input-Aware Drone-View Object Detection
Published: 2022-12-13
Container-title: Remote Sensing, Volume 14, Issue 24, Page 6313
ISSN: 2072-4292
Language: en
Authors: Xi Yue, Jia Wenjing, Miao Qiguang, Liu Xiangzeng, Fan Xiaochen, Lou Jian
Abstract
Benefiting from the advancement of deep neural networks (DNNs), detecting objects in drone-view images has achieved great success in recent years. However, deploying such DNN-based detectors on drones in real-life applications is very challenging due to their excessive computational costs and the limited onboard computational resources. Existing drone-view detectors incur large redundant computation because they infer all inputs with nearly identical computation; for a large portion of inputs, which contain only a small number of sparsely distributed, large objects, a less complex detector would suffice. Therefore, a drone-view detector supporting input-aware inference, i.e., one capable of dynamically adapting its architecture to different inputs, is highly desirable. In this work, we present a Dynamic Context Collection Network (DyCC-Net), which performs input-aware inference by dynamically adapting its structure to inputs of different levels of complexity. DyCC-Net significantly improves inference efficiency by skipping or executing a context collector conditioned on the complexity of the input image. Furthermore, because the weakly supervised learning strategy for computational resource allocation lacks supervision, models may execute the computationally expensive context collector even for easy images in order to minimize the detection loss. We therefore present a pseudo-label-based semi-supervised learning strategy (Pseudo Learning), which uses automatically generated pseudo labels as supervision signals to determine whether to execute the context collector for a given input. Extensive experimental results on VisDrone2021 and UAVDT show that DyCC-Net detects objects in drone-captured images efficiently: it reduces the inference time of state-of-the-art (SOTA) drone-view detectors by over 30% while outperforming them by 1.94% in AP75.
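The input-aware inference idea described above can be sketched as a simple gating scheme: a lightweight gate scores the complexity of each input, and the expensive context-collection step runs only when that score exceeds a threshold. The sketch below is a minimal illustration of this control flow, not the authors' implementation; the complexity heuristic, function names, and threshold are all hypothetical stand-ins (in DyCC-Net the gating decision is learned with pseudo-label supervision).

```python
def complexity_score(image):
    """Toy proxy for scene complexity: fraction of 'salient' pixels.
    A real system would use a small learned gating network instead."""
    total = len(image) * len(image[0])
    busy = sum(px > 0.5 for row in image for px in row)
    return busy / total

def detect(image, gate_threshold=0.3):
    """Run a cheap base path always; run the costly context
    collector only for inputs the gate deems complex."""
    features = [px for row in image for px in row]  # cheap backbone stand-in
    if complexity_score(image) > gate_threshold:
        # "Hard" input: execute the expensive context collector.
        features = [f * 2 for f in features]        # placeholder refinement
        path = "with_context"
    else:
        # "Easy" input: skip the context collector to save compute.
        path = "skip_context"
    return path, features

easy_scene = [[0.0, 0.1], [0.2, 0.0]]  # sparse scene -> gate skips
hard_scene = [[0.9, 0.8], [0.7, 0.6]]  # dense scene -> gate executes
```

The efficiency gain comes entirely from the skip branch: inputs routed there never pay for the context collector, which is how a gated detector can cut average inference time without changing its behavior on hard scenes.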
Funder
Key R&D Projects of Qingdao Science and Technology Plan; Fundamental Research Funds for the Central Universities
Subject
General Earth and Planetary Sciences