MEGF-Net: multi-exposure generation and fusion network for vehicle detection under dim light conditions

Authors:

Du Boyang, Du Congju, Yu Li

Abstract

Vehicle detection in dim light has long been a challenging task. Beyond unavoidable noise, the uneven spatial distribution of light and dark caused by vehicle headlights and street lamps makes the problem harder still. Conventional image enhancement methods can over-smooth or over-expose the image, causing irreversible information loss in the vehicle targets to be detected. We therefore propose a multi-exposure generation and fusion network. In the multi-exposure generation network, a single gated convolutional recurrent network with two-stream progressive exposure input generates intermediate images of gradually increasing exposure, which are passed through a spatial attention mechanism to the multi-exposure fusion network. A vehicle detection model pre-trained on normal-light images then serves as the basis of the fusion network, and the two models are connected via a convolutional-kernel channel-dimension expansion technique. The fusion module thus provides vehicle detection information that guides the generation network to fine-tune its parameters, enabling end-to-end enhancement and training. By coupling the two parts, we achieve detail interaction and feature fusion under different lighting conditions. Experimental results demonstrate that our proposed method outperforms state-of-the-art detection methods applied after image luminance enhancement on the ODDS dataset.
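The "convolutional kernel channel dimension expansion" mentioned in the abstract connects a detector pre-trained on 3-channel normal-light images to an input stack of several exposure images. The paper does not publish its implementation here, but a common way to realize such an expansion (a sketch under that assumption, not the authors' code) is to tile the first convolution layer's input-channel weights across the N exposures and rescale by 1/N, so that feeding N identical copies of one image initially reproduces the pre-trained activations:

```python
def expand_kernel_channels(kernel, n_exposures):
    """Tile a conv kernel's input channels for a stack of n_exposures images.

    kernel: nested list of weights with shape [out_ch][in_ch][k][k].
    Returns a kernel of shape [out_ch][in_ch * n_exposures][k][k] whose
    weights are divided by n_exposures, so that n identical copies of the
    input produce the same response as the original kernel (the usual
    initialization for this kind of channel expansion; an assumption here).
    """
    expanded = []
    for out_filter in kernel:
        new_channels = []
        for _ in range(n_exposures):
            for channel in out_filter:
                # Copy the per-channel k x k weights, rescaled by 1/n.
                new_channels.append(
                    [[w / n_exposures for w in row] for row in channel]
                )
        expanded.append(new_channels)
    return expanded

# Toy example: one output filter, 3 input channels, 1x1 kernel.
k = [[[[0.3]], [[0.6]], [[0.9]]]]
k3 = expand_kernel_channels(k, 3)  # now 3 * 3 = 9 input channels
```

Because the tiled weights sum to the original weights, the expanded layer is a drop-in replacement for the pre-trained first layer, and subsequent fine-tuning lets each exposure's channels diverge.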

Funder

Science and Technology Innovation Foundation

Publisher

Springer Science and Business Media LLC

References (38 articles)

1. Xiao, J., Cheng, H., Sawhney, H. S., & Han, F. (2010). Vehicle detection and tracking in wide field-of-view aerial video. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 679–684). Piscataway: IEEE.

2. Yuan, M., Wang, Y., & Wei, X. (2022). Translation, scale and rotation: cross-modal alignment meets RGB-infrared vehicle detection. In S. Avidan, G. Brostow, M. Cissé, et al. (Eds.), Proceedings of the 17th European conference on computer vision (pp. 509–525). Cham: Springer.

3. Yayla, R., & Albayrak, E. (2022). Vehicle detection from unmanned aerial images with deep mask R-CNN. Computer Science Journal of Moldova, 30(2), 148–169.

4. Charouh, Z., Ezzouhri, A., Ghogho, M., & Guennoun, Z. (2022). A resource-efficient CNN-based method for moving vehicle detection. Sensors, 22(3), 1193.

5. Liao, B., He, H., Du, Y., & Guan, S. (2022). Multi-component vehicle type recognition using adapted CNN by optimal transport. Signal, Image and Video Processing, 16(4), 975–982.

Cited by 4 articles.

1. EiffHDR: An Efficient Network for Multi-Exposure High Dynamic Range Imaging;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14

2. Efficient Content Reconstruction for High Dynamic Range Imaging;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14

3. A No-Reference Stereoscopic Image Quality Assessment Based on Cartoon Texture Decomposition and Human Visual System;Communications in Computer and Information Science;2024

4. Image Aesthetics Assessment Based on Visual Perception and Textual Semantic Understanding;Communications in Computer and Information Science;2024
