Affiliation:
1. Z-one Technology Co., Ltd.
Abstract
In the field of autonomous driving, many day-to-night image translation methods based on Generative Adversarial Networks (GANs) have been proposed to generate realistic synthetic data, in order to guarantee robust perception performance at night and to reduce the cost of data collection and annotation. The vehicle light effect is of great significance to perception tasks (such as vehicle detection) in night scenes; however, no prior research has focused on the vehicle light effect in day-to-night image translation. Therefore, we propose an end-to-end day-to-night image translation system with a locally controllable vehicle light effect, which consists of two modules. Module A adopts YOLOv7 for 2.5D vehicle detection and traditional image processing algorithms to obtain the semantic mask of the vehicle head/tail lights. Module B adopts a GAN for day-to-night image translation with the locally controllable vehicle light effect. In Module B, we propose a Two-Stream UHRNET (TSUH) generator that takes a day image from the source domain, a night image from the target domain, and the corresponding vehicle light semantic mask from Module A, and generates a photorealistic night image that preserves the content of the day image, matches the style of the night image, and renders the light effect in the specified vehicle light regions. When training our GAN model, considering the large variation of vehicle sizes in the image dataset, we propose a vehicle patch loss based on the vehicle detection bounding boxes to generate natural and realistic vehicles and light effects in the generated night image. The experimental results show that our system achieves global day-to-night image translation while performing local control of the vehicle light effect.
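
To make the two-stage pipeline concrete, the sketch below shows one way Module A's light-mask extraction could be realized: binary masks for head/tail lights are built inside each detected vehicle box. The detector call is omitted, and the color-space thresholds, function names, and morphological cleanup are illustrative assumptions, not the paper's exact image-processing algorithm.

```python
import cv2
import numpy as np

def light_mask_from_boxes(bgr_image, vehicle_boxes):
    """Build a binary head/tail-light mask from vehicle detection boxes.

    `vehicle_boxes` is assumed to be a list of (x1, y1, x2, y2) pixel
    boxes produced by a YOLOv7 detector (detection code omitted). The
    thresholds below are illustrative guesses.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = np.zeros(bgr_image.shape[:2], dtype=np.uint8)
    for x1, y1, x2, y2 in vehicle_boxes:
        roi = hsv[y1:y2, x1:x2]
        # Tail lights: strongly saturated red hues (red wraps around hue 0/180).
        red_lo = cv2.inRange(roi, (0, 80, 80), (10, 255, 255))
        red_hi = cv2.inRange(roi, (170, 80, 80), (180, 255, 255))
        # Head lights: near-white, very bright regions.
        bright = cv2.inRange(roi, (0, 0, 220), (180, 40, 255))
        roi_mask = red_lo | red_hi | bright
        # Remove speckle with a small morphological opening.
        kernel = np.ones((3, 3), np.uint8)
        roi_mask = cv2.morphologyEx(roi_mask, cv2.MORPH_OPEN, kernel)
        mask[y1:y2, x1:x2] = np.maximum(mask[y1:y2, x1:x2], roi_mask)
    return mask
```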
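The abstract fixes only the interface of the TSUH generator (day image, reference night image, and light mask in; night image out), not its internals. The following is an interface-level sketch of that data flow, with plain convolutional stacks standing in for the two UHRNET streams; the channel sizes and the additive feature fusion are our assumptions.

```python
import torch
import torch.nn as nn

class TwoStreamGenerator(nn.Module):
    """Interface-level sketch of a two-stream day-to-night generator.

    Plain conv stacks stand in for the UHRNET backbones so the data
    flow (content stream + style stream + light mask -> night image)
    is runnable end to end.
    """
    def __init__(self, base=32):
        super().__init__()
        # Content stream: day image concatenated with the 1-channel light mask.
        self.content_enc = nn.Sequential(
            nn.Conv2d(4, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(),
        )
        # Style stream: reference night image from the target domain.
        self.style_enc = nn.Sequential(
            nn.Conv2d(3, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, day, night, light_mask):
        c = self.content_enc(torch.cat([day, light_mask], dim=1))
        s = self.style_enc(night)
        # Fuse content and style features (simple additive fusion here).
        return self.decoder(c + s)
```

With inputs of shape (B, 3, H, W) for the day and night images and (B, 1, H, W) for the mask, `TwoStreamGenerator()(day, night, mask)` returns a (B, 3, H, W) synthetic night image.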
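Finally, one plausible instantiation of the vehicle patch loss is sketched below. Each detected vehicle is cropped from the generated night image and resized to a common resolution, so that small and large vehicles contribute comparably despite the size-distribution imbalance the abstract mentions. The abstract only states that the loss is built on detection bounding boxes; the crop-resize step, the least-squares adversarial objective, and the `patch_disc` discriminator are our assumptions.

```python
import torch
import torch.nn.functional as F

def vehicle_patch_loss(fake_night, boxes, patch_disc, patch_size=64):
    """Illustrative vehicle patch loss over detection bounding boxes.

    `boxes` is assumed to be a list of (batch_index, (x1, y1, x2, y2))
    entries; `patch_disc` is a hypothetical patch discriminator that
    maps a patch to a realness score.
    """
    losses = []
    for b, (x1, y1, x2, y2) in boxes:
        patch = fake_night[b:b + 1, :, y1:y2, x1:x2]
        # Resize every vehicle crop to a common resolution.
        patch = F.interpolate(patch, size=(patch_size, patch_size),
                              mode='bilinear', align_corners=False)
        score = patch_disc(patch)
        # Least-squares GAN objective: push generated patches toward "real" (1).
        losses.append(((score - 1.0) ** 2).mean())
    if not losses:
        return fake_night.new_zeros(())
    return torch.stack(losses).mean()
```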