Abstract
This paper proposes a fully automated generative network ("SynFAGnet") for creating realistic-looking synthetic fire images. SynFAGnet is used as a data augmentation technique to create diverse data for training models, thereby addressing problems related to real-data acquisition and data imbalance. SynFAGnet comprises two main parts: an object-scene placement net (OSPNet) and a local–global context-based generative adversarial network (LGC-GAN). The OSPNet identifies suitable positions and scales for fires corresponding to the background scene. The LGC-GAN enhances the realistic appearance of the synthetic fire image created from a given fire-object and background-scene pair by adding effects such as halos and reflections to the surrounding area of the background scene. A comparative analysis shows that SynFAGnet achieves better outcomes than previous studies on both the Fréchet inception distance and learned perceptual image patch similarity metrics (values of 17.232 and 0.077, respectively). In addition, SynFAGnet is verified as a practically applicable data augmentation technique for training datasets, as it improves detection and instance segmentation performance.
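To illustrate the two-stage pipeline described in the abstract, the sketch below shows how a placement network and a refinement generator could be chained: the placement net predicts where and how large the fire object should be, the object is composited onto the scene, and the generator refines the composite. This is a minimal illustrative sketch assuming PyTorch; the classes `OSPNetSketch` and `LGCGANGeneratorSketch`, the helper `place_fire`, and all layer sizes are hypothetical stand-ins and do not reflect the authors' actual architecture or training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class OSPNetSketch(nn.Module):
    """Toy placement net: predicts normalized (x, y, scale) for the fire object
    from background-scene features. Layer sizes are illustrative only."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 3), nn.Sigmoid())

    def forward(self, scene):
        # Output shape (B, 3): x, y, scale, each in [0, 1]
        return self.head(self.backbone(scene))


class LGCGANGeneratorSketch(nn.Module):
    """Toy generator: refines a naively composited fire-scene image so that local
    effects (e.g., halo, reflection) blend with the global background context."""
    def __init__(self):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # keep output in [0, 1]
        )

    def forward(self, composite):
        return self.refine(composite)


def place_fire(scene, fire, xy_scale):
    """Paste a resized fire patch onto the scene at the predicted location.
    A simple non-differentiable paste, used here only for illustration."""
    b, _, h, w = scene.shape
    out = scene.clone()
    for i in range(b):
        x, y, s = xy_scale[i]
        fh, fw = max(8, int(s * h)), max(8, int(s * w))
        patch = F.interpolate(fire[i:i + 1], size=(fh, fw),
                              mode="bilinear", align_corners=False)
        top = int(y.item() * (h - fh))
        left = int(x.item() * (w - fw))
        out[i, :, top:top + fh, left:left + fw] = patch[0]
    return out


if __name__ == "__main__":
    scene = torch.rand(1, 3, 128, 128)  # background scene
    fire = torch.rand(1, 3, 64, 64)     # segmented fire object
    ospnet, generator = OSPNetSketch(), LGCGANGeneratorSketch()
    composite = place_fire(scene, fire, ospnet(scene))
    synthetic = generator(composite)    # refined synthetic fire image
    print(synthetic.shape)              # torch.Size([1, 3, 128, 128])
```

In a real system, the placement stage would be trained against plausibility criteria and the refinement stage against adversarial and perceptual losses, with a differentiable compositing step in place of the hard paste shown here.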
Funder
National Research Foundation of Korea
Publisher
Springer Science and Business Media LLC
Cited by
1 article.