Abstract
Deep learning methods have made remarkable strides in surface defect detection. However, they rely heavily on large amounts of training data, which are costly to obtain, especially in applications such as steel strip surface defect detection, where acquiring and labeling large-scale data is impractical because certain defect categories occur only rarely in production environments. Realistic defect image synthesis can greatly alleviate this issue. However, training an image generation network also demands substantial data, which has so far limited image synthesis to an auxiliary role in data augmentation. In this work, we propose a Generative Adversarial Network (GAN)-based image synthesis framework. We selectively extract the defect edges and the background texture of the original image and feed them to the network through the spatially-adaptive (de)normalization (SPADE) module. This enriches the input information, significantly reducing the amount of training data the GAN requires and enhancing both the background details and the defect boundaries in the generated images. Additionally, we introduce a novel generator loss term that balances similarity and perceptual fidelity between synthetic and real images by constraining high-level features at multiple feature levels. This provides more valuable information for data augmentation when training object detection models on synthetic images. Our experimental results demonstrate the quality of images synthesized by the proposed method and its effectiveness for data augmentation in steel strip surface defect detection tasks.
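The two mechanisms the abstract names can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: the 1x1-projection simplification of SPADE's modulation branch, the function names, and the plain L1 multi-level feature loss are all assumptions made here for clarity.

```python
import numpy as np

def spade_modulate(x, cond, w_gamma, w_beta, eps=1e-5):
    """SPADE-style modulation (sketch; real SPADE uses small conv nets).

    x    : (C, H, W) activation map to be normalized
    cond : (K, H, W) conditioning map (e.g. defect-edge + texture channels)
    w_gamma, w_beta : (C, K) weights of 1x1 projections predicting the
                      spatially-varying scale and shift from `cond`
    """
    # Per-channel normalization (instance-norm style)
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mu) / (sigma + eps)

    # Spatially-adaptive scale and shift predicted from the conditioning map
    gamma = np.einsum('ck,khw->chw', w_gamma, cond)
    beta = np.einsum('ck,khw->chw', w_beta, cond)
    return (1 + gamma) * x_norm + beta

def multilevel_feature_loss(feats_fake, feats_real, weights):
    """Weighted L1 distance between feature maps at several levels,
    a generic stand-in for the paper's high-level feature constraint."""
    return sum(w * np.abs(f - r).mean()
               for w, f, r in zip(weights, feats_fake, feats_real))

# Toy usage
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16, 16))      # generator feature map
cond = rng.normal(size=(2, 16, 16))   # edge + background-texture channels
w_g = rng.normal(size=(8, 2)) * 0.1
w_b = rng.normal(size=(8, 2)) * 0.1
y = spade_modulate(x, cond, w_g, w_b)
print(y.shape)  # (8, 16, 16)
```

The key property, preserved in this sketch, is that the scale and shift vary per pixel with the conditioning map, so edge and texture information survives the normalization instead of being washed out.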
Funder
National Natural Science Foundation of China
National Natural Science Foundation of China and the Royal Society of Edinburgh
Guangdong Basic and Applied Basic Research Foundation
Cited by
5 articles.