DCGAN-Based Image Data Augmentation in Rawhide Stick Products’ Defect Detection
Published: 2024-05-24
Issue: 11
Volume: 13
Page: 2047
ISSN: 2079-9292
Container-title: Electronics
Language: en
Author:
Ding Shuhui (1), Guo Zhongyuan (1), Chen Xiaolong (1), Li Xueyi (1, ORCID), Ma Fai (2)
Affiliation:
1. College of Mechanical and Electronic Engineering, Shandong University of Science and Technology, Qingdao 266590, China
2. Department of Mechanical Engineering, University of California, Berkeley, CA 94709, USA
Abstract
The online detection of surface defects in irregularly shaped products such as rawhide sticks, a kind of pet food, remains a challenge for the food industry. Developing deep learning-based detection algorithms requires a diverse defect database, which is crucial for artificial intelligence applications. Acquiring a sufficient amount of realistic defect data is difficult, especially at the start of production, because defects occur only occasionally and collecting them is costly. Herein, we present a novel image data augmentation method that generates a sufficient number of defect images. A Deep Convolutional Generative Adversarial Network (DCGAN) model based on a Residual Block (ResB) and a Hybrid Attention Mechanism (HAM) is proposed to generate large numbers of defect images for training deep learning models. Building on the DCGAN, the ResB and the HAM are incorporated into both the generator and the discriminator to extract deeper image features and emphasize the important feature information. The Wasserstein distance with a gradient penalty is used as the loss function to update the model's training parameters, improving the quality of the generated images and the stability of training. The approach is validated on a rawhide stick experimental dataset by generating augmented defect image data and comparing the results with other methods, such as the DCGAN and WGAN-GP.
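For readers who want to experiment with the two ideas summarized above, the sketch below illustrates, in PyTorch, (i) a residual block combined with a channel-plus-spatial "hybrid" attention module and (ii) the critic and generator losses based on the Wasserstein distance with a gradient penalty. This is a minimal sketch under assumed design choices, not the authors' implementation: the names (HybridAttention, ResidualAttentionBlock, gradient_penalty, critic_loss), the CBAM-style form of the attention, and the penalty weight lambda_gp = 10 are assumptions, since the abstract does not specify the exact layer configuration.

import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    # Channel attention followed by spatial attention (a CBAM-style stand-in
    # for the paper's HAM; the exact design here is an assumption).
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel(x)  # re-weight channels
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True)[0]], dim=1)
        return x * self.spatial(pooled)  # re-weight spatial positions

class ResidualAttentionBlock(nn.Module):
    # Residual block whose branch output is modulated by hybrid attention (ResB + HAM).
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.attention = HybridAttention(channels)

    def forward(self, x):
        return torch.relu(x + self.attention(self.body(x)))

def gradient_penalty(critic, real, fake, device="cpu"):
    # E[(||grad_x critic(x_hat)||_2 - 1)^2], with x_hat sampled on the line
    # between a real image and a generated image.
    batch_size = real.size(0)
    eps = torch.rand(batch_size, 1, 1, 1, device=device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    grads = torch.autograd.grad(outputs=scores, inputs=x_hat,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True, retain_graph=True)[0]
    grads = grads.view(batch_size, -1)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

def critic_loss(critic, real, fake, lambda_gp=10.0, device="cpu"):
    # Wasserstein critic loss plus the gradient penalty term.
    fake = fake.detach()  # the critic update should not backpropagate into the generator
    return (critic(fake).mean() - critic(real).mean()
            + lambda_gp * gradient_penalty(critic, real, fake, device))

def generator_loss(critic, fake):
    # The generator is trained to raise the critic's score on generated images.
    return -critic(fake).mean()

In a training loop these losses would typically be used following the usual WGAN-GP recipe, with several critic updates per generator update; the exact schedule used in the paper is not stated in the abstract.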
Funder
National Natural Science Foundation of China