Abstract
In response to the growing inspection demands driven by process automation in component manufacturing, non-destructive testing (NDT) continues to explore automated approaches that use deep learning algorithms for defect identification, including in digital X-ray radiography images. This requires a thorough understanding of how image quality parameters affect the performance of these deep learning models. This study investigates the influence of two image quality parameters, Signal-to-Noise Ratio (SNR) and Contrast-to-Noise Ratio (CNR), on the performance of a U-Net deep learning segmentation model. Input images were acquired with varying combinations of exposure factors (tube voltage in kV, tube current in mA, and exposure time), which altered the resulting image quality. The data were sorted into five datasets according to their measured SNR and CNR values, and the model was trained five separate times, once on each dataset. Training the model on high-CNR data yielded an Intersection over Union (IoU) of 0.9594 on test data of the same category, but the IoU dropped to 0.5875 when the model was tested on lower-CNR data. These results emphasize the importance of balancing the training dataset with respect to the investigated quality parameters in order to improve the performance of deep learning segmentation models in NDT radiography applications.
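The abstract refers to three quantities: SNR, CNR, and IoU. As an orientation for readers, the short Python sketch below shows one common way these are computed from image regions and binary masks. The exact region-of-interest definitions and conventions used in the study are not stated in the abstract, so this is an illustrative assumption, not the authors' implementation.

import numpy as np

def snr(signal_region: np.ndarray) -> float:
    # Signal-to-Noise Ratio of a nominally uniform region:
    # mean intensity divided by its standard deviation.
    return signal_region.mean() / signal_region.std()

def cnr(feature_region: np.ndarray, background_region: np.ndarray) -> float:
    # Contrast-to-Noise Ratio: intensity difference between feature
    # and background, normalised by the background noise.
    return abs(feature_region.mean() - background_region.mean()) / background_region.std()

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    # Intersection over Union between predicted and ground-truth binary masks.
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    return np.logical_and(pred, true).sum() / union if union else 1.0

# Hypothetical example: two overlapping square defect masks on a 64x64 image.
pred = np.zeros((64, 64), dtype=bool); pred[10:30, 10:30] = True
true = np.zeros((64, 64), dtype=bool); true[15:35, 15:35] = True
print(f"IoU = {iou(pred, true):.4f}")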