Adversarial Attacks on Medical Segmentation Model via Transformation of Feature Statistics
Published: 2024-03-19
Volume: 14
Issue: 6
Page: 2576
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Author:
Lee Woonghee (1), Ju Mingeon (2), Sim Yura (2), Jung Young Kul (3), Kim Tae Hyung (3), Kim Younghoon (2)
Affiliation:
1. BK21 Education and Research Center for Artificial Intelligence in Healthcare, Department of Applied Artificial Intelligence, Hanyang University, Ansan 15588, Republic of Korea
2. Department of Applied Artificial Intelligence, Major in Bio Artificial Intelligence, Hanyang University, Ansan 15588, Republic of Korea
3. Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University Ansan Hospital, Ansan 15355, Republic of Korea
Abstract
Deep learning-based segmentation models have made a profound impact on medical procedures, with U-Net-based computed tomography (CT) segmentation models exhibiting remarkable performance. Yet, even with these advances, these models are found to be vulnerable to adversarial attacks, a problem that equally affects automatic CT segmentation models. Conventional adversarial attacks typically rely on adding noise or perturbations, leading to a compromise between the success rate of the attack and its perceptibility. In this study, we challenge this paradigm and introduce a novel generation of adversarial attacks aimed at deceiving both the target segmentation model and medical practitioners. Our approach deceives a target model by altering the texture statistics of an organ while retaining its shape. We employ a real-time style transfer method, known as the texture reformer, which uses adaptive instance normalization (AdaIN) to change the statistics of an image’s features. To induce the transformation, we modify the AdaIN operation, which normally aligns the source and target image statistics. Through rigorous experiments, we demonstrate the effectiveness of our approach. Our adversarial samples successfully pass as realistic in blind tests conducted with physicians, surpassing the effectiveness of contemporary techniques. This methodology not only offers a robust tool for benchmarking and validating automated CT segmentation systems but also serves as a potent mechanism for data augmentation, thereby enhancing model generalization. This dual capability significantly bolsters advancements in the field of deep learning-based medical and healthcare segmentation models.
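The abstract builds on adaptive instance normalization (AdaIN), which aligns the channel-wise mean and standard deviation of a content image's features to those of a style image. As a point of reference, here is a minimal numpy sketch of the standard AdaIN operation only — not the authors' modified variant used to induce the attack, whose details are not given in this record:

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Standard AdaIN: normalize the content features per channel, then
    rescale and shift them to match the style features' statistics.
    Inputs are feature maps of shape (channels, height, width)."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps  # eps avoids /0
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    # Whiten content statistics, then impose the style statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

After this transform, each channel of the output carries the style features' mean and standard deviation while preserving the content features' spatial layout — which is why the paper's attack can change organ texture statistics without changing organ shape.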
Funder
Ministry of Trade, Industry and Energy; Korea government; Ministry of Education