Practical Medical Image Generation with Provable Privacy Protection Based on Denoising Diffusion Probabilistic Models for High-Resolution Volumetric Images
Published: 2024-04-20
Volume: 14, Issue: 8
Page: 3489
ISSN: 2076-3417
Container-title: Applied Sciences
Short-container-title: Applied Sciences
Language: en
Authors:
Shibata Hisaichi 1, Hanaoka Shouhei 1, Nakao Takahiro 2, Kikuchi Tomohiro 2,3, Nakamura Yuta 2, Nomura Yukihiro 2,4, Yoshikawa Takeharu 2, Abe Osamu 1
Affiliations:
1. Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo, Tokyo 113-8655, Japan
2. Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo, Tokyo 113-8655, Japan
3. Department of Radiology, School of Medicine, Jichi Medical University, 3311-1 Yakushiji, Shimotsuke, Tochigi 329-0498, Japan
4. Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage, Chiba 263-8522, Japan
Abstract
Local differential privacy algorithms combined with deep generative models can enable secure sharing of medical images among researchers in the public domain without central administrators; however, such methods have so far been limited to generating low-resolution images, which are insufficient for diagnosis by medical doctors. To enhance the performance of deep generative models so that they can generate high-resolution medical images, we propose a large-scale diffusion model that can, for the first time, unconditionally generate high-resolution (256×256×256) volumetric medical images (head magnetic resonance images). The diffusion model has 19 billion parameters in total, but to make it tractable to train, we temporally divided it into 200 submodels, each with 95 million parameters. Moreover, on the basis of this new diffusion model, we propose a new formulation of image anonymization with which the processed images satisfy provable Gaussian local differential privacy and with which we can generate images that are semantically different from the original image but belong to the same class. We believe that this new diffusion model formulation and the implementation of local differential privacy algorithms combined with diffusion models can contribute to the secure sharing of practical images upstream of data processing.
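The abstract does not spell out the paper's exact anonymization formulation, but the Gaussian local differential privacy it refers to is conventionally obtained with the Gaussian mechanism: additive noise whose scale is calibrated to the query's sensitivity and the privacy budget (ε, δ). The sketch below is a minimal, illustrative version for a toy volumetric image; the function name, the toy volume, and the chosen sensitivity are assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_mechanism(volume, sensitivity, epsilon, delta, rng=None):
    """Add Gaussian noise calibrated for (epsilon, delta)-differential privacy.

    Uses the classic calibration sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon,
    which is valid for epsilon < 1. In the *local* DP setting, "sensitivity" must
    bound the L2 distance between any two possible input volumes, not just
    neighboring records; the value used below is illustrative only.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return volume + rng.normal(0.0, sigma, size=volume.shape)

# Toy 8x8x8 "volume" with voxel intensities in [0, 1] (a real head MR
# volume in the paper is 256x256x256).
volume = np.random.default_rng(0).random((8, 8, 8))
noisy = gaussian_mechanism(volume, sensitivity=1.0, epsilon=0.5, delta=1e-5)
```

In the paper's setting, such noise would be injected upstream of sharing, and the diffusion model then denoises the perturbed volume into a clean image of the same class but with different semantic content than the original.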
Funder
Japan Science and Technology Agency