Recognition and 3D Visualization of Human Body Parts and Bone Areas Using CT Images
Author:
Nguyen Hai Thanh 1, Nguyen My N. 1,2, Nguyen Bang Anh 1, Nguyen Linh Chi 1, Phung Linh Duong 3
Affiliation:
1. College of Information and Communication Technology, Can Tho University, Can Tho, Vietnam
2. Kyoto Institute of Technology, Kyoto, Japan
3. College of Information Science and Engineering, Ritsumeikan University, Kyoto, Japan
Abstract
The advent of medical imaging has significantly assisted disease diagnosis and treatment. This study introduces a framework for detecting several human body parts in Computed Tomography (CT) images stored as DICOM files. In addition, the method can highlight the bone areas inside CT images and transform 2D slices into a 3D model to illustrate the structure of human body parts. First, we leveraged shallow Convolutional Neural Networks to classify body parts and detect bone areas in each part. Then, Grad-CAM was applied to highlight the bone areas. Finally, the Insight Toolkit (ITK) and Visualization Toolkit (VTK) libraries were utilized to visualize the slices of a body part in 3D. As a result, the classifiers achieved an F1-score of 98% for the classification of human body parts on a CT image dataset comprising 1,234 slices capturing body parts from a woman for the training phase and 1,245 slices from a man for testing. In addition, distinguishing between bone and non-bone images reached an F1-score of 97% on a dataset generated by setting a threshold value to reveal bone areas in CT images. Moreover, the Grad-CAM-based approach can provide clear, accurate visualizations with segmented bones in the image. We also successfully converted the 2D slice images of a body part into an interactive 3D model that provides a more intuitive view from any angle. The proposed approach is expected to provide a useful visual tool for supporting doctors in medical image-based disease diagnosis.
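The bone/non-bone dataset mentioned in the abstract is generated by thresholding CT intensities. A minimal NumPy sketch of that idea, assuming raw DICOM pixel values are first converted to Hounsfield Units (HU) with the standard rescale (`hu = pixel * RescaleSlope + RescaleIntercept`) and using an illustrative bone threshold of 300 HU (the paper's exact threshold is not given here):

```python
import numpy as np

HU_BONE_THRESHOLD = 300  # illustrative; dense bone typically exceeds ~300 HU

def to_hounsfield(pixel_array, slope=1.0, intercept=-1024.0):
    """Convert raw DICOM pixel values to Hounsfield Units (HU)."""
    return pixel_array.astype(np.float32) * slope + intercept

def bone_mask(slice_hu, threshold=HU_BONE_THRESHOLD):
    """Binary mask of pixels bright enough to be bone."""
    return slice_hu >= threshold

# Tiny synthetic slice: air, soft tissue, trabecular bone, cortical bone.
raw = np.array([[24, 1064], [1374, 2224]], dtype=np.int16)
hu = to_hounsfield(raw)   # [[-1000, 40], [350, 1200]] HU
mask = bone_mask(hu)      # [[False, False], [True, True]]
```

In practice the slope and intercept would be read from each DICOM file's `RescaleSlope` and `RescaleIntercept` tags (e.g. via pydicom) rather than hard-coded.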
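The Grad-CAM highlighting step can be sketched independently of any particular framework: the class-score gradients at a convolutional layer are global-average-pooled into per-channel weights, the feature maps are combined with those weights, and a ReLU keeps only regions that contribute positively to the class. A hedged NumPy sketch (in the actual system the feature maps and gradients would come from the trained CNN; the names and toy inputs here are illustrative):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from a conv layer's activations and gradients.

    feature_maps: (H, W, K) activations A^k of the chosen conv layer.
    gradients:    (H, W, K) d(class score)/dA^k from backpropagation.
    """
    weights = gradients.mean(axis=(0, 1))                  # alpha_k: pooled gradients
    cam = np.maximum((feature_maps * weights).sum(-1), 0)  # ReLU(sum_k alpha_k * A^k)
    if cam.max() > 0:
        cam = cam / cam.max()                              # scale to [0, 1] for overlay
    return cam

# Toy example: only channel 0 receives positive gradient, so its
# diagonal activation pattern dominates the heatmap.
A = np.stack([np.eye(2), np.ones((2, 2))], axis=-1)
dA = np.stack([np.ones((2, 2)), np.zeros((2, 2))], axis=-1)
heat = grad_cam(A, dA)  # [[1, 0], [0, 1]]
```

The resulting heatmap is typically resized to the CT slice's resolution and alpha-blended over it to produce the bone highlights described above.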
Publisher
Walter de Gruyter GmbH