Affiliation:
1. School of Biomedical Engineering, Capital Medical University, Beijing, China
2. Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China
3. Department of Radiology, Huaihe Hospital, Henan University, Kaifeng, China
4. Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, Beijing, China
Abstract
Background
Breast cancer is one of the most prevalent malignancies diagnosed in women. Inspecting mammograms to locate and delineate breast tumors is an essential prerequisite for a reliable diagnosis. However, manual analysis of mammograms by radiologists is time-consuming and error-prone. The development of computer-aided diagnostic (CAD) systems that automate the mass segmentation procedure is therefore highly desirable.
Purpose
Accurate breast mass segmentation in mammograms remains challenging for CAD systems due to the low contrast, varied shapes, and fuzzy boundaries of masses. In this paper, we propose a fully automatic and effective deep-learning-based mass segmentation model to improve segmentation performance.
Methods
We propose an effective transformer-based encoder-decoder model (TrEnD). First, we introduce a lightweight method for adaptive patch embedding (APE) in the transformer, which uses superpixels to adaptively adjust the size and position of each patch. Second, we introduce a hierarchical transformer encoder coupled with an attention-gated decoder, which progressively suppresses interfering feature activations in irrelevant background regions. Third, a dual-branch design extracts and fuses globally coarse and locally fine features in parallel, capturing global contextual information while preserving the relevance and integrity of local information. The model is evaluated on two public datasets, CBIS-DDSM and INbreast. To further demonstrate the robustness of TrEnD, different cropping strategies, termed tight, loose, maximal, and mix-frame, are applied to these datasets.
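The attention-gated decoder described above follows the general additive attention-gate pattern, in which encoder skip features are rescaled by a mask computed from the decoder's gating signal so that background activations are suppressed. The following NumPy sketch illustrates that pattern for a single spatial position; the function and weight names (`attention_gate`, `Wx`, `Wg`, `psi`) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate: skip features x are rescaled by a
    coefficient derived from the decoder gating signal g, which
    suppresses activations in irrelevant background regions."""
    # project both inputs, combine additively, then apply ReLU
    q = np.maximum(Wx @ x + Wg @ g, 0.0)
    # attention coefficient in (0, 1)
    alpha = sigmoid(psi @ q)
    # gated skip connection passed on to the decoder
    return x * alpha

# toy example: 4-dimensional features at one spatial position
rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # encoder skip features
g = rng.standard_normal(4)        # decoder gating signal
Wx = rng.standard_normal((4, 4))
Wg = rng.standard_normal((4, 4))
psi = rng.standard_normal((1, 4))
out = attention_gate(x, g, Wx, Wg, psi)
```

Because the sigmoid coefficient lies in (0, 1), the gated output is always an attenuated copy of the skip features, never an amplified one.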
Finally, an ablation analysis is performed to assess the individual contribution of each module to model performance.
Results
The proposed model achieves a Dice coefficient of 92.20% and an Intersection over Union (IoU) of 85.81% on the mix-frame CBIS-DDSM dataset, and 91.83% and 85.29%, respectively, on the mix-frame INbreast dataset, outperforming current state-of-the-art approaches. Adding the APE and attention-gated modules improved Dice and IoU by 6.54% and 10.07%, respectively.
Conclusion
Extensive qualitative and quantitative assessments show that the proposed network is effective for automatic breast mass segmentation and has strong potential to provide technical assistance for subsequent clinical diagnosis.
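The Dice coefficient and IoU reported in the Results are standard overlap metrics for binary segmentation masks. A minimal NumPy sketch of both (the function name `dice_iou` is illustrative, not from the paper):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice coefficient and IoU between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()   # |P ∩ G|
    union = np.logical_or(pred, gt).sum()    # |P ∪ G|
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return dice, iou

# toy 2x3 masks: 2 overlapping pixels, 3 positives in each mask
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
d, j = dice_iou(pred, gt)  # d = 2*2/(3+3) ≈ 0.667, j = 2/4 = 0.5
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers typically report them rising and falling together.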
Funder
National Natural Science Foundation of China
Cited by
2 articles.