Author:
Zhong Yutong, Piao Yan, Zhang Guohui
Abstract
Objective. Breast density is an important indicator of breast cancer risk. However, existing methods for breast density classification do not fully utilise the multi-view information produced by mammography and thus have limited classification accuracy. Method. In this paper, we propose a multi-view fusion network, denoted the local-global dynamic pyramidal-convolution transformer network (LG-DPTNet), for breast density classification in mammography. First, for single-view feature extraction, we develop a dynamic pyramid convolutional network that enables the network to adaptively learn global and local features. Second, we address the shortcomings of traditional multi-view fusion methods with a cross-transformer that integrates fine-grained information and global contextual information from different views, thereby providing the network with accurate predictions. Finally, we replace the traditional cross-entropy loss with an asymmetric focal loss during training to address the class imbalance present in public datasets, further improving the performance of the model. Results. We evaluated the effectiveness of our method on two publicly available mammography datasets, CBIS-DDSM and INbreast, achieving areas under the curve (AUC) of 96.73% and 91.12%, respectively. Conclusion. Our experiments demonstrate that the proposed fusion model utilises the information contained in multiple views more effectively than existing models and achieves classification performance superior to that of baseline and state-of-the-art methods.
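To illustrate the loss described in the abstract, below is a minimal NumPy sketch of one common asymmetric focal loss formulation. The exact variant and hyperparameters used in LG-DPTNet are not specified in the abstract; the `gamma_pos`/`gamma_neg` focusing exponents and their default values here are assumptions for illustration only.

```python
import numpy as np

def asymmetric_focal_loss(p, y, gamma_pos=0.0, gamma_neg=4.0, eps=1e-8):
    """One common asymmetric focal loss formulation (illustrative only;
    the paper's exact variant is not given in the abstract).

    p: predicted probabilities in [0, 1]; y: binary labels (same shape).
    Positive and negative terms use separate focusing exponents, so with
    gamma_neg > gamma_pos the many easy negatives of an imbalanced
    dataset are down-weighted more strongly than the positives.
    """
    p = np.clip(p, eps, 1.0 - eps)          # avoid log(0)
    loss_pos = y * (1.0 - p) ** gamma_pos * np.log(p)
    loss_neg = (1.0 - y) * p ** gamma_neg * np.log(1.0 - p)
    return -np.mean(loss_pos + loss_neg)
```

With `gamma_pos = gamma_neg = 0` this reduces to standard binary cross-entropy; increasing `gamma_neg` shrinks the contribution of confidently classified negatives, which is the mechanism the abstract invokes for handling class imbalance.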
Funder
Education Department of Jilin Province
National Natural Science Foundation of China
Jilin Province Science and Technology Plan Project
Subject
Radiology, Nuclear Medicine and Imaging; Radiological and Ultrasound Technology