Authors:
Wang Wei, Tian Yun, Xu Yang, Zhang Xiao-Xuan, Li Yan-Song, Zhao Shi-Feng, Bai Yan-Hua
Abstract
Background
Cervical cancer cell detection is an essential means of cervical cancer screening. However, for thin-prep cytology test (TCT) images, the detection accuracy of traditional computer-aided detection algorithms is typically low because overlapping cells have blurred cytoplasmic boundaries. Typical deep learning-based detection methods, e.g., ResNets and Inception-V3, are not always effective for cervical cell images because these images differ substantially from natural images. As a result, such networks are difficult to apply directly in the clinical practice of cervical cancer screening.
Method
We propose a cervical cancer cell detection network (3cDe-Net) based on an improved backbone network and multiscale feature fusion; the proposed network consists of a backbone network and a detection head. In the backbone network, dilated convolutions and group convolutions are introduced to improve the feature resolution and representational ability of the model. In the detection head, multiscale features are obtained with a feature pyramid fusion network to ensure that small cells are captured accurately; then, building on the Faster region-based convolutional neural network (R-CNN), adaptive cervical cancer cell anchors are generated via unsupervised clustering. Furthermore, a new balanced-L1-based loss function is defined, which reduces the imbalance in the loss contributions of different samples.
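The abstract states that the adaptive anchors are generated via unsupervised clustering but does not specify the algorithm. A common way to derive dataset-specific anchors, and one plausible reading of this step, is k-means over the widths and heights of the annotated cell boxes with 1 − IoU as the distance measure. The sketch below (Python/NumPy) illustrates that approach only; the function names and the example box sizes are placeholders, not values from the paper.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between box sizes and anchor sizes, using widths/heights only
    (both are treated as sharing the same top-left corner)."""
    # boxes: (N, 2) ground-truth (w, h); anchors: (K, 2)
    inter_w = np.minimum(boxes[:, None, 0], anchors[None, :, 0])
    inter_h = np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    inter = inter_w * inter_h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster ground-truth box sizes into k anchors with 1 - IoU as distance."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # assign each box to the anchor with the highest IoU
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        new_anchors = np.array([
            boxes[assign == i].mean(axis=0) if np.any(assign == i) else anchors[i]
            for i in range(k)
        ])
        if np.allclose(new_anchors, anchors):
            break
        anchors = new_anchors
    return anchors[np.argsort(anchors.prod(axis=1))]  # sort by area

# Hypothetical widths/heights (in pixels) of annotated cell bounding boxes
boxes = np.array([[32, 30], [40, 44], [64, 60], [28, 33], [96, 90], [50, 55]], dtype=float)
print(kmeans_anchors(boxes, k=3))
```

The resulting cluster centers would replace the fixed anchor scales and aspect ratios of a standard Faster R-CNN region proposal network, so that proposal shapes match the size distribution of cervical cells in the training data.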
Result
Baselines including ResNet-50, ResNet-101, Inception-v3, ResNet-152 and the feature concatenation network are evaluated on two datasets (Data-T and Herlev), and the quantitative results demonstrate the effectiveness of the proposed dilated convolution ResNet (DC-ResNet) backbone network. Furthermore, experiments on both datasets show that the proposed 3cDe-Net, which combines the optimal anchors, the newly defined loss function, and DC-ResNet, outperforms existing methods and achieves a mean average precision (mAP) of 50.4%. By comparing the cells within an image against one another, the categories and locations of cancer cells can be obtained simultaneously.
Conclusion
The proposed 3cDe-Net can detect cancer cells and their locations in multicell images. The model directly processes and analyses samples at the image level rather than at the cellular level, which is more efficient. In clinical settings, this can reduce the repetitive workload of doctors and allow them to focus on higher-level review work.
Funder
National Natural Science Foundation of China
Publisher
Springer Science and Business Media LLC
Subject
Radiology, Nuclear Medicine and Imaging