Abstract
Previously, convolutional neural networks have mostly used the deep semantic features obtained after many convolutions for image classification. Deep semantic features have a larger receptive field, and the extracted features become more discriminative as the number of convolution layers increases, which helps in classifying targets. However, this approach tends to lose shallow local features, such as the spatial connectivity and correlation of tumor-region texture and edge contours in breast histopathology images, so its recognition accuracy is not high enough. To address this problem, we propose a multi-level feature fusion method for breast histopathology image classification. First, we fuse shallow features and deep semantic features through an attention mechanism and convolutions. Then, a new weighted cross-entropy loss function is used to handle misjudged false negatives and false positives. Finally, the correlation of spatial information is used to correct the misjudgment of some patches. We conducted experiments on our own datasets and compared the results with the base network Inception-ResNet-v2, which itself achieves high accuracy. The proposed method achieves an accuracy of 99.0% and an AUC of 99.9%.
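The abstract's weighted cross-entropy idea can be illustrated with a minimal sketch. The paper's exact weighting scheme is not given here, so the function name, the per-class weights `w_pos` and `w_neg`, and their values are illustrative assumptions: the point is only that errors on the positive (tumor) class, i.e. false negatives, can be penalized more heavily than false positives.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, w_pos=2.0, w_neg=1.0):
    """Binary weighted cross-entropy (illustrative sketch).

    w_pos > w_neg penalizes errors on positive samples (false
    negatives) more heavily than errors on negative samples
    (false positives). The weights are assumed, not the paper's.
    """
    probs = np.clip(probs, 1e-7, 1 - 1e-7)  # numerical stability
    loss = -(w_pos * labels * np.log(probs)
             + w_neg * (1 - labels) * np.log(1 - probs))
    return loss.mean()

# A confidently missed positive (label 1, predicted 0.1) incurs a
# larger loss than an equally wrong negative (label 0, predicted 0.9).
fn_loss = weighted_cross_entropy(np.array([0.1]), np.array([1.0]))
fp_loss = weighted_cross_entropy(np.array([0.9]), np.array([0.0]))
```

With equal weights the two losses would be identical; doubling `w_pos` makes the false-negative loss twice the false-positive loss, biasing training toward fewer missed tumors.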
Funder
The Foundations of Major Weak Discipline Construction Project of Pudong Health and Family Planning Commission of Shanghai
Zhejiang Public Welfare Technology Research Plan / Industrial Project
Cited by
3 articles.