Affiliation:
1. Munich Institute of Biomedical Engineering and the School of Computation, Information, and Technology, Technical University of Munich, Munich, Germany
2. Institute for History and Ethics in Medicine and Munich School of Technology in Society, Technical University of Munich, Munich, Germany
3. Department of Radiology, University Hospital, Ludwig-Maximilians-Universität, Munich, Germany
Abstract
Background: Deep learning models are being applied to an ever-growing range of use cases with impressive success stories, but how do they perform in the real world? Models are typically tested on specific, cleaned data sets, yet once deployed they will encounter unexpected, out-of-distribution (OOD) data.

Purpose: To investigate the impact of OOD radiographs on existing chest x-ray classification models and to increase their robustness against OOD data.

Methods: The study employed the widely used chest x-ray classification model CheXNet, trained on the ChestX-ray14 data set, and tested its robustness against OOD data using three public radiography data sets (IRMA, Bone Age, and MURA) as well as the ImageNet data set. To detect OOD data for multi-label classification, we proposed in-distribution voting (IDV). OOD detection performance was measured across data sets using the area under the receiver operating characteristic curve (AUC) and compared with Mahalanobis-based OOD detection, MaxLogit, MaxEnergy, self-supervised OOD detection (SS OOD), and CutMix.

Results: Without additional OOD detection, the chest x-ray classifier failed to discard any OOD images, yielding an AUC of 0.5. The proposed IDV approach, trained on ID (ChestX-ray14) and OOD data (IRMA and ImageNet), achieved an average OOD AUC of 0.999 across the three data sets, surpassing all other OOD detection methods. Mahalanobis-based OOD detection achieved an average OOD detection AUC of 0.982. IDV trained solely with a few thousand ImageNet images achieved an AUC of 0.913, considerably higher than MaxLogit (0.726), MaxEnergy (0.724), SS OOD (0.476), and CutMix (0.376).

Conclusions: Except for Mahalanobis-based OOD detection and the proposed IDV method, the performance of the tested OOD detection methods did not translate well to radiography data sets. Consequently, training solely on ID data led to OOD images being incorrectly classified as ID, increasing false positive rates. IDV substantially improved the model's ID classification performance, even when trained with data that will not occur in the intended use case or test set (ImageNet), without additional inference overhead or a performance decrease on the target classification task. The corresponding code is available at https://gitlab.lrz.de/IP/a-knee-cannot-have-lung-disease.
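Two of the baselines compared in the abstract, MaxLogit and MaxEnergy, are standard post-hoc OOD scores computed directly from a classifier's logits: MaxLogit uses the largest logit, while the energy score is the (temperature-scaled) log-sum-exp of the logits. The sketch below is a generic illustration of these two baselines, not the paper's implementation; the toy logit values are invented for demonstration.

```python
import numpy as np

def max_logit_score(logits):
    # MaxLogit OOD score: the largest raw logit per sample.
    # Higher values indicate the sample looks more in-distribution.
    return np.max(logits, axis=-1)

def max_energy_score(logits, temperature=1.0):
    # Negative free energy: T * logsumexp(logits / T).
    # Like MaxLogit, higher values suggest in-distribution data.
    return temperature * np.logaddexp.reduce(logits / temperature, axis=-1)

# Toy example: confident "in-distribution" logits vs. flat "OOD" logits.
id_logits = np.array([[6.0, 1.0, 0.5], [5.5, 0.2, 0.1]])
ood_logits = np.array([[1.1, 1.0, 0.9], [0.8, 0.7, 0.6]])

# Both scores rank the confident samples above the flat ones.
print(max_logit_score(id_logits).mean() > max_logit_score(ood_logits).mean())
print(max_energy_score(id_logits).mean() > max_energy_score(ood_logits).mean())
```

In practice, a threshold on such a score decides whether an image is forwarded to the classifier or rejected as OOD; the abstract's AUC numbers measure how well each score separates radiographs of the target anatomy from other data.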
Funder
Bundesministerium für Gesundheit
Cited by
2 articles.