Abstract
Early and accurate prediction of COVID-19 from medical images can speed up the diagnostic process and thereby mitigate the spread of the disease; developing AI-based models is therefore an inevitable endeavor. The presented work is, to our knowledge, the largest empirical COVID-19 classification study using CT and X-ray images, in which we propose a novel computational framework constructing 10000 deep transfer learning (DTL) models as follows. First, we downloaded and processed 4481 CT and X-ray images pertaining to COVID-19 and non-COVID-19 patients, obtained from the Kaggle repository. Second, we provided the processed images as input to four deep learning models pre-trained on more than a million images from the ImageNet database, in which we froze the convolutional and pooling layers constituting the feature-extraction part while unfreezing and training the densely connected classifier with the Adam optimizer. Third, we generated and took the majority vote of all 2-, 3-, and 4-combinations of the 4 DTL models, resulting in 11 models. Then, we combined the 11 DTL models, followed by consecutively generating and taking the majority vote of additional combinations of DTL models. Finally, we selected 7953 DTL models from those generated. Experimental results on the whole datasets using five-fold cross-validation demonstrate that the best generated DTL model, named HC, achieved the best AUC of 0.909 when applied to the CT dataset, while ConvNeXt yielded a marginally higher AUC of 0.933 compared to 0.93 for HX when considering the X-ray dataset. These promising results lay the foundation for promoting the large generation of models (LGM) in AI.
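To make the pipeline concrete, below is a minimal sketch in Keras of the two building blocks described above: a DTL model whose ImageNet-pre-trained convolutional/pooling base is frozen while a densely connected classifier is trained with the Adam optimizer, and hard majority voting over 2-, 3-, and 4-combinations of the base models. The choice of VGG16, the input size, and all hyperparameters are illustrative assumptions, not the settings or released code of the study.

```python
# Sketch only: the paper uses four different pre-trained architectures; for brevity
# this example reuses a single one (VGG16) as an assumed stand-in.
from itertools import combinations

import numpy as np
from tensorflow import keras


def build_dtl_model(input_shape=(224, 224, 3)):
    # Pre-trained convolutional/pooling base from ImageNet, frozen (feature extraction).
    base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                    input_shape=input_shape)
    base.trainable = False

    # Unfrozen densely connected classifier, trained with the Adam optimizer.
    model = keras.Sequential([
        base,
        keras.layers.Flatten(),
        keras.layers.Dense(256, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # COVID-19 vs. non-COVID-19
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy",
                  metrics=[keras.metrics.AUC()])
    return model


def majority_vote(models, x):
    # Hard majority vote over the binary predictions of a set of DTL models
    # (ties in even-sized ensembles are broken toward the positive class here).
    votes = np.stack([(m.predict(x) >= 0.5).astype(int).ravel() for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)


# Enumerating all 2-, 3-, and 4-combinations of the 4 base models gives
# C(4,2) + C(4,3) + C(4,4) = 11 voted ensembles.
base_models = [build_dtl_model() for _ in range(4)]
ensembles = [combo for k in (2, 3, 4) for combo in combinations(base_models, k)]
```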
Publisher
Cold Spring Harbor Laboratory