Affiliation:
1. Erzincan University, Faculty of Engineering
2. Firat University
Abstract
Deep learning, which has seen frequent use in recent studies, has helped solve the problem of classifying objects of many different types and properties. Most studies build and train a convolutional neural network (CNN) from scratch, so the time spent training the network cannot be reused. Transfer learning (TL) is used both to avoid this training cost and to classify small datasets more effectively. This study performs classification on a dataset containing eighteen types of fasteners. Our study covers three different TL scenarios: two use TL with fine-tuning (FT), while the third uses TL with feature extraction (FE). The study compares the classification performance of eighteen different pre-trained network models (i.e., one or more versions of EfficientNet, DenseNet, InceptionResNetV2, InceptionV3, MobileNet, ResNet50, Xception, and VGGNet) in detail. Compared with other research in the literature, our first and second scenarios provide strong implementations of TL-FT, while our third scenario, TL-FE, is a hybrid approach and produces better results than the other two. Furthermore, our findings are superior to those of most previous studies. The best-performing models are DenseNet169 with an accuracy of 0.97 in the TL-FT1 scenario, EfficientNetB0 with 0.96 in TL-FT2, and DenseNet169 with 0.995 in TL-FE.
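A minimal sketch of how the two TL strategies named above (fine-tuning vs. feature extraction) could be set up in Keras, assuming an ImageNet-pretrained DenseNet169 backbone, 224x224 RGB inputs, and an 18-class softmax head. The input size, dropout rate, learning rates, and classifier head are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet169

NUM_CLASSES = 18  # eighteen fastener types, as described in the abstract


def build_model(fine_tune: bool) -> tf.keras.Model:
    """Build a transfer-learning classifier on an ImageNet-pretrained DenseNet169."""
    base = DenseNet169(weights="imagenet", include_top=False,
                       input_shape=(224, 224, 3))  # assumed input size
    # Feature extraction (TL-FE style): freeze the whole backbone.
    # Fine-tuning (TL-FT style): leave the backbone weights trainable.
    base.trainable = fine_tune

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),  # hypothetical regularisation choice
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    # A smaller learning rate is commonly used when fine-tuning pretrained weights.
    lr = 1e-5 if fine_tune else 1e-3
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Frozen-backbone (feature-extraction) and trainable-backbone (fine-tuning) variants.
fe_model = build_model(fine_tune=False)
ft_model = build_model(fine_tune=True)
```

The same construction applies to the other pre-trained backbones compared in the paper (e.g., EfficientNetB0) by swapping the `DenseNet169` import for the corresponding `tf.keras.applications` model.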