Authors:
Franco-Garcia Michael, Benasutti Alex, Pearlstein Larry, Alabsi Mohammed
Abstract
Intelligent fault diagnosis utilizing deep learning algorithms has been widely investigated in recent years. Although previous results demonstrated excellent performance, the features learned by Deep Neural Networks (DNNs) remain part of a large black box. Consequently, a lack of understanding of the underlying physical meaning embedded within the features can lead to poor performance when models are applied to different but related datasets, i.e., in transfer learning applications. This study investigates the transfer learning performance of a Convolutional Neural Network (CNN) under four different operating conditions. Utilizing the Case Western Reserve University (CWRU) bearing dataset, the CNN is trained to classify 12 classes, each representing a unique fault scenario with a distinct severity, e.g., inner race faults of 0.007” and 0.014” diameter. Initially, zero-load data are used for model training, and the model is tuned until a testing accuracy above 99% is obtained. Model performance is then evaluated by feeding in vibration data collected when the load is varied to 1, 2, and 3 HP. Initial results indicate that the classification accuracy degrades substantially under these load changes. Hence, this paper visualizes the convolution kernels in the time and frequency domains and investigates the influence of changing loads on the fault characteristics, the network's classification mechanism, and the activation strength.
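To make the described setup concrete, the following is a minimal sketch rather than the authors' implementation: a hypothetical 1D CNN for 12-class classification of fixed-length vibration segments (such as those extracted from the CWRU drive-end signals), together with a helper that returns the first-layer kernels and their magnitude spectra for time- and frequency-domain visualization. The layer sizes, the segment length of 2048 samples, and the 12 kHz sampling rate are illustrative assumptions.

```python
# Hypothetical sketch only: a small 1D CNN for 12-class bearing fault
# classification on vibration segments, plus a first-layer kernel viewer.
# Architecture, segment length, and sampling rate are assumptions.
import numpy as np
import torch
import torch.nn as nn

class FaultCNN(nn.Module):
    def __init__(self, n_classes=12, segment_len=2048):
        super().__init__()
        self.features = nn.Sequential(
            # Wide first-layer kernels so their spectra are easy to interpret.
            nn.Conv1d(1, 16, kernel_size=64, stride=8, padding=28),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        with torch.no_grad():
            feat_dim = self.features(torch.zeros(1, 1, segment_len)).numel()
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):  # x: (batch, 1, segment_len)
        return self.classifier(self.features(x).flatten(1))

def kernel_spectra(model, fs=12_000):
    """Return first-layer kernels (time domain) and their magnitude spectra."""
    kernels = model.features[0].weight.detach().cpu().numpy()[:, 0, :]  # (16, 64)
    freqs = np.fft.rfftfreq(kernels.shape[1], d=1.0 / fs)
    spectra = np.abs(np.fft.rfft(kernels, axis=1))
    return kernels, freqs, spectra

if __name__ == "__main__":
    model = FaultCNN()
    x = torch.randn(8, 1, 2048)      # stand-in for vibration segments
    print(model(x).shape)            # torch.Size([8, 12])
    k, f, s = kernel_spectra(model)
    print(k.shape, s.shape)          # (16, 64) (16, 33)
```

In an experiment of the kind described in the abstract, such a model would be trained on zero-load segments and then evaluated, without further tuning, on segments recorded at 1, 2, and 3 HP.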
Cited by: 2 articles.