Affiliation:
1. Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
2. Faculty of Mechatronics Engineering, K. N. Toosi University of Technology, Tehran, Iran
3. Mapna Electric & Control Engineering & Manufacturing (MECO), Karaj, Iran
4. Faculty of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, Iran
Abstract
In modern industrial systems, diagnosing faults promptly and with the most suitable methods is increasingly crucial. If faults are not detected, or are detected too late, a system may fail or resources may be wasted. Machine learning and deep learning (DL) offer various methods for data‐based fault diagnosis, and the authors seek the most reliable and practical ones. A framework based on DL and reinforcement learning (RL) is developed for fault detection, employing two algorithms: Q‐learning and Soft Q‐learning. RL frameworks frequently include efficient policy‐update algorithms, such as Q‐learning, which optimise the policy based on predictions and rewards, resulting in more efficient updates and faster convergence. By updating the RL policy when new data arrive, the authors increase accuracy, overcome data imbalance, and better predict future defects. Applying their method yields a 3%–4% increase in all evaluation metrics from policy updating, an improvement in prediction speed, and a 3%–6% increase in all evaluation metrics compared with a typical backpropagation multi‐layer neural network with comparable parameters. In addition, the Soft Q‐learning algorithm yields better outcomes than Q‐learning.
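The abstract contrasts Q‐learning with Soft Q‐learning as policy‐update rules. As a minimal illustrative sketch (not the authors' implementation, whose state/action design and network architecture are not given here), the two tabular update rules differ only in how the next‐state value is computed: the hard max for Q‐learning versus an entropy‐regularised log‐sum‐exp "soft" value for Soft Q‐learning:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Standard Q-learning: bootstrap from the max next-state value."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

def soft_q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, tau=1.0):
    """Soft Q-learning: replace the max with an entropy-regularised
    soft value, V(s') = tau * log sum_a exp(Q(s', a) / tau)."""
    soft_v = tau * np.log(np.sum(np.exp(Q[s_next] / tau)))
    target = r + gamma * soft_v
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```

The temperature `tau` controls how close the soft value is to the hard max; as `tau → 0` the soft update recovers ordinary Q‐learning, while larger values encourage more exploratory (higher‐entropy) policies.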
Publisher
Institution of Engineering and Technology (IET)
Subject
Artificial Intelligence,Industrial and Manufacturing Engineering,Computer Science Applications,Hardware and Architecture
Cited by
4 articles.