Abstract
We developed a self-optimizing decision system that dynamically minimizes the overall energy consumption of an industrial process. Our model is based on a deep reinforcement learning (DRL) framework adopting three reinforcement learning methods, namely the deep Q-network (DQN), proximal policy optimization (PPO), and advantage actor–critic (A2C) algorithms, combined with a self-predicting random forest model. This smart decision system is a physics-informed DRL that sets the key industrial input parameters to minimize energy consumption while ensuring product quality, as defined by the desired output parameters. The system is self-improving and can increase its performance without further human assistance. We applied the approach to the heating process of tempered glass: identifying and controlling tempered-glass parameters is a challenging task requiring expertise, and optimizing energy consumption while addressing this issue adds significant value. We evaluated the decision system under the three configurations and report the resulting outcomes and conclusions in this paper. Our intelligent decision system provides an optimized set of parameters for the heating process within the acceptance limits while minimizing overall energy consumption. This work lays the foundations for addressing energy optimization issues related to process parameterization, from theory to practice, and provides a real industrial application; further research opens a new horizon towards intelligent and sustainable manufacturing.
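The combination described above can be illustrated with a minimal sketch. Here, a random-forest surrogate learns to predict product quality and energy use from heating parameters, and a simple greedy search over candidate parameter sets stands in for the DRL agent (DQN/PPO/A2C in the paper). All parameter names, ranges, and the toy quality/energy models below are hypothetical assumptions for illustration, not the authors' actual system.

```python
# Illustrative sketch, NOT the authors' implementation: a random-forest
# surrogate predicts [quality, energy] from heating parameters; a greedy
# search over candidates stands in for the trained DRL policy.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic process data: [furnace_temp_C, heating_time_s] (assumed ranges).
X = rng.uniform([600.0, 100.0], [700.0, 300.0], size=(500, 2))
# Toy quality model: best near 650 C and 200 s (purely illustrative).
quality = 1.0 - np.abs(X[:, 0] - 650.0) / 100.0 - np.abs(X[:, 1] - 200.0) / 400.0
# Toy energy model: grows with temperature and heating time.
energy = 0.0001 * X[:, 0] * X[:, 1]

# The "self-predicting" surrogate: one forest, two outputs.
surrogate = RandomForestRegressor(n_estimators=50, random_state=0)
surrogate.fit(X, np.column_stack([quality, energy]))

def reward(params):
    """Negative energy, with a penalty when predicted quality falls
    below an assumed acceptance limit of 0.8."""
    q, e = surrogate.predict(params.reshape(1, -1))[0]
    penalty = 0.0 if q >= 0.8 else 10.0 * (0.8 - q)
    return -e - penalty

# Greedy selection over random candidates (stand-in for the RL agent).
candidates = rng.uniform([600.0, 100.0], [700.0, 300.0], size=(200, 2))
best = max(candidates, key=reward)
print("chosen parameters:", best)
```

In the paper's framework, the greedy search would be replaced by a DRL agent that updates its policy from the surrogate's feedback, which is what makes the system self-improving.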
Subject
Information Systems and Management, Computer Networks and Communications, Modeling and Simulation, Control and Systems Engineering, Software
Cited by
9 articles.