Authors:
Zhu Haihua, Gui Yong, Xu Hui, Tao Shuai, Zheng Kun
Abstract
At present, manufacturing is characterized by multiple product varieties, small batches, and diversified demand, and traditional scheduling methods are insufficient for high-performance production management. To address these issues, a real-time production scheduling system based on reinforcement learning (RL) is proposed. A new manufacturing neural network is designed to learn state-action values for real-time production scheduling, taking high-dimensional data as input, and the detailed design of the network inputs, neural network architecture, actions, and reward is also presented. A policy-based reinforcement learning algorithm is then proposed to achieve the optimization objective. Finally, the efficacy of the proposed scheduling strategy is demonstrated by comparing it with rule-based approaches in a smart manufacturing environment. According to the experimental data, the proposed algorithm effectively improves performance in a dynamic job-shop environment.
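The policy-based RL scheduling described in the abstract can be illustrated with a minimal sketch. The code below is an illustrative assumption, not the paper's actual design: it uses a linear softmax policy trained with a REINFORCE-style update to choose among hypothetical dispatching rules (SPT, FIFO, EDD), with a made-up state vector and reward.

```python
import numpy as np

RULES = ["SPT", "FIFO", "EDD"]  # hypothetical dispatching rules as actions


def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()


class PolicyScheduler:
    """Linear softmax policy over dispatching rules (toy sketch)."""

    def __init__(self, n_features, n_actions, lr=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = np.zeros((n_actions, n_features))  # policy parameters
        self.lr = lr

    def act(self, state):
        # Sample a dispatching rule from the softmax policy pi(a|s)
        probs = softmax(self.W @ state)
        a = int(self.rng.choice(len(probs), p=probs))
        return a, probs

    def update(self, trajectory):
        # REINFORCE: W += lr * G * grad log pi(a|s) for each visited step
        G = sum(r for _, _, r in trajectory)  # undiscounted episode return
        for state, a, _ in trajectory:
            probs = softmax(self.W @ state)
            grad = -np.outer(probs, state)  # grad log-softmax, all rows
            grad[a] += state                # plus the chosen-action term
            self.W += self.lr * G * grad


# Toy usage: reward +1 for choosing rule 0 (SPT here), -1 otherwise.
sched = PolicyScheduler(n_features=3, n_actions=len(RULES), lr=0.1, seed=0)
state = np.array([1.0, 0.5, 0.2])  # fabricated shop-state features
for _ in range(200):
    a, _ = sched.act(state)
    r = 1.0 if a == 0 else -1.0
    sched.update([(state, a, r)])
_, probs = sched.act(state)  # policy now concentrates on rule 0
```

In the paper's setting the linear policy would be replaced by the proposed manufacturing neural network over high-dimensional state inputs, and the scalar reward by the scheduling objective; the update rule above only sketches the policy-gradient principle.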
Subject
General Physics and Astronomy