Affiliation:
1. Department of Chemical Engineering, McMaster University, Hamilton, Ontario, Canada
2. Corporate Research, Sartorius, Oakville, Ontario, Canada
Abstract
This article develops reinforcement learning (RL)‐based controllers for process control applications. Existing RL‐based solutions face significant challenges for online implementation, since training an RL agent (controller) presently requires a practically infeasible number of online interactions between the agent and the environment (process). To address this challenge, we propose an implementable model‐free RL method that leverages industrially implemented model predictive control (MPC) calculations (often designed using a simple linear model identified via step tests). In the first step, MPC calculations are used to pretrain an RL agent that mimics the MPC performance. Specifically, the MPC calculations are used to pretrain the actor, and the MPC objective function is used to pretrain the critic(s). The pretrained RL agent is then employed within a model‐free RL framework to control the process in a way that initially imitates MPC behavior (thus not compromising process performance or safety), but continuously learns and improves its performance over the nominal linear MPC. The effectiveness of the proposed approach is illustrated through simulations on a chemical reactor example.
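The pretraining step in the abstract can be illustrated with a minimal behavior-cloning sketch. Everything here is an assumption for illustration: the paper's MPC calculations are stood in for by a known linear state-feedback law, and the actor is a simple linear policy fitted by least squares to stored state-action pairs, so that the RL agent initially reproduces the MPC policy before model-free learning begins.

```python
import numpy as np

# Stand-in for stored MPC calculations (assumption): a linear
# state-feedback law u = -K x playing the role of the nominal MPC.
rng = np.random.default_rng(0)
K_mpc = np.array([[0.8, 0.3]])  # hypothetical "MPC" gain

# Step 1: collect (state, action) pairs from the MPC calculations.
states = rng.normal(size=(200, 2))
actions = states @ (-K_mpc.T)   # u = -K x for each recorded state

# Step 2: pretrain a linear actor by least squares (behavior cloning),
# so that the agent initially mimics the MPC policy.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The pretrained actor should reproduce the MPC action on new states.
x_test = np.array([1.0, -2.0])
u_actor = x_test @ W
u_mpc = -K_mpc @ x_test
print(np.allclose(u_actor, u_mpc, atol=1e-6))  # → True
```

In the paper's framework this pretrained actor (together with a critic pretrained from the MPC objective) would then be handed to a model-free RL algorithm, which continues to update the policy from online interaction and can improve on the nominal linear MPC.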
Funder
Natural Sciences and Engineering Research Council of Canada
Subject
General Chemical Engineering, Environmental Engineering, Biotechnology
Cited by
7 articles.