Author
Daniel Koutas, Elizabeth Bismut, Daniel Straub
Abstract
We propose a novel Deep Reinforcement Learning (DRL) architecture for sequential decision processes under uncertainty, as encountered in inspection and maintenance (I&M) planning. Unlike other DRL algorithms for I&M planning, the proposed +RQN architecture dispenses with computing the belief state and instead directly handles erroneous observations. We apply the algorithm to a basic I&M planning problem for a one-component system subject to deterioration. In addition, we investigate the performance of Monte Carlo tree search for the I&M problem and compare it to the +RQN. The comparison includes a statistical analysis of the two methods' resulting policies, as well as their visualization in the belief space.
Funder
Technische Universität München
Publisher
Springer Science and Business Media LLC