Abstract
This paper proposes a novel model-free reinforcement learning (RL) approach for vibration control of a magnetorheological elastomer (MRE)-based application. Because MRE stiffness is nonlinear and time-varying, depending on various environmental factors, the MRE control problem is addressed with a model-free, learning-based method. In this study, an RL model is designed for an MRE-based tunable vibration absorber (TVA) that tunes the MRE stiffness to maximize vibration suppression. The designed RL algorithm continuously learns and updates the optimal control input for the MRE stiffness, adapting to the dynamic environment without any prior knowledge of the MRE model. Based on an analysis of the MRE TVA mechanism, the RL algorithm and its parameters are carefully designed for high vibration-suppression performance. This study also proposes several ideas to simplify the RL model and accelerate its convergence. Experiments confirmed that the proposed RL model converges rapidly to the optimal policy, which minimizes the vibration level under dynamic excitation disturbances. Results showed that the RL model performed similarly to the conventional tuning method and suppressed the vibration level by as much as 57% compared with the uncontrolled case. The proposed RL algorithm was also able to estimate the actual dynamics of the MRE TVA by learning from the environment. Thus, this study demonstrates the feasibility of implementing a model-free RL model to realize an adaptive controller for applications based on highly nonlinear MREs.
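To illustrate the general idea of model-free stiffness tuning described above, the sketch below shows a minimal tabular Q-learning loop that selects a discrete coil-current level (which sets the MRE stiffness) and uses the negative measured vibration level as the reward. This is not the paper's algorithm; the state discretization, current levels, hyperparameters, and the helper functions apply_current, measure_vibration_level, and observe_excitation_bin are all hypothetical placeholders for the test-rig interfaces.

```python
import random
from collections import defaultdict

# Hypothetical interfaces to the experimental rig (placeholders, not from the paper).
def apply_current(level):
    """Send the selected coil-current level to the MRE absorber."""
    pass

def measure_vibration_level():
    """Return the measured vibration amplitude after the response settles."""
    return random.random()

def observe_excitation_bin():
    """Return a discretized index of the current excitation condition."""
    return random.randint(0, 9)

CURRENT_LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0]   # assumed discrete control inputs (A)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1        # learning rate, discount, exploration

Q = defaultdict(lambda: [0.0] * len(CURRENT_LEVELS))

state = observe_excitation_bin()
for step in range(1000):
    # Epsilon-greedy selection over the discrete stiffness (current) levels.
    if random.random() < EPSILON:
        action = random.randrange(len(CURRENT_LEVELS))
    else:
        action = max(range(len(CURRENT_LEVELS)), key=lambda a: Q[state][a])

    apply_current(CURRENT_LEVELS[action])
    reward = -measure_vibration_level()      # lower vibration => higher reward
    next_state = observe_excitation_bin()

    # Standard model-free Q-learning update: no MRE stiffness model is required.
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
    state = next_state
```

Under these assumptions, the learned greedy policy maps each excitation condition to the current level that minimizes the measured vibration, which is the adaptive-tuning behavior the abstract describes.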
Funder
National Research Foundation of Korea
Korea Hydro & Nuclear Power company
Subject
Electrical and Electronic Engineering; Mechanics of Materials; Condensed Matter Physics; General Materials Science; Atomic and Molecular Physics, and Optics; Civil and Structural Engineering; Signal Processing
Cited by
7 articles.