Affiliation:
1. School of Aeronautics and Astronautics, Zhejiang University, Zhejiang 310027, China
Abstract
Deep reinforcement learning (RL) is capable of identifying and modifying strategies for active flow control. However, the classical formulation of deep RL requires lengthy active exploration. This paper introduces expert demonstration into a classic off-policy RL algorithm, soft actor-critic, and applies it to vortex-induced vibration problems. The combined online-learning framework is evaluated in an oscillator wake environment and a Navier–Stokes environment, with expert demonstrations obtained from the pole-placement method and from surrogate model optimization. The results show that soft actor-critic combined with expert demonstration learns active flow control strategies rapidly by drawing on both prior demonstration data and online experience. This study develops a new data-efficient RL approach for discovering active flow control strategies for vortex-induced vibration, providing a more practical methodology for industrial applications.
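The core idea summarized above is to seed an off-policy learner with expert transitions and then blend them with online rollouts. The sketch below is an illustration of that general pattern, not the authors' implementation: the class name MixedReplayBuffer, the fixed demo/online sampling ratio, and the transition format are assumptions made for the example.

```python
# A minimal sketch, assuming demonstrations are stored as ordinary transitions
# and mixed into each training batch at a fixed fraction. This is a generic
# demonstration-seeded replay buffer for an off-policy learner such as
# soft actor-critic, not the paper's exact method.
import random
from collections import deque


class MixedReplayBuffer:
    """Replay buffer that keeps expert demonstrations alongside online data."""

    def __init__(self, capacity=100_000, demo_fraction=0.25):
        self.online = deque(maxlen=capacity)  # transitions from the live environment
        self.demo = []                        # expert transitions, never overwritten
        self.demo_fraction = demo_fraction    # share of each batch drawn from demos

    def add_demo(self, state, action, reward, next_state, done):
        self.demo.append((state, action, reward, next_state, done))

    def add_online(self, state, action, reward, next_state, done):
        self.online.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Draw a fixed fraction of the batch from demonstrations and the rest
        # from online experience; fall back gracefully when either pool is small.
        n_demo = min(int(batch_size * self.demo_fraction), len(self.demo))
        n_online = min(batch_size - n_demo, len(self.online))
        batch = random.sample(self.demo, n_demo) if n_demo else []
        batch += random.sample(list(self.online), n_online)
        return batch


# Usage: prefill with controller demonstrations (e.g. from a model-based
# controller), then interleave environment rollouts during training.
buffer = MixedReplayBuffer()
buffer.add_demo(state=[0.0], action=[0.1], reward=1.0, next_state=[0.1], done=False)
buffer.add_online(state=[0.1], action=[0.0], reward=0.5, next_state=[0.1], done=False)
batch = buffer.sample(batch_size=2)
```

In this pattern the demonstration pool is kept separate from the online buffer so early training batches are not dominated by random exploration, which is one common way to realize the data efficiency described in the abstract.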
Funder
Natural Science Foundation of Zhejiang Province
Fundamental Research Funds for the Central Universities
Subject
Condensed Matter Physics, Fluid Flow and Transfer Processes, Mechanics of Materials, Computational Mechanics, Mechanical Engineering
Cited by
13 articles.