Affiliation
1. California Institute of Technology, Pasadena, California 91125
Abstract
We propose a framework for developing wall models for large-eddy simulation that captures pressure-gradient effects using multi-agent reinforcement learning. Within this framework, distributed reinforcement-learning agents receive off-wall environmental states, including the pressure gradient and turbulence strain rate, ensuring adaptability to a wide range of flows characterized by pressure-gradient effects and separation. Based on these states, the agents determine an action that adjusts the wall eddy viscosity and, consequently, the wall-shear stress. The model is trained in situ at wall-modeled large-eddy-simulation grid resolutions and does not rely on instantaneous velocity fields from high-fidelity simulations. Throughout training, the agents compute rewards from the relative error in the estimated wall-shear stress, which allows them to refine a control policy that minimizes prediction errors. Employing this framework, wall models are trained for two distinct subgrid-scale models using low-Reynolds-number flow over periodic hills. These models are validated through simulations of flow over periodic hills at higher Reynolds numbers and flow over the Boeing Gaussian bump. The developed wall models successfully capture the acceleration and deceleration of wall-bounded turbulent flows under pressure gradients and outperform the equilibrium wall model in predicting skin friction.
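The abstract summarizes the state-action-reward structure of the framework. Purely as an illustrative sketch (the function names, nondimensional groupings, multiplicative action, and reward shaping below are assumptions for exposition, not the authors' implementation), the loop executed by each off-wall agent might be organized as follows:

import numpy as np

# Hypothetical sketch of the state/action/reward exchange described in the
# abstract; variable names and scalings are illustrative assumptions only.

def agent_state(dp_dx, strain_rate, u_les, y_m):
    """Nondimensional off-wall state seen by one wall agent.

    dp_dx       : streamwise pressure gradient at the matching point
    strain_rate : magnitude of the resolved strain-rate tensor
    u_les       : wall-parallel LES velocity at the matching height y_m
    """
    # Illustrative nondimensional groups (assumed, not from the paper).
    return np.array([
        dp_dx * y_m / (u_les**2 + 1e-12),    # pressure-gradient parameter
        strain_rate * y_m / (u_les + 1e-12)  # strain-rate parameter
    ])

def apply_action(nu_t_wall, action, max_change=0.1):
    """Multiplicatively adjust the wall eddy viscosity by the agent's action."""
    return nu_t_wall * (1.0 + np.clip(action, -max_change, max_change))

def wall_shear_stress(nu, nu_t_wall, u_les, y_m, rho=1.0):
    """Wall-shear stress from an eddy-viscosity closure of the wall-normal flux."""
    return rho * (nu + nu_t_wall) * u_les / y_m

def reward(tau_w_model, tau_w_ref):
    """Negative relative error in the predicted wall-shear stress."""
    return -abs(tau_w_model - tau_w_ref) / (abs(tau_w_ref) + 1e-12)

In the actual framework, this exchange would be embedded in the wall-modeled large-eddy simulation and repeated at every coupling step, with the reference wall-shear stress needed for the reward available only during training.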
Funder
the Stanford University Center for Turbulence Research Summer Program
Division of Chemical, Bioengineering, Environmental, and Transport Systems
Publisher
American Institute of Aeronautics and Astronautics (AIAA)