Publisher
University of Tennessee at Chattanooga
Place of Publication
Chattanooga (Tenn.)
Abstract
Subgrid-scale (SGS) modeling in Large Eddy Simulation (LES) has gained significant attention in recent years because of its critical role in determining the accuracy and realism of LES predictions. The SGS model estimates the effect of unresolved turbulent motions on the resolved scales, enabling the simulation to predict turbulence-induced mixing and the transport of heat, mass, and momentum more accurately. SGS modeling also enhances simulation stability and robustness by preventing the buildup of numerical errors. In recent years, reinforcement learning (RL) has emerged as a promising approach to SGS modeling in LES. Its essential advantage is the ability to learn from a relatively small amount of high-fidelity data, or from low-fidelity models, addressing the challenge of limited data availability. This work investigates the advantages and potential use of RL for SGS modeling in LES.
Document Type
Posters
Language
English
Rights
http://rightsstatements.org/vocab/InC/1.0/
License
http://creativecommons.org/licenses/by/4.0/
Recommended Citation
Amiri, Amin and Ranjan, Reetesh, "Investigation of Reinforcement Learning in Subgrid-Scale Modeling in Large Eddy Simulation". ReSEARCH Dialogues Conference proceedings. https://scholar.utc.edu/research-dialogues/2023/proceedings/13.
Investigation of Reinforcement Learning in Subgrid-Scale Modeling in Large Eddy Simulation
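
The abstract describes the SGS model's role as estimating the effect of unresolved turbulent motions on the resolved scales. As a point of reference (the poster itself does not spell out a particular formulation), the standard approach filters the incompressible Navier-Stokes equations, which leaves an unclosed SGS stress tensor; an eddy-viscosity closure such as the Smagorinsky model, whose coefficient is the kind of quantity an RL agent could adapt, approximates it:

\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial (\bar{u}_i \bar{u}_j)}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j} - \frac{\partial \tau_{ij}}{\partial x_j}, \qquad \tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j

\tau_{ij} - \tfrac{1}{3}\tau_{kk}\delta_{ij} \approx -2\,(C_s \Delta)^2 |\bar{S}|\, \bar{S}_{ij}, \qquad \bar{S}_{ij} = \tfrac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right), \quad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}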
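
The abstract's other point is that RL can learn an SGS closure from a relatively small amount of high-fidelity data by interacting with the running simulation rather than fitting a fixed labeled dataset. Below is a minimal, hypothetical Python sketch of such a loop, in which a simple policy-gradient agent adapts the Smagorinsky coefficient so that resolved-scale statistics match a reference. The function names (run_les_step, resolved_energy_spectrum), the reward definition, and the policy form are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hypothetical sketch of an RL loop that adapts the Smagorinsky coefficient C_s.
# All function bodies are placeholders; a real setup would couple to an LES solver.

def run_les_step(state, cs):
    # Placeholder: advance the LES by one step with coefficient cs and return
    # the updated resolved field. A real implementation calls the flow solver.
    return state

def resolved_energy_spectrum(state):
    # Placeholder: compute a resolved-scale statistic (e.g., an energy spectrum)
    # used to score the closure against high-fidelity reference data.
    return np.zeros(16)

reference_spectrum = np.zeros(16)   # would come from filtered DNS or experiments

def reward(state):
    # Higher reward when the resolved statistics match the reference.
    return -np.sum((resolved_energy_spectrum(state) - reference_spectrum) ** 2)

cs_mean = 0.17    # policy parameter: mean Smagorinsky coefficient
sigma = 0.02      # exploration noise of the Gaussian policy
lr = 1.0e-3       # learning rate

state = np.zeros(16)    # stand-in for the resolved LES field
for episode in range(100):
    # Sample an action (a candidate C_s) from the Gaussian policy.
    cs = cs_mean + sigma * np.random.randn()
    state = run_les_step(state, cs)
    r = reward(state)
    # REINFORCE-style update: the gradient of log N(cs; cs_mean, sigma^2) with
    # respect to the mean is (cs - cs_mean) / sigma^2, so the mean is nudged
    # toward actions that scored well.
    cs_mean += lr * r * (cs - cs_mean) / sigma ** 2

In practice the action would typically be field-dependent (for example, a coefficient per grid cell) and the policy a neural network, but the structure of the interaction loop is the same.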