Abstract
With recent advances in artificial intelligence, reinforcement learning (RL) has increasingly been applied to plasma control on tokamak devices. However, owing to the generally high cost of training RL agents on first-principles physical models and the difficulty of ensuring that simulation results match tokamak experiments, feedback-control experiments using RL for plasma kinetic parameters on tokamaks remain scarce. To address this challenge, this work proposes a novel design scheme built around a low-computational-cost environment, derived from EAST modulation-experiment data through system identification. Data preprocessing methods were employed to handle the measurement noise and actuator limitations encountered in experiments. During training, the agent collected data across multiple plasma scenarios to update its policy, and the performance of the RL controller was fine-tuned by adjusting the weight of the integral-of-error term in the reward function. The effectiveness and robustness of the proposed design were then validated in the simulated environment. Finally, the scheme was successfully implemented on EAST, effectively tracking the poloidal beta (β_p) target with the 4.6 GHz lower hybrid wave (LHW) system as the actuator, and providing a reference for implementing RL-based feedback control on tokamaks.
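The abstract mentions tuning the weight of the integral-of-error term in the reward function. As a minimal sketch of what such a tracking reward might look like (the function name, weight value, and time step below are illustrative assumptions, not the authors' actual formulation):

```python
import numpy as np

def tracking_reward(error_history, dt=0.001, w_int=0.1):
    """Illustrative tracking reward for a target-following RL controller.

    error_history : sequence of tracking errors (target - measurement) up to
                    the current step.
    dt            : control time step (hypothetical value).
    w_int         : tunable weight on the integral-of-error penalty; the
                    abstract describes adjusting this weight to fine-tune
                    controller performance.
    """
    e = error_history[-1]                    # instantaneous tracking error
    integral = np.sum(error_history) * dt    # accumulated (integral) error
    return -abs(e) - w_int * abs(integral)   # penalize both terms
```

Increasing `w_int` pushes the agent to eliminate steady-state offset, analogous to the integral gain in a PID controller, at the cost of possible overshoot.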
Funder
Comprehensive Research Facility for Fusion Technology Program of China
National Natural Science Foundation of China
Provincial and ministerial joint funding for the Postdoctoral International Exchange Program