Abstract
The levelised cost of energy of wave energy converters (WECs) is not yet competitive with that of fossil fuel-powered stations. To improve the feasibility of wave energy, it is necessary to develop effective control strategies that maximise energy absorption in mild sea states, whilst limiting motions in high waves. Due to their model-based nature, state-of-the-art control schemes struggle to deal with model uncertainties, adapt to changes in the system dynamics over time, and provide real-time centralised control for large arrays of WECs. Here, an alternative solution is introduced to address these challenges, applying deep reinforcement learning (DRL) to the control of WECs for the first time. A DRL agent is initialised from data collected in multiple sea states under linear model predictive control in a linear simulation environment. The agent outperforms model predictive control for high wave heights and periods, but underperforms close to the resonant period of the WEC. The computational cost of DRL at deployment time is also much lower, since the computational effort is shifted from deployment to training. This provides confidence in the application of DRL to large arrays of WECs, enabling economies of scale. Additionally, model-free reinforcement learning can autonomously adapt to changes in the system dynamics, enabling fault-tolerant control.
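The initialisation step described above (warm-starting a DRL policy from data logged under model predictive control) can be illustrated with a minimal behaviour-cloning sketch. All names, dimensions, and data below are hypothetical stand-ins, not taken from the paper: a linear "expert" gain `K_mpc` plays the role of the MPC controller, and a linear policy is fitted to its logged state-action pairs by least squares.

```python
# Hypothetical sketch: initialising a control policy from MPC demonstrations
# via supervised behaviour cloning, before any RL fine-tuning. The WEC model,
# state layout, and gains here are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for logged (state, action) pairs collected under MPC.
# State: [heave position, heave velocity]; action: PTO force command.
K_mpc = np.array([1.5, -0.8])                 # unknown "expert" feedback gains
states = rng.normal(size=(500, 2))            # logged WEC states across sea states
actions = states @ K_mpc + 0.01 * rng.normal(size=500)  # MPC actions + noise

# Behaviour cloning: least-squares fit of a linear policy to the demonstrations.
K_init, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(state):
    """Initial policy: linear in the state, cloned from the MPC data."""
    return state @ K_init
```

In a full DRL pipeline this cloned policy would only serve as the starting point; model-free training would then refine it, which is what allows adaptation to changes in the system dynamics without an explicit model.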
Subject
Ocean Engineering, Water Science and Technology, Civil and Structural Engineering
Cited by 44 articles.