Affiliation:
1. Department of Computing, Faculty of Engineering, Imperial College London, London SW7 2AZ, UK
Abstract
We present a hierarchical reinforcement learning (RL) architecture that employs multiple low-level agents to act in the trading environment, i.e., the market. The highest-level agent selects from among a pool of specialized agents, and the selected agent then decides when to buy or sell a single asset over a period of time; the length of this period can vary according to a termination function. We hypothesized that, because of distinct market regimes, a single agent cannot learn effectively from such heterogeneous data, and that multiple agents, each specializing in a subset of the data, will perform better. We use k-means clustering to partition the data and train each agent on a different cluster. Partitioning the input data also benefits model-based RL (MBRL), where the models can be heterogeneous. We further add two simple decision-making models to the set of low-level agents, diversifying the pool of available agents and thus increasing overall behavioral flexibility. We perform multiple experiments demonstrating the strengths of the hierarchical approach and test various prediction models at both levels. We also use a risk-based reward at the high level, which transforms the overall problem into a risk-return optimization; this reward yields a significant reduction in risk while only minimally reducing profits. Overall, the hierarchical approach shows significant promise, especially when the pool of low-level agents is highly diverse. Such a system is clearly useful for human-devised strategies, which could be incorporated in a sound manner into larger, more powerful automatic systems.
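The regime-partitioning step described in the abstract can be sketched as follows. This is a minimal illustration only: it uses a hand-rolled k-means and hypothetical per-window features (volatility, mean return); the paper's actual features, window construction, and cluster count are not specified here.

```python
import numpy as np

def kmeans_partition(features, k=3, iters=50, seed=0):
    """Minimal k-means: assign each window of market features to a cluster."""
    rng = np.random.default_rng(seed)
    # initialize centroids from k distinct samples
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # distance of every sample to every centroid, shape (n, k)
        d = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned samples
        for j in range(k):
            if (labels == j).any():
                centroids[j] = features[labels == j].mean(axis=0)
    return labels

# toy "regime" features per window: (volatility, mean return) -- hypothetical
rng = np.random.default_rng(1)
features = np.vstack([
    rng.normal([0.01, 0.001], 0.002, (40, 2)),   # calm regime
    rng.normal([0.05, -0.004], 0.002, (40, 2)),  # volatile regime
    rng.normal([0.02, 0.006], 0.002, (40, 2)),   # trending regime
])
labels = kmeans_partition(features, k=3)
# each low-level agent would then be trained only on its own cluster's windows
clusters = {j: features[labels == j] for j in range(3)}
```

In the architecture summarized above, each cluster would feed one specialized low-level agent, while the high-level agent learns to pick among them at run time.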
Funder
EPSRC Centre for Doctoral Training in High Performance Embedded and Distributed Systems
Cited by: 1 article.
1. HIT: Solving Partial Index Tracking via Hierarchical Reinforcement Learning. 2024 IEEE 40th International Conference on Data Engineering (ICDE), 2024-05-13.