Affiliation:
1. Chengdu University of Technology, Chengdu, Sichuan, China
Abstract
Owing to the rapid development of hardware, the analytical and algorithmic capabilities of computers have grown substantially, giving machine learning an increasingly important role in quantitative investment. As a result, replacing traditional human traders with trained automated investment algorithms has become a hot topic in recent years. Most machine learning algorithms used in today's stock trading market are supervised learning algorithms, which still cannot objectively analyse the market or find the optimal trading policy on their own. To address the two major challenges of environment awareness and automated decision-making, this study uses three core algorithms, PPO, A2C, and SAC, to build an ensemble automated trading strategy within a deep reinforcement learning framework. The ensemble strategy combines the respective advantages of the three algorithms to make the underlying reinforcement learning agents more adaptive, and, to avoid consuming a large amount of memory when training the networks, the study uses PCA to compress the dimensionality of the stock feature vector. We test our algorithm on 40 A-share stocks with sufficient liquidity and compare it with different trading strategies. The results show that the proposed ensemble strategy outperforms the three independent algorithms and two selected baselines, achieving a cumulative return of around 70%.
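The abstract mentions using PCA to compress the stock feature vector before training. A minimal sketch of that preprocessing step is shown below, implemented directly with an SVD in NumPy; the dimensions (40 stocks, a hypothetical 60-dimensional feature vector reduced to 10 components) are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Hypothetical feature matrix: 40 stocks, each described by a
# 60-dimensional feature vector (prices, volumes, indicators, ...).
rng = np.random.default_rng(0)
features = rng.normal(size=(40, 60))

def pca_compress(X, n_components):
    """Project the rows of X onto the top principal components via SVD."""
    X_centered = X - X.mean(axis=0)
    # Economy-size SVD: rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_components]          # shape (k, d)
    compressed = X_centered @ components.T  # shape (n, k)
    # Fraction of total variance retained by the first k components.
    explained = (S[:n_components] ** 2).sum() / (S ** 2).sum()
    return compressed, explained

compressed, explained = pca_compress(features, n_components=10)
print(compressed.shape)  # (40, 10)
```

In a trading pipeline, the same projection learned on the training window would also be applied to later observations, so the reinforcement learning agents see a fixed, lower-dimensional state.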
Subject
Computer Science Applications, Software
Cited by: 5 articles.