Affiliations:
1. PES University, Bangalore, India
2. Centre for Artificial Intelligence and Machine Learning, Institute for Development and Research in Banking Technology, Castle Hills Road #1, Masab Tank, Hyderabad 500076, India
Abstract
Literature abounds with various statistical and machine learning techniques for stock market forecasting. However, Reinforcement Learning (RL) is conspicuous by its absence in this field, remaining little explored despite its potential to address the dynamic and uncertain nature of the stock market. In a first-of-its-kind study, this research bridges this gap by forecasting stock prices using deep RL techniques in both the static and streaming contexts. In the static context, we employed three deep RL algorithms for forecasting stock prices, namely Deep Deterministic Policy Gradient (DDPG), Proximal Policy Optimisation (PPO) and Recurrent Deterministic Policy Gradient (RDPG), and compared their performance with Multi-Layer Perceptron (MLP), Support Vector Regression (SVR) and General Regression Neural Network (GRNN). In addition, we proposed a generic streaming analytics-based forecasting approach that leverages the real-time processing capabilities of Spark Streaming for all six methods. This approach employs a sliding window technique for real-time forecasting, or nowcasting, with the above-mentioned algorithms. We demonstrated the effectiveness of the proposed approach on the daily closing prices of four different financial time series datasets as well as the Mackey–Glass time series, a benchmark chaotic time series dataset. We evaluated the performance of these methods using three metrics: Symmetric Mean Absolute Percentage Error (SMAPE), the Directional Symmetry statistic (DS) and Theil's U coefficient. The results are promising for DDPG in the static context, while GRNN turned out to be the best in the streaming context. We performed the Diebold–Mariano (DM) test to assess the statistical significance of the best-performing models.
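For concreteness, the sketch below shows one common way the three evaluation metrics named in the abstract (SMAPE, DS and Theil's U) can be computed in Python. The abstract does not specify the exact variants used (for example, which form of Theil's U), so this is an illustrative assumption rather than the authors' implementation.

import numpy as np

def smape(actual, forecast):
    # Symmetric Mean Absolute Percentage Error, expressed in percent.
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2.0
    return 100.0 * np.mean(np.abs(forecast - actual) / denom)

def directional_symmetry(actual, forecast):
    # Percentage of time steps where the forecast moves in the same
    # direction as the actual series (Directional Symmetry statistic).
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    same_direction = (np.diff(actual) * np.diff(forecast)) > 0
    return 100.0 * np.mean(same_direction)

def theils_u(actual, forecast):
    # Theil's U1 coefficient (0 means a perfect forecast); the paper may
    # use a different variant of Theil's U.
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    rmse = np.sqrt(np.mean((forecast - actual) ** 2))
    return rmse / (np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(forecast ** 2)))

if __name__ == "__main__":
    # Hypothetical daily closing prices and model forecasts, for illustration only.
    y_true = [100.0, 101.5, 100.8, 102.3, 103.0]
    y_pred = [100.2, 101.0, 101.1, 102.0, 103.4]
    print(f"SMAPE     = {smape(y_true, y_pred):.3f}%")
    print(f"DS        = {directional_symmetry(y_true, y_pred):.1f}%")
    print(f"Theil's U = {theils_u(y_true, y_pred):.4f}")

Lower SMAPE and Theil's U indicate more accurate forecasts, while a higher DS indicates better tracking of the direction of price movements, which is how the abstract's comparison of the six methods should be read.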
Publisher
World Scientific Pub Co Pte Ltd