Affiliation:
1. School of Engineering, Deakin University, Waurn Ponds, Australia
Abstract
Traffic state estimation is an essential component of Intelligent Transportation Systems (ITS) designed to alleviate traffic congestion. Because traffic data encodes intricate information and is influenced by many factors, researchers have increasingly turned to state-of-the-art deep learning forecasting models in recent years. However, extracting long-range correlations from large-scale traffic data sequences requires a more complex and robust model. To overcome the weaknesses of conventional deep learning models, this work leverages the strengths of transformers for time-series forecasting with transport data. By exploiting the transformer's ability to capture long-term trends and dynamic dependencies, the proposed model improves deep learning prediction performance on real datasets. The findings indicate that the transformer-based model performs well in forecasting long-term traffic patterns and characteristics from large quantities of data. In this paper, the Spatio-Temporal Autoencoder Transformer (STAT) model is compared against conventional hybrid deep learning models on real-world datasets. The multi-head attention-based transformer model outperformed all comparative approaches on large-scale data across the evaluated error criteria.
Publisher
Research Square Platform LLC