A Comparative Evaluation of Self-Attention Mechanism with ConvLSTM Model for Global Aerosol Time Series Forecasting
Published: 2023-04-05
Journal: Mathematics
Volume: 11, Issue: 7, Page: 1744
ISSN: 2227-7390
Language: en
Authors:
Radivojević Dušan S.1, Lazović Ivan M.1, Mirkov Nikola S.1, Ramadani Uzahir R.1, Nikezić Dušan P.1
Affiliation:
1. Vinča Institute of Nuclear Sciences-National Institute of the Republic of Serbia, University of Belgrade, 11351 Belgrade, Serbia
Abstract
The attention mechanism in natural language processing and the self-attention mechanism in vision transformers have improved many deep learning models. A self-attention mechanism was implemented on top of the previously developed ConvLSTM sequence-to-one model in order to perform a comparative evaluation with statistical testing. First, the new ConvLSTM sequence-to-one model with a self-attention mechanism was developed; the self-attention layer was then removed to enable the comparison. Hyperparameter optimization was conducted by grid search for integer- and string-type parameters and by particle swarm optimization for float-type parameters. A cross-validation technique with a predefined train-validation-test split ratio was used for more reliable model evaluation. Both models, with and without the self-attention layer, passed the defined evaluation criteria, meaning that both are able to generate an image of the global aerosol thickness and to find patterns of change in the time domain. The model obtained by an ablation study on the self-attention layer achieved better Root Mean Square Error and Euclidean Distance results than the developed ConvLSTM-SA model. As part of the statistical testing, a Kruskal–Wallis H test was performed, since the data were found not to follow a normal distribution; the results showed that both models, with and without the SA layer, predict images whose pixel-level patterns are similar to the original dataset. However, the model without the SA layer was more similar to the original dataset, especially in the time domain at the pixel level. Based on the comparative evaluation with statistical testing, it was concluded that the developed ConvLSTM-SA model predicts better without the SA layer.
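The statistical procedure described in the abstract (a normality check followed by a non-parametric Kruskal–Wallis H test on pixel-level values) can be sketched with SciPy. The arrays, noise levels, and significance threshold below are illustrative assumptions, not values from the paper:

```python
# Illustrative sketch (not the authors' code): comparing per-pixel time
# series from two model variants against the original data, as the
# abstract describes. All data below are synthetic.
import numpy as np
from scipy.stats import kruskal, shapiro

rng = np.random.default_rng(42)

# Synthetic aerosol-thickness time series for a single pixel.
original = rng.gamma(shape=2.0, scale=0.1, size=200)       # skewed, non-normal
model_no_sa = original + rng.normal(0.0, 0.01, size=200)   # close to original
model_sa = original + rng.normal(0.05, 0.03, size=200)     # slightly shifted

# Shapiro-Wilk normality check motivating the non-parametric test.
_, p_norm = shapiro(original)
use_nonparametric = p_norm < 0.05

# Kruskal-Wallis H test: do the three samples come from the same distribution?
h_stat, p_value = kruskal(original, model_no_sa, model_sa)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```

In the paper this comparison is made at the pixel level across the whole image sequence; the sketch shows only the mechanics of the test for one pixel.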
Funder
Ministry of Education, Science and Technological Development of the Republic of Serbia
Subject
General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)
Cited by
3 articles.