On the latent dimension of deep autoencoders for reduced order modeling of PDEs parametrized by random fields
Published: 2024-08-28
Issue: 5
Volume: 50
ISSN: 1019-7168
Container-title: Advances in Computational Mathematics
Short-container-title: Adv Comput Math
Language: en
Author: Franco Nicola Rares, Fraulin Daniel, Manzoni Andrea, Zunino Paolo
Abstract
Deep Learning is having a remarkable impact on the design of Reduced Order Models (ROMs) for Partial Differential Equations (PDEs), where it is exploited as a powerful tool for tackling complex problems for which classical methods might fail. In this respect, deep autoencoders play a fundamental role, as they provide an extremely flexible tool for reducing the dimensionality of a given problem by leveraging the nonlinear capabilities of neural networks. Indeed, starting from this paradigm, several successful approaches have already been developed, which are here referred to as Deep Learning-based ROMs (DL-ROMs). Nevertheless, when it comes to stochastic problems parametrized by random fields, the current understanding of DL-ROMs is mostly based on empirical evidence: in fact, their theoretical analysis is currently limited to the case of PDEs depending on a finite number of (deterministic) parameters. The purpose of this work is to extend the existing literature by providing theoretical insights into the use of DL-ROMs in the presence of stochasticity generated by random fields. In particular, we derive explicit error bounds that can guide domain practitioners when choosing the latent dimension of deep autoencoders. We evaluate the practical usefulness of our theory by means of numerical experiments, showing how our analysis can significantly impact the performance of DL-ROMs.
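The abstract describes deep autoencoders as the compression mechanism behind DL-ROMs, with the latent dimension as the key design choice that the paper's error bounds are meant to guide. The following PyTorch sketch is not the authors' implementation; the class name DeepAutoencoder and the parameters n_h, latent_dim, and width are illustrative assumptions. It only shows the generic encoder-decoder structure in which that latent dimension appears.

# Minimal sketch (assumed architecture, not the paper's method) of a deep
# autoencoder as used in DL-ROMs: high-fidelity PDE snapshots of dimension
# n_h are compressed to a latent dimension latent_dim and reconstructed.
import torch
import torch.nn as nn

class DeepAutoencoder(nn.Module):
    def __init__(self, n_h: int, latent_dim: int, width: int = 256):
        super().__init__()
        # Encoder: snapshot vector -> latent code of size latent_dim
        self.encoder = nn.Sequential(
            nn.Linear(n_h, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, latent_dim),
        )
        # Decoder: latent code -> reconstructed snapshot
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, n_h),
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(u))

if __name__ == "__main__":
    n_h, latent_dim = 1024, 16            # hypothetical mesh size and latent dimension
    model = DeepAutoencoder(n_h, latent_dim)
    snapshots = torch.randn(32, n_h)      # stand-in for PDE solution snapshots
    recon = model(snapshots)
    print(nn.functional.mse_loss(recon, snapshots).item())

In a full DL-ROM pipeline the autoencoder would be trained on solution snapshots of the parametrized PDE, and a further network would map the (random-field) parameters to the latent code; only the dimensionality-reduction step, whose latent dimension the paper's bounds address, is sketched here.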
Funder
H2020 Health; Ministero dell’Università e della Ricerca; Politecnico di Milano
Publisher
Springer Science and Business Media LLC