Long-Term Prediction of Crack Growth Using Deep Recurrent Neural Networks and Nonlinear Regression: A Comparison Study
Published: 2022-10-18
Volume: 12, Issue: 20, Page: 10514
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Author:
Iqbal Salahuddin Muhammad,
Park Jun-Ryeol,
Jung Kyu-Il,
Lee Jun-Seoung,
Kang Dae-Ki
Abstract
Cracks in a building can potentially result in financial and life losses. Thus, it is essential to predict when crack growth will reach a certain threshold, in order to prevent possible disaster. However, long-term prediction of crack growth in newly built facilities, or in existing facilities with recently installed sensors, is challenging because only short-term crack sensor data are usually available in such facilities, whereas an accurate long-term prediction generally requires equally long or longer crack sensor data. Against this background, this research aims to make a reasonable long-term estimation of crack growth in facilities whose crack sensor data are of limited length. We show that deep recurrent neural networks such as LSTM struggle when the prediction interval is longer than the span of the observed data points. We also observe a limitation of simple linear regression when there are abrupt changes in a dataset. We conclude that segmented nonlinear regression is suitable for this problem because of its ability to split the data series into multiple segments, under the premise that the data contain sudden transitions.
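The segmented-regression idea summarized above can be sketched as follows. This is an illustrative toy, not the authors' implementation: it fits a two-segment piecewise-linear model by grid-searching the breakpoint that minimizes total squared error, on synthetic crack-width data with one abrupt slope change.

```python
import numpy as np

def fit_segmented_linear(t, y):
    """Fit a two-segment linear model by exhaustively trying each
    candidate breakpoint and keeping the one with the lowest total
    sum of squared errors (SSE). Illustrative sketch only."""
    best = None
    for k in range(2, len(t) - 2):  # candidate breakpoint indices
        # Independent least-squares lines on each side of the breakpoint.
        a1, b1 = np.polyfit(t[:k], y[:k], 1)
        a2, b2 = np.polyfit(t[k:], y[k:], 1)
        sse = (np.sum((a1 * t[:k] + b1 - y[:k]) ** 2)
               + np.sum((a2 * t[k:] + b2 - y[k:]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, k, (a1, b1), (a2, b2))
    return best

# Hypothetical crack-width series with an abrupt slope change at t = 50.
t = np.arange(100, dtype=float)
y = np.where(t < 50, 0.01 * t, 0.5 + 0.05 * (t - 50))
sse, k, seg1, seg2 = fit_segmented_linear(t, y)
```

A single global line would average the two regimes and extrapolate poorly, which mirrors the paper's observation that simple linear regression breaks down under abrupt changes; fitting each segment separately recovers both slopes.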
Funder
National Research Foundation of Korea
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science