Physics-informed neural networks with residual/gradient-based adaptive sampling methods for solving partial differential equations with sharp solutions
Published: 2023-07
Volume: 44, Issue: 7
Pages: 1069-1084
ISSN: 0253-4827
Container-title: Applied Mathematics and Mechanics
Short-container-title: Appl. Math. Mech.-Engl. Ed.
Language: en
Author: Mao Zhiping, Meng Xuhui
Abstract
We consider solving forward and inverse partial differential equations (PDEs) that have sharp solutions with physics-informed neural networks (PINNs). In particular, to better capture the sharpness of the solution, we propose adaptive sampling methods (ASMs) based on the residual and the gradient of the solution. We first present a residual-only ASM, denoted by ASM I: we train the neural network with a small number of residual points, divide the computational domain into a certain number of sub-domains, locate the sub-domain with the largest mean absolute value of the residual, and add the points with the largest absolute residuals in that sub-domain as new residual points. We further develop a second type of ASM (denoted by ASM II) based on both the residual and the gradient of the solution, since the residual alone may not efficiently capture the sharpness of the solution. The procedure of ASM II is almost the same as that of ASM I, except that the new residual points must have not only large residuals but also large gradients. To demonstrate the effectiveness of the present methods, we use both ASM I and ASM II to solve a number of PDEs, including the Burgers equation, the compressible Euler equations, the Poisson equation over an L-shaped domain, and the high-dimensional Poisson equation. The numerical results show that sharp solutions can be well approximated by either ASM I or ASM II, and both methods deliver much more accurate solutions than the original PINNs with the same number of residual points. Moreover, the ASM II algorithm outperforms ASM I in terms of accuracy, efficiency, and stability, which indicates that the gradient of the solution improves the stability and efficiency of the adaptive sampling procedure as well as the accuracy of the solution.
Furthermore, we apply a similar adaptive sampling technique to the data points of the boundary conditions (BCs) when the sharpness of the solution is near the boundary. The results for the L-shaped Poisson problem indicate that the present method significantly improves efficiency, stability, and accuracy.
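The sampling loop described in the abstract (ASM I: pick the sub-domain with the largest mean absolute residual, then keep the candidates with the largest residuals; ASM II: additionally require large solution gradients) can be sketched in one dimension. This is a minimal illustration, not the authors' implementation: the functions `residual` and `grad_u` below are toy stand-ins with a sharp feature near x = 0.5, whereas in the paper both quantities come from automatic differentiation of the trained PINN.

```python
import numpy as np

# Hypothetical stand-ins for the PINN residual |r(x)| and solution gradient
# |du/dx|; both are sharply peaked near x = 0.5 to mimic a sharp solution.
def residual(x):
    return np.exp(-((x - 0.5) ** 2) / 1e-3)

def grad_u(x):
    return np.abs(x - 0.5) * np.exp(-((x - 0.5) ** 2) / 1e-3)

def adaptive_sample(points, n_sub=10, n_new=20, use_gradient=False):
    """One refinement round of ASM I (residual only) or ASM II (+ gradient)."""
    edges = np.linspace(0.0, 1.0, n_sub + 1)
    # Mean absolute residual on each sub-domain (estimated on a fine grid).
    means = [residual(np.linspace(a, b, 100)).mean()
             for a, b in zip(edges[:-1], edges[1:])]
    k = int(np.argmax(means))  # sub-domain with the largest mean residual
    # Draw candidates in that sub-domain and score them.
    cand = np.random.uniform(edges[k], edges[k + 1], 10 * n_new)
    score = residual(cand)
    if use_gradient:           # ASM II: also weight by the solution gradient
        score = score * grad_u(cand)
    new_pts = cand[np.argsort(score)[-n_new:]]  # keep the top-scoring points
    return np.concatenate([points, new_pts])

pts = np.random.uniform(0.0, 1.0, 50)   # initial small set of residual points
pts = adaptive_sample(pts, use_gradient=True)  # one ASM II round: 70 points
```

In a full run, each refinement round would be followed by retraining the network on the enlarged residual-point set, and the rounds repeat until the residual falls below a tolerance or a point budget is reached.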
Publisher
Springer Science and Business Media LLC
Subject
Applied Mathematics, Mechanical Engineering, Mechanics of Materials
Cited by: 16 articles.