Affiliation:
1. Purdue University, USA
Abstract
This chapter provides a comprehensive introduction to a proposed self-adaptive ReLU neural network method. The goal is to design a nearly minimal neural network architecture that achieves a prescribed accuracy for a given task in scientific machine learning, such as approximating a function or the solution of a partial differential equation. Starting with a small one-hidden-layer neural network, the method adaptively enhances the network by adding neurons to the current hidden layer or to a new hidden layer, based on the accuracy of the current approximation. In addition, the method provides a natural process for obtaining a good initialization when training the current network. Moreover, the initialization of newly added neurons at each adaptive step is discussed in detail.
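To make the warm-start idea above concrete — newly added neurons can be initialized so that the enlarged network reproduces the current approximation exactly — here is a minimal NumPy sketch of widening a one-hidden-layer ReLU network. The class and function names are hypothetical illustrations, not the chapter's actual algorithm:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class ShallowReLU:
    """One-hidden-layer ReLU network: u(x) = c . relu(W x + b)."""
    def __init__(self, in_dim, width, rng):
        self.W = rng.standard_normal((width, in_dim))
        self.b = rng.standard_normal(width)
        self.c = rng.standard_normal(width)

    def __call__(self, x):
        # x has shape (n_samples, in_dim)
        return relu(x @ self.W.T + self.b) @ self.c

    def add_neurons(self, k, rng):
        """Widen the hidden layer by k neurons.  The new output weights
        are set to zero, so the enlarged network reproduces the current
        approximation exactly and training resumes from a warm start."""
        in_dim = self.W.shape[1]
        self.W = np.vstack([self.W, rng.standard_normal((k, in_dim))])
        self.b = np.concatenate([self.b, rng.standard_normal(k)])
        self.c = np.concatenate([self.c, np.zeros(k)])

rng = np.random.default_rng(0)
net = ShallowReLU(in_dim=1, width=4, rng=rng)
x = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
before = net(x)
net.add_neurons(3, rng)          # adaptive widening step
after = net(x)
assert np.allclose(before, after)  # output is unchanged after widening
```

Because the added neurons contribute nothing initially, the enlarged network starts training from the previously trained approximation rather than from scratch.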