Abstract
The universal approximation theorem proves the existence of optimal neural networks representable as combinations of piecewise functions. However, deriving this optimal solution from the trained parameters of a neural network remains a challenging problem. This study proposes a novel strategy for constructing an approximator of an arbitrary function, starting from a presumed optimal piecewise solution. The proposed approximation employs the anti-derivatives of a Fourier series expansion of the presumed piecewise function, which enables the simultaneous approximation of an arbitrary function and its anti-derivatives. Systematic experiments demonstrate the merits of the proposed anti-derivative-based approximator, such as the ability to solve differential equations and to enhance the capabilities of neural networks. Furthermore, the anti-derivatives approximator allows the activation profiles within neural networks to be optimized, introducing a novel approach for finding unconventional activation profiles specialized for a given dataset.
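
As a rough illustration of the idea described above, the following Python snippet is a minimal sketch, not the authors' implementation: it fits a truncated Fourier series to a placeholder piecewise (square-wave-like) target function and then evaluates an anti-derivative of the approximation analytically by integrating the series term by term. The function names, the number of terms, and the target function are all illustrative assumptions.

import numpy as np

def fourier_coeffs(f, n_terms, n_samples=2048):
    # Estimate Fourier coefficients of f on [0, 2*pi) by simple quadrature.
    x = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    y = f(x)
    a0 = y.mean()  # constant (mean) term
    a = np.array([2.0 * np.mean(y * np.cos(k * x)) for k in range(1, n_terms + 1)])
    b = np.array([2.0 * np.mean(y * np.sin(k * x)) for k in range(1, n_terms + 1)])
    return a0, a, b

def series(x, a0, a, b):
    # Evaluate the truncated Fourier series approximation of f at x.
    k = np.arange(1, len(a) + 1)[:, None]
    return a0 + (a[:, None] * np.cos(k * x) + b[:, None] * np.sin(k * x)).sum(axis=0)

def antiderivative(x, a0, a, b):
    # Term-wise anti-derivative of the same series (integration constant set to 0):
    # the anti-derivative of a*cos(kx) is a*sin(kx)/k, and of b*sin(kx) is -b*cos(kx)/k.
    k = np.arange(1, len(a) + 1)[:, None]
    return a0 * x + ((a[:, None] * np.sin(k * x) - b[:, None] * np.cos(k * x)) / k).sum(axis=0)

if __name__ == "__main__":
    # Placeholder piecewise target (square wave); any target function could be used here.
    f = lambda x: np.where(np.sin(x) >= 0.0, 1.0, -1.0)
    a0, a, b = fourier_coeffs(f, n_terms=50)
    x = np.linspace(0.0, 2.0 * np.pi, 5)
    print(series(x, a0, a, b))          # approximates f(x)
    print(antiderivative(x, a0, a, b))  # approximates an anti-derivative of f

The point of the sketch is that once the series coefficients are fitted, both the target function and its anti-derivatives are available in closed form from the same coefficients, which is the simultaneous-approximation property highlighted in the abstract.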
Publisher
Research Square Platform LLC