Affiliation:
1. Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, California, USA
2. Global Security Computing Applications Division, Lawrence Livermore National Laboratory, Livermore, California, USA
3. Fariborz Maseeh Department of Mathematics and Statistics, Portland State University, Portland, Oregon, USA
Abstract
A common challenge in regression is that, for many problems, the degrees of freedom required for a high-quality solution also allow for overfitting. Regularization is a class of strategies that restrict the range of possible solutions so as to discourage overfitting while still enabling good solutions; different regularization strategies impose different types of restrictions. In this paper, we present a multilevel regularization strategy that constructs and trains a hierarchy of neural networks, each of which has layers that are wider versions of the previous network's layers. We draw intuition and techniques from the field of Algebraic Multigrid (AMG), traditionally used for solving linear and nonlinear systems of equations, and specifically adapt the Full Approximation Scheme (FAS) for nonlinear systems of equations to the problem of deep learning. Training through V-cycles then encourages the neural networks to build a hierarchical understanding of the problem. We refer to this approach as multilevel-in-width, to distinguish it from prior multilevel works that hierarchically alter the depth of neural networks. The resulting approach is a highly flexible framework that can be applied to a variety of layer types, which we demonstrate with both fully connected and convolutional layers. We show experimentally on PDE regression problems that our multilevel training approach is an effective regularizer, improving the generalization performance of the neural networks studied.
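For readers unfamiliar with FAS-style training, the following is a minimal, self-contained NumPy sketch of a two-level FAS V-cycle. It is illustrative only and is not the paper's implementation: the paper's restriction and prolongation operators act on the widths of actual network layers, whereas here everything is simplified to plain parameter vectors on a toy least-squares objective. The pairwise-averaging restriction, duplication prolongation, gradient-descent smoother, and all step sizes are placeholder assumptions.

```python
# Hypothetical two-level FAS/MG-OPT sketch on a toy least-squares problem.
# None of the operator choices below are from the paper; they only illustrate
# the structure: pre-smooth, tau-corrected coarse solve, correction, post-smooth.
import numpy as np

rng = np.random.default_rng(0)

# Toy fine-level regression objective: f_h(x) = 0.5 * ||A x - b||^2.
A = rng.standard_normal((40, 8))
b = rng.standard_normal(40)

def loss_grad(A, b, x):
    """Return (loss, gradient) of 0.5 * ||A x - b||^2."""
    r = A @ x - b
    return 0.5 * r @ r, A.T @ r

def restrict(v):
    """Restriction R: average adjacent pairs (a hypothetical choice)."""
    return 0.5 * (v[0::2] + v[1::2])

def prolong(v):
    """Prolongation P: duplicate each coarse entry back to the fine level."""
    return np.repeat(v, 2)

# Coarse model: the fine operator composed with prolongation, A_H = A @ P.
P = np.repeat(np.eye(4), 2, axis=0)   # 8 x 4
A_H = A @ P                           # 40 x 4

def v_cycle(x, lr=5e-3, smooth_steps=5, coarse_steps=20):
    # Pre-smoothing: a few gradient-descent steps on the fine level.
    for _ in range(smooth_steps):
        x = x - lr * loss_grad(A, b, x)[1]

    # FAS setup: the tau correction makes the modified coarse gradient at
    # restrict(x) equal the restricted fine gradient (first-order consistency).
    g_h = loss_grad(A, b, x)[1]
    x_H0 = restrict(x)
    tau = restrict(g_h) - loss_grad(A_H, b, x_H0)[1]

    # Approximate coarse solve of min_x f_H(x) + tau^T x.
    x_H = x_H0.copy()
    for _ in range(coarse_steps):
        x_H = x_H - lr * (loss_grad(A_H, b, x_H)[1] + tau)

    # Prolong the coarse correction; a backtracking line search (as in MG/OPT)
    # guards against corrections that would increase the fine loss.
    e = prolong(x_H - x_H0)
    step, f0 = 1.0, loss_grad(A, b, x)[0]
    while step > 1e-4 and loss_grad(A, b, x + step * e)[0] > f0:
        step *= 0.5
    x = x + step * e

    # Post-smoothing.
    for _ in range(smooth_steps):
        x = x - lr * loss_grad(A, b, x)[1]
    return x

x = np.zeros(8)
for cycle in range(10):
    x = v_cycle(x)
    print(f"cycle {cycle}: fine loss = {loss_grad(A, b, x)[0]:.4f}")
```

The tau term is the essence of FAS: it shifts the coarse objective so that, at the restricted point, its gradient agrees with the restricted fine gradient, allowing the coarse (narrower) problem to produce a correction that is meaningful on the fine (wider) level.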
Funder
Lawrence Livermore National Laboratory
Subject
Applied Mathematics, Algebra and Number Theory
Cited by
1 article.