Disentangling feature and lazy training in deep neural networks

Authors

Mario Geiger, Stefano Spigler, Arthur Jacot, Matthieu Wyart

Abstract

Two distinct limits for deep learning have been derived as the network width h → ∞, depending on how the weights of the last layer scale with h. In the neural tangent kernel (NTK) limit, the dynamics becomes linear in the weights and is described by a frozen kernel Θ (the NTK). By contrast, in the mean-field limit, the dynamics can be expressed in terms of the distribution of the parameters associated with a neuron, which follows a partial differential equation. In this work we consider deep networks whose last-layer weights scale as αh^{−1/2} at initialization. By varying α and h, we probe the crossover between the two limits. We observe the two previously identified regimes of 'lazy training' and 'feature training'. In the lazy-training regime, the dynamics is almost linear and the NTK barely changes after initialization. The feature-training regime includes the mean-field formulation as a limiting case and is characterized by a kernel that evolves in time, and thus learns some features. We perform numerical experiments on MNIST, Fashion-MNIST, EMNIST and CIFAR10 and consider various architectures. We find that: (i) the two regimes are separated by an α* that scales as 1/√h. (ii) Network architecture and data structure play an important role in determining which regime is better: in our tests, fully-connected networks generally perform better in the lazy-training regime, unlike convolutional networks. (iii) In both regimes, the fluctuations δF induced on the learned function by the initial conditions decay as δF ∼ 1/√h, leading to a performance that increases with h. The same improvement can also be obtained at an intermediate width by ensemble-averaging several networks that are trained independently. (iv) In the feature-training regime we identify a time scale t₁ ∼ √h α, such that for t ≪ t₁ the dynamics is linear. At t ∼ t₁, the output has grown by a magnitude √h and the changes of the tangent kernel ‖ΔΘ‖ become significant. Ultimately, it follows ‖ΔΘ‖ ∼ (√h α)^{−a} for ReLU and Softplus activation functions, with a < 2 and a → 2 as the depth grows. We provide scaling arguments supporting these findings.
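The experiment described in the abstract can be sketched in a few lines of code. The snippet below is a minimal illustration under our own assumptions, not the authors' code or training protocol: it uses a one-hidden-layer ReLU network whose readout weights are drawn with standard deviation αh^{−1/2}, toy random data, and a brute-force empirical tangent kernel; the helper names alpha_scaled_net and empirical_ntk are ours. It reports the change of the tangent kernel ‖ΔΘ‖ between initialization and the end of training, the quantity the abstract uses to separate lazy training (kernel barely moves) from feature training (kernel evolves).

```python
# Minimal sketch (PyTorch), not the authors' code: width-h ReLU network with
# last-layer weights of scale alpha * h**-0.5, plus an empirical tangent kernel
# used to track ||ΔΘ|| over training.
import torch
import torch.nn as nn


def alpha_scaled_net(d, h, alpha):
    """One-hidden-layer network with readout weights of scale alpha / sqrt(h) at init."""
    net = nn.Sequential(nn.Linear(d, h), nn.ReLU(), nn.Linear(h, 1, bias=False))
    nn.init.normal_(net[2].weight, std=alpha / h ** 0.5)
    return net


def empirical_ntk(net, x):
    """Theta_ij = grad_w f(x_i) . grad_w f(x_j), computed one sample at a time."""
    grads = []
    for xi in x:
        net.zero_grad()
        net(xi.unsqueeze(0)).sum().backward()
        grads.append(torch.cat([p.grad.flatten().clone() for p in net.parameters()]))
    g = torch.stack(grads)
    return g @ g.T


torch.manual_seed(0)
d, h, alpha = 10, 512, 0.1          # vary h and alpha to probe the crossover
x = torch.randn(64, d)
y = torch.sign(torch.randn(64, 1))  # toy binary targets

net = alpha_scaled_net(d, h, alpha)
theta_init = empirical_ntk(net, x)

optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
for _ in range(500):
    optimizer.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    optimizer.step()

delta = ((empirical_ntk(net, x) - theta_init).norm() / theta_init.norm()).item()
print(f"relative ||ΔΘ|| after training: {delta:.3e}")
# Small values indicate lazy training (the kernel barely changes after initialization);
# values of order one or larger indicate feature training, where the kernel evolves.
```

Sweeping α at fixed h (or h at fixed α) in such a sketch is one way to visualize the crossover at α* described in point (i) of the abstract.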

Publisher

IOP Publishing

Subject

Statistics, Probability and Uncertainty; Statistics and Probability; Statistical and Nonlinear Physics

References (34 articles)

1. High-dimensional dynamics of generalization error in neural networks;Advani,2017

2. A convergence theory for deep learning via over-parameterization;Allen-Zhu,2018

3. On exact computation with an infinitely wide neural net;Arora,2019

4. Minnorm training: an algorithm for training overcomplete deep neural networks;Bansal,2018

5. Comparing dynamics: deep neural networks versus glassy systems;Baity-Jesi,2018

Cited by 22 articles.

1. Representations and generalization in artificial and brain neural networks;Proceedings of the National Academy of Sciences;2024-06-24

2. Low-power multimode-fiber projector outperforms shallow-neural-network classifiers;Physical Review Applied;2024-06-12

3. Infinite‐width limit of deep linear neural networks;Communications on Pure and Applied Mathematics;2024-05-06

4. Fading memory as inductive bias in residual recurrent networks;Neural Networks;2024-05

5. On the different regimes of stochastic gradient descent;Proceedings of the National Academy of Sciences;2024-02-20
