Archetypal landscapes for deep neural networks

Authors:

Philipp C. Verpoort, Alpha A. Lee, David J. Wales

Abstract

The predictive capabilities of deep neural networks (DNNs) continue to evolve to increasingly impressive levels. However, it is still unclear how training procedures for DNNs succeed in finding parameters that produce good results for such high-dimensional and nonconvex loss functions. In particular, we wish to understand why simple optimization schemes, such as stochastic gradient descent, do not end up trapped in local minima with high loss values that would not yield useful predictions. We explain the optimizability of DNNs by characterizing the local minima and transition states of the loss-function landscape (LFL) along with their connectivity. We show that the LFL of a DNN in the shallow network or data-abundant limit is funneled, and thus easy to optimize. Crucially, in the opposite low-data/deep limit, although the number of minima increases, the landscape is characterized by many minima with similar loss values separated by low barriers. This organization is different from the hierarchical landscapes of structural glass formers and explains why minimization procedures commonly employed by the machine-learning community can navigate the LFL successfully and reach low-lying solutions.
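To make the abstract's picture concrete, here is a minimal, illustrative sketch (not the authors' method) of stochastic gradient descent on a toy one-dimensional nonconvex loss: shallow cosine ripples superimposed on a double well stand in for "many minima with similar loss values separated by low barriers". The function, its gradient, the learning rate, and the noise scale are all invented for illustration.

```python
import math
import random

def loss(w):
    # Toy nonconvex loss: a double well with shallow cosine ripples,
    # loosely mimicking many similar-valued minima separated by low barriers.
    return 0.05 * math.cos(6 * w) + (w ** 2 - 1) ** 2 / 8

def grad(w):
    # Analytic derivative of loss(w).
    return -0.3 * math.sin(6 * w) + (w ** 2 - 1) * w / 2

def sgd(w, lr=0.05, noise=0.1, steps=2000, seed=0):
    # Gradient descent with additive Gaussian noise standing in for
    # minibatch stochasticity; the seed makes the run reproducible.
    rng = random.Random(seed)
    for _ in range(steps):
        w -= lr * (grad(w) + noise * rng.gauss(0.0, 1.0))
    return w

w0 = 0.05                      # start near the central barrier
w_final = sgd(w0)
print(round(w_final, 3), round(loss(w_final), 3))
```

Because the ripple barriers are low relative to the overall double-well slope, the noisy dynamics descend to a low-lying minimum rather than stalling near the start; this is only a cartoon of the funneled/low-barrier organization the paper characterizes for actual DNN loss landscapes.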

Publisher

Proceedings of the National Academy of Sciences

Subject

Multidisciplinary


Cited by 13 articles.

1. Explainable Gaussian processes: a loss landscape perspective;Machine Learning: Science and Technology;2024-07-23

2. Insights into machine learning models from chemical physics: an energy landscapes approach (EL for ML);Digital Discovery;2024

3. Microscopic image recognition of diatoms based on deep learning;Journal of Phycology;2023-11-23

4. Exploring Gradient Oscillation in Deep Neural Network Training;2023 59th Annual Allerton Conference on Communication, Control, and Computing (Allerton);2023-09-26

5. Data efficiency and extrapolation trends in neural network interatomic potentials;Machine Learning: Science and Technology;2023-08-25
