Explaining neural scaling laws

Authors:

Yasaman Bahri1, Ethan Dyer1, Jared Kaplan2, Jaehoon Lee1, Utkarsh Sharma2

Affiliations:

1. Google DeepMind, Mountain View, CA 94043

2. Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218

Abstract

The population loss of trained deep neural networks often follows precise power-law scaling relations with either the size of the training dataset or the number of parameters in the network. We propose a theory that explains the origins of and connects these scaling laws. We identify variance-limited and resolution-limited scaling behavior for both dataset and model size, for a total of four scaling regimes. The variance-limited scaling follows simply from the existence of a well-behaved infinite data or infinite width limit, while the resolution-limited regime can be explained by positing that models are effectively resolving a smooth data manifold. In the large width limit, this can be equivalently obtained from the spectrum of certain kernels, and we present evidence that large width and large dataset resolution-limited scaling exponents are related by a duality. We exhibit all four scaling regimes in the controlled setting of large random feature and pretrained models and test the predictions empirically on a range of standard architectures and datasets. We also observe several empirical relationships between datasets and scaling exponents under modifications of task and architecture aspect ratio. Our work provides a taxonomy for classifying different scaling regimes, underscores that there can be different mechanisms driving improvements in loss, and lends insight into the microscopic origin and relationships between scaling exponents.
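The scaling relations described above take the form L(D) ∝ D^(-α_D) in dataset size (and analogously L(N) ∝ N^(-α_N) in parameter count). As a minimal illustration only, not drawn from the paper's data, the sketch below fits such a power law to hypothetical loss-versus-dataset-size measurements by linear regression in log-log space; all numerical values are made up for demonstration.

```python
# Illustrative sketch (assumed, synthetic data): estimating a power-law
# scaling exponent alpha_D from hypothetical (dataset size, loss) pairs,
# using the fact that L = c * D**(-alpha) is a straight line in log-log space.
import numpy as np

# Hypothetical measurements of population loss at increasing dataset sizes.
D = np.array([1e3, 3e3, 1e4, 3e4, 1e5, 3e5])
L = np.array([0.52, 0.36, 0.25, 0.17, 0.12, 0.083])

# log L = log c - alpha * log D, so ordinary least squares gives the exponent.
slope, intercept = np.polyfit(np.log(D), np.log(L), 1)
alpha, c = -slope, np.exp(intercept)

print(f"fitted exponent alpha_D ~ {alpha:.3f}, prefactor c ~ {c:.3f}")
```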

Publisher

Proceedings of the National Academy of Sciences

Cited by 3 articles.

1. Training from Zero: Forecasting of Radio Frequency Machine Learning Data Quantity. Telecom (2024-07-18).

2. Machine learning meets physics: A two-way street. Proceedings of the National Academy of Sciences (2024-06-24).

3. Enhancing ASR Performance through Relative Word Frequency in OCR and Normal Word Frequency Analysis. 2024 IEEE 6th International Conference on AI Circuits and Systems (AICAS) (2024-04-22).
