Affiliation:
1. Pohang University of Science and Technology, Republic of Korea
Abstract
L2 regularization for weights in neural networks is widely used as a standard training trick. In addition to the weights, the use of batch normalization involves an additional trainable parameter γ, which acts as a scaling factor. However, L2 regularization for γ remains an open question and is applied in different ways depending on the library and the practitioner. In this paper, we study whether L2 regularization for γ is valid. To explore this issue, we consider two approaches: 1) variance control, which makes the residual network behave like an identity mapping, and 2) stable optimization through improvement of the effective learning rate. Through these two analyses, we specify which γ parameters are desirable and undesirable targets for L2 regularization and propose four guidelines for managing them. In several experiments, we observed that applying L2 regularization to applicable γ increased classification accuracy by 1%–4%, whereas applying it to inapplicable γ decreased classification accuracy by 1%–3%, which is consistent with our four guidelines. The proposed guidelines were further validated across various tasks and architectures, including variants of residual networks and transformers.
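As a concrete illustration of selectively decaying γ, the sketch below builds optimizer parameter groups in PyTorch so that batch-normalization scale parameters receive their own L2 (weight-decay) coefficient, separate from ordinary weights. This is a minimal sketch, not the paper's four guidelines: the rule used here (decay every γ, leave β undecayed) and the helper name split_parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

def split_parameters(model, base_decay=1e-4, gamma_decay=1e-4):
    # Partition parameters into three optimizer groups:
    #   weights          -> standard L2 regularization
    #   BN scales (γ)    -> their own, separately tunable L2 coefficient
    #   everything else  -> no decay (conv/linear biases, BN shifts β)
    weights, gammas, others = [], [], []
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            if module.weight is not None:
                gammas.append(module.weight)   # module.weight is γ
            if module.bias is not None:
                others.append(module.bias)     # module.bias is β
        else:
            for name, param in module.named_parameters(recurse=False):
                (weights if name == "weight" else others).append(param)
    return [
        {"params": weights, "weight_decay": base_decay},
        {"params": gammas, "weight_decay": gamma_decay},
        {"params": others, "weight_decay": 0.0},
    ]

# Usage: a toy convolutional stack; set gamma_decay=0.0 to exempt all γ.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
)
optimizer = torch.optim.SGD(split_parameters(model), lr=0.1, momentum=0.9)
```

Deciding the decay coefficient per γ (rather than globally, as in this sketch) is where the paper's distinction between desirable and undesirable γ would come into play.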
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Theoretical Computer Science