A Review on Dropout Regularization Approaches for Deep Neural Networks within the Scholarly Domain

Authors:

Imrus Salehin 1, Dae-Ki Kang 1

Affiliation:

1. Department of Computer Engineering, Dongseo University, 47 Jurye-ro, Sasang-gu, Busan 47011, Republic of Korea

Abstract

Dropout is one of the most popular regularization methods in the scholarly domain for preventing a neural network from overfitting during training. Because many neural network architectures have been proposed, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), each performing well in its specialized area, developing an effective dropout technique that suits the model architecture is crucial for deep learning tasks. In this paper, we provide a comprehensive review of the state of the art (SOTA) in dropout regularization. We explain dropout methods from standard random dropout to AutoDrop, from the original formulation to advanced variants, and discuss their performance and experimental capabilities. The paper summarizes the latest research on dropout regularization techniques that achieve improved performance through “Internal Structure Changes”, “Data Augmentation”, and “Input Information”. We find that regularization that respects the structural constraints of the network architecture is a critical factor in avoiding overfitting. We discuss the strengths and limitations of the surveyed methods, which can serve as references for future research and the development of new approaches. We also attend to the scholarly domain, analyzing several important academic issues concerning neural networks in response to the rapid growth of scientific research output.
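To make the standard random dropout mentioned above concrete, the following is a minimal NumPy sketch of the common "inverted dropout" formulation. It is illustrative only and not taken from the paper; the function name, the drop probability p, and the toy activation array h are all assumptions.

import numpy as np

def inverted_dropout(activations, p=0.5, training=True):
    # Inverted dropout (assumed here for illustration): zero each unit
    # independently with probability p during training and rescale the
    # survivors by 1/(1 - p), so the expected activation is unchanged
    # and no extra rescaling is needed at inference time.
    assert 0.0 <= p < 1.0, "drop probability must be in [0, 1)"
    if not training or p == 0.0:
        return activations
    keep_prob = 1.0 - p
    mask = np.random.rand(*activations.shape) < keep_prob
    return activations * mask / keep_prob

# Toy usage with made-up activations of shape (batch, units):
h = np.ones((4, 8))
print(inverted_dropout(h, p=0.5, training=True))   # about half the units zeroed; the rest scaled to 2.0
print(inverted_dropout(h, p=0.5, training=False))  # returned unchanged at inference

Because the surviving units are rescaled during training, the same forward pass can be used verbatim at test time, which is the main practical appeal of the inverted formulation.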

Funder

National Research Foundation of Korea

Publisher

MDPI AG

Subject

Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering

