Authors:
Baiyang Wang, Diego Klabjan
Abstract
Unsupervised neural networks, such as restricted Boltzmann machines (RBMs) and deep belief networks (DBNs), are powerful tools for feature selection and pattern recognition tasks. We demonstrate that overfitting occurs in such models just as in deep feedforward neural networks, and discuss possible regularization methods to reduce it. We also propose a "partial" approach to improve the efficiency of Dropout/DropConnect in this scenario, and discuss the theoretical justification of these methods in terms of model convergence and likelihood bounds. Finally, we compare the performance of these methods based on their likelihood and classification error rates on various pattern recognition data sets.
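To make the idea concrete, the following is a minimal sketch of applying Dropout to the hidden units of a binary RBM during a visible-to-hidden sampling pass. This is an illustration of standard Dropout only, not the paper's "partial" variant; the function names, shapes, and the drop rate are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_hidden_sample_with_dropout(v, W, b, p_drop=0.5):
    """One visible-to-hidden pass of a binary RBM with Dropout on hidden units.

    v: (n_vis,) binary visible vector
    W: (n_vis, n_hid) weight matrix
    b: (n_hid,) hidden biases
    p_drop: probability of zeroing a hidden unit (illustrative default)
    """
    probs = sigmoid(v @ W + b)                 # P(h_j = 1 | v)
    h = (rng.random(probs.shape) < probs).astype(float)   # Bernoulli sample
    mask = (rng.random(h.shape) >= p_drop).astype(float)  # keep with prob 1 - p_drop
    return h * mask                            # dropped units contribute nothing

# Toy usage with a hypothetical 3-visible, 4-hidden RBM
v = np.array([1.0, 0.0, 1.0])
W = rng.normal(scale=0.1, size=(3, 4))
b = np.zeros(4)
h = rbm_hidden_sample_with_dropout(v, W, b)
```

During contrastive-divergence training, the same mask would be reused for the corresponding hidden-to-visible reconstruction step, so that a dropped unit stays inactive for the whole Gibbs step.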
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
5 articles.
1. Training Data Augmentation with Data Distilled by Principal Component Analysis;Electronics;2024-01-08
2. Thai Handwritten Character Recognition Using Deep Convolutional Neural Network;2023 8th International Conference on Computer and Communication Systems (ICCCS);2023-04-21
3. An Unsupervised Deep Unfolding Framework for Robust Symbol-Level Precoding;IEEE Open Journal of the Communications Society;2023
4. Data enhancement analysis for deep-based image classification;International Conference on Cloud Computing, Performance Computing, and Deep Learning (CCPCDL 2022);2022-10-13
5. Fine-Tuning Dropout Regularization in Energy-Based Deep Learning;Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications;2021