Affiliation:
1. Gandhigram Rural Institute, Dindigul, India
2. Information and Communication Technology Department, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
Abstract
Deep learning has reached new heights owing to its state-of-the-art performance in a variety of fields, including computer vision, natural language processing, time series analysis, and healthcare. Deep learning models are commonly trained with batch or stochastic gradient descent and a small set of optimizers, which can lead to subpar model performance; consequently, considerable effort is now devoted to improving deep learning performance through gradient optimization methods. The proposed work analyses convolutional neural networks (CNNs) and deep neural networks (DNNs) with several state-of-the-art optimizers to enhance the performance of these architectures. Specific optimizers (SGD, RMSprop, Adam, Adadelta, etc.) are applied across different types of datasets so that their results can be compared. The study concludes with a thorough report on the optimizers' performance across a variety of architectures and datasets, which should help researchers choose appropriate optimizers for their frameworks and architectures. In total, the proposed work evaluates eight recent optimizers on four CNN and DNN architectures, and the experimental results demonstrate improvements in the efficiency of these architectures across various datasets.
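As a minimal sketch of the kind of comparison the abstract describes (not the authors' actual code), the snippet below trains the same small CNN under several Keras optimizers and reports test accuracy; the architecture, the MNIST dataset, and all hyperparameters here are illustrative assumptions, since the paper's exact configurations are not given in this excerpt.

```python
# Hypothetical optimizer comparison, assuming TensorFlow/Keras and MNIST.
# The CNN below stands in for the paper's unspecified architectures.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(num_classes=10):
    """A small CNN used identically for every optimizer under test."""
    return keras.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

# Load and normalize the data once so every run sees the same inputs.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Four of the optimizers named in the abstract; learning rates are defaults
# except for SGD, whose 0.01 is an assumed, not reported, value.
optimizers = {
    "SGD": keras.optimizers.SGD(learning_rate=0.01),
    "RMSprop": keras.optimizers.RMSprop(),
    "Adam": keras.optimizers.Adam(),
    "Adadelta": keras.optimizers.Adadelta(),
}

for name, opt in optimizers.items():
    model = build_cnn()
    model.compile(optimizer=opt,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{name}: test accuracy = {acc:.4f}")
```

Rebuilding the model inside the loop ensures each optimizer starts from fresh weights, so differences in the printed accuracies reflect the optimizer rather than a shared training history.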
Subject
Computer Networks and Communications, Information Systems
Cited by
2 articles.