Deterministic convergence of complex mini-batch gradient learning algorithm for fully complex-valued neural networks
Author:
Publisher: Elsevier BV
Subject: Artificial Intelligence, Cognitive Neuroscience, Computer Science Applications
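The title refers to mini-batch gradient training of fully complex-valued networks. As a purely illustrative sketch (not the algorithm analyzed in the paper), the NumPy snippet below runs mini-batch descent along the Wirtinger (conjugate) gradient for a single-layer fully complex-valued model with a complex tanh activation; the function names, toy data, batch size, and learning rate are assumptions introduced here for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def ctanh(z):
    # fully complex tanh activation, applied elementwise (illustrative choice)
    return np.tanh(z)

def minibatch_step(W, b, X, d, lr):
    # One mini-batch update for y = ctanh(X @ W + b), minimizing the
    # mean of 0.5*|y - d|^2 over the batch.
    s = X @ W + b
    y = ctanh(s)
    e = y - d                      # complex error
    fp = 1.0 - y ** 2              # tanh'(s), since tanh is analytic
    g = np.conj(fp) * e            # Wirtinger direction dE/dW* per sample (constants folded into lr)
    grad_W = X.conj().T @ g / len(d)
    grad_b = np.mean(g)
    return W - lr * grad_W, b - lr * grad_b

# toy problem: targets generated by a random model of the same form
n, dim, batch = 512, 4, 32
X = (rng.standard_normal((n, dim)) + 1j * rng.standard_normal((n, dim))) / np.sqrt(2)
W_true = 0.5 * (rng.standard_normal(dim) + 1j * rng.standard_normal(dim))
d = ctanh(X @ W_true + (0.1 + 0.2j))

W, b = np.zeros(dim, dtype=complex), 0j
for epoch in range(200):
    perm = rng.permutation(n)
    for k in range(0, n, batch):
        idx = perm[k:k + batch]
        W, b = minibatch_step(W, b, X[idx], d[idx], lr=0.1)

print("final MSE:", np.mean(np.abs(ctanh(X @ W + b) - d) ** 2))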
References (44 articles; first 5 listed):
1. Rumelhart, "Learning Representations by Backpropagating Errors," 1988.
2. Ciresan, "Simple neural nets for handwritten digit recognition," Neural Comput., 2010.
3. Nakama, "Theoretical analysis of batch and on-line training for gradient descent learning in neural networks," Neurocomputing, 2009.
4. Wilson, "The general inefficiency of batch training for gradient descent learning," Neural Netw., 2003.
5. X. Peng, L. Li, F. Wang, "Accelerating minibatch stochastic gradient descent using typicality sampling," IEEE Trans. Neural Netw. Learn. Syst., doi: 10.1109/TNNLS.2019.2957003.
Cited by 11 articles (first 5 listed):
1. "A hybrid complex spectral conjugate gradient learning algorithm for complex-valued data processing," Engineering Applications of Artificial Intelligence, 2024-07.
2. "Boundedness and Convergence of Mini-batch Gradient Method with Cyclic Dropconnect and Penalty," Neural Processing Letters, 2024-03-19.
3. "DAS-VSP Noise Elimination Based on the Dilated Pyramid Attention Network," INT J INNOV COMPUT I, 2023.
4. "Adaptive orthogonal gradient descent algorithm for fully complex-valued neural networks," Neurocomputing, 2023-08.
5. "Formal convergence analysis on deterministic ℓ1-regularization based mini-batch learning for RBF networks," Neurocomputing, 2023-05.