Unsupervised learning for robust working memory

Author:

Gu Jintao, Lim Sukbin

Abstract

Working memory is a core component of critical cognitive functions such as planning and decision-making. Persistent activity that lasts long after stimulus offset has been considered a neural substrate for working memory. Attractor dynamics based on network interactions can successfully reproduce such persistent activity. However, they suffer from a fine-tuning problem of network connectivity, particularly for forming the continuous attractors suggested for working memory encoding analog signals. Here, we investigate whether specific forms of synaptic plasticity rules can mitigate such tuning problems in two representative working memory models, namely, rate-coded and location-coded persistent activity. We consider two prominent types of plasticity rules, both of which have been proposed to fine-tune the weights in an unsupervised manner: differential plasticity, which targets the drift of the instantaneous neural activity, and homeostatic plasticity, which regularizes the long-term average of activity. Consistent with the findings of previous works, differential plasticity alone was sufficient to recover graded persistent activity with little sensitivity to the learning parameters. For the maintenance of spatially structured persistent activity, however, differential plasticity could recover persistent activity, but the recovered patterns could be irregular across stimulus locations. Homeostatic plasticity, on the other hand, showed robust recovery of smooth spatial patterns under particular types of synaptic perturbations, such as perturbations of the incoming synapses onto the entire population or onto local populations, while it was not effective against perturbations of the outgoing synapses from local populations. Combining it with differential plasticity instead recovered location-coded persistent activity for a broader range of perturbations, suggesting compensation between the two plasticity rules.

Author Summary

While external error and reward signals are essential for supervised and reinforcement learning, they are not always available. For example, when an animal holds a piece of information in mind over a short delay period in the absence of the original stimulus, it cannot generate an error signal by comparing its memory representation with the stimulus. It might therefore be helpful to utilize an internal signal to guide learning. Here, we investigate the role of unsupervised learning in working memory maintenance, acting during the delay period without external inputs. We consider two prominent classes of learning rules, namely, differential plasticity, which targets the drift of the instantaneous neural activity, and homeostatic plasticity, which regularizes the long-term average of activity. Both learning rules have been proposed to fine-tune synaptic weights without external teaching signals. By comparing their performance under various types of network perturbations, we reveal the conditions under which each rule can be effective and suggest a possible synergy between them.
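The abstract does not give the two learning rules in equation form. The following is a minimal sketch of one plausible formulation, assuming a linear rate network with rectified rates: differential plasticity is written as dW_ij ∝ -(dr_i/dt) r_j, so weights strengthen when activity decays and weaken when it grows, and homeostatic plasticity as dW_ij ∝ (r_target - ⟨r_i⟩) r_j, which scales incoming synapses toward a target average rate. All parameter values and names (N, tau, eta_diff, eta_homeo, r_target, tau_avg) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100          # number of rate units (assumed)
dt = 1e-3        # simulation time step, s
tau = 0.01       # neuronal time constant, s (assumed)
eta_diff = 1e-3  # learning rate, differential plasticity (assumed)
eta_homeo = 1e-3 # learning rate, homeostatic plasticity (assumed)
r_target = 5.0   # target long-term rate for homeostasis (assumed)
tau_avg = 1.0    # averaging time constant for homeostasis, s (assumed)

# A random recurrent weight matrix standing in for a perturbed,
# previously tuned attractor network; the paper's actual models
# (rate-coded and location-coded) are not reproduced here.
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))

r = rng.uniform(0.0, 10.0, size=N)  # instantaneous firing rates
r_avg = r.copy()                    # running average of rates

for step in range(10_000):
    # Linear rate dynamics with rectification: tau * dr/dt = -r + W @ r.
    drdt = (-r + W @ r) / tau
    r = np.clip(r + dt * drdt, 0.0, None)

    # Differential plasticity: counteract the drift of the
    # instantaneous activity, dW_ij ∝ -(dr_i/dt) * r_j.
    W -= eta_diff * dt * np.outer(drdt, r)

    # Homeostatic plasticity: adjust incoming synapses so each unit's
    # long-term average rate approaches r_target,
    # dW_ij ∝ (r_target - <r_i>) * r_j.
    r_avg += dt / tau_avg * (r - r_avg)
    W += eta_homeo * dt * np.outer(r_target - r_avg, r)

# Drift should shrink as the weights are retuned.
print(f"final mean |dr/dt|: {np.abs(drdt).mean():.3f}")
```

The contrast between the two rules lies in the signal each one taps: differential plasticity reacts to the instantaneous drift dr/dt, whereas homeostatic plasticity responds only to the slow running average of activity. This difference in timescale and target is consistent with the abstract's observation that the two rules are effective against different kinds of synaptic perturbation and can compensate for each other when combined.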

Publisher

Cold Spring Harbor Laboratory
