Reducing Catastrophic Forgetting With Associative Learning: A Lesson From Fruit Flies

Authors:

Yang Shen1, Sanjoy Dasgupta2, Saket Navlakha3

Affiliations:

1. Cold Spring Harbor Laboratory, Simons Center for Quantitative Biology, Cold Spring Harbor, NY 11724, U.S.A. yashen@cshl.edu

2. Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093, U.S.A. dasgupta@eng.ucsd.edu

3. Cold Spring Harbor Laboratory, Simons Center for Quantitative Biology, Cold Spring Harbor, NY 11724, U.S.A. navlakha@cshl.edu

Abstract

Catastrophic forgetting remains an outstanding challenge in continual learning. Recently, methods inspired by the brain, such as continual representation learning and memory replay, have been used to combat catastrophic forgetting. Associative learning (retaining associations between inputs and outputs, even after good representations are learned) plays an important role in the brain; however, its role in continual learning has not been carefully studied. Here, we identify a two-layer neural circuit in the fruit fly olfactory system that performs continual associative learning between odors and their associated valences. In the first layer, inputs (odors) are encoded using sparse, high-dimensional representations, which reduces memory interference by activating nonoverlapping populations of neurons for different odors. In the second layer, only the synapses between odor-activated neurons and the odor's associated output neuron are modified during learning; the rest of the weights are frozen to prevent unrelated memories from being overwritten. We prove theoretically that these two perceptron-like layers help reduce catastrophic forgetting compared to the original perceptron algorithm, under continual learning. We then show empirically on benchmark data sets that this simple and lightweight architecture outperforms other popular neural-inspired algorithms when also using a two-layer feedforward architecture. Overall, fruit flies evolved an efficient continual associative learning algorithm, and circuit mechanisms from neuroscience can be translated to improve machine computation.
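The abstract describes the circuit concretely enough to sketch in code. Below is a minimal NumPy illustration of the two layers it names: a fixed sparse random projection followed by top-k winner-take-all produces the sparse, high-dimensional code, and the learning rule modifies only the synapses from the currently active neurons to the associated output neuron, leaving all other weights frozen. The dimensions, expansion ratio, sparsity level, and update magnitude below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not values from the paper)
d = 50       # input (odor) dimension
m = 2000     # expanded second-layer dimension (Kenyon-cell analog)
k = 100      # neurons kept active by winner-take-all (~5% sparsity)
n_out = 2    # output neurons (valences / classes)

# Layer 1: fixed sparse random projection; each expanded neuron
# samples a small random subset of the input dimensions.
P = (rng.random((m, d)) < 0.1).astype(float)

def sparse_code(x):
    """Project to high dimension, then keep only the top-k neurons active."""
    a = P @ x
    h = np.zeros(m)
    h[np.argsort(a)[-k:]] = 1.0   # binary winner-take-all code
    return h

# Layer 2: learned weights from the sparse layer to the output neurons.
W = np.zeros((n_out, m))

def train_step(x, y, lr=0.1):
    """Partial freezing: only synapses between the currently activated
    neurons and the input's associated output neuron y are modified."""
    h = sparse_code(x)
    W[y, h > 0] += lr             # every other weight stays untouched

def predict(x):
    return int(np.argmax(W @ sparse_code(x)))
```

Because different inputs activate largely nonoverlapping top-k sets, an update for one association touches few of the weights encoding any other, which is the interference-reduction mechanism the abstract attributes to the circuit.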

Publisher

MIT Press

Subject

Cognitive Neuroscience, Arts and Humanities (miscellaneous)

Cited by 2 articles.

1. On Design Choices in Similarity-Preserving Sparse Randomized Embeddings. 2024 International Joint Conference on Neural Networks (IJCNN), 2024-06-30.

2. Sensory encoding and memory in the mushroom body: signals, noise, and variability. Learning & Memory, 2024-05.
