Abstract
How a system represents information tightly constrains the kinds of problems it can solve. Humans routinely solve problems that appear to require structured representations of stimulus properties and relations. Explaining how we acquire these representations is therefore of central importance for an account of human cognition. We propose a theory of how a system can learn invariant responses to instances of similarity and relative magnitude, and how structured relational representations can be learned from initially unstructured inputs. We instantiate that theory in the DORA (Discovery of Relations by Analogy) computational framework. The result is a system that learns structured representations of relations from unstructured, flat feature-vector representations of objects with absolute properties. The resulting representations meet the requirements of human structured relational representations, and the model captures several specific phenomena from the literature on cognitive development. In doing so, we address a major limitation of current accounts of cognition and provide an existence proof for how structured representations might be learned from experience.
Publisher: Cold Spring Harbor Laboratory
Cited by: 3 articles.