Abstract
A major barrier for autonomous and general learning systems is their inability to understand and adapt to new environments, that is, to apply previously learned abstract solutions to new problems. Supervised learning tasks such as classification require labels from an external source and cannot learn feature representations autonomously. This research details an unsupervised learning method for multi-modal feature detection and evaluation, intended as a preprocessing stage in general learning systems. The method comprises a clustering algorithm that can be applied to generic IoT sensor data and a seeded stimulus-labeling algorithm that is shaped and evolved by cross-modal input. The method is implemented and tested in two agents consuming audio and image data, each with different innate stimulus criteria. Each agent's run-time stimuli change over time with its experiences, and newly encountered features become meaningful without preprogrammed labels for distinct attributes. The architecture provides interfaces for higher-order cognitive processes to be built on top of the unsupervised preprocessor. Because the method is unsupervised and modular, in contrast to existing highly constrained and pretrained learning systems, it is extensible and well suited to artificial general intelligence.
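The abstract does not specify the clustering algorithm, so the following is only a minimal sketch of one plausible reading: an online, threshold-based clusterer that treats any incoming sensor reading as a feature vector, assigns it to the nearest existing cluster, and spawns a new unlabeled cluster when nothing is close enough. The class name, the distance threshold, and the running-mean centroid update are all illustrative assumptions, not the paper's actual method.

```python
# Sketch of unsupervised online clustering over generic sensor feature
# vectors (one hypothetical reading of the abstract, not the paper's
# published algorithm).
import numpy as np

class OnlineClusterer:
    def __init__(self, distance_threshold: float):
        self.threshold = distance_threshold  # assumed novelty criterion
        self.centroids: list[np.ndarray] = []
        self.counts: list[int] = []

    def observe(self, feature: np.ndarray) -> int:
        """Assign a feature vector to the nearest cluster, or start a
        new cluster if no centroid lies within the threshold."""
        if self.centroids:
            dists = [np.linalg.norm(feature - c) for c in self.centroids]
            best = int(np.argmin(dists))
            if dists[best] <= self.threshold:
                # Incrementally update the matched centroid (running mean).
                self.counts[best] += 1
                self.centroids[best] += (
                    feature - self.centroids[best]
                ) / self.counts[best]
                return best
        # Novel stimulus: create a new unlabeled cluster.
        self.centroids.append(feature.astype(float))
        self.counts.append(1)
        return len(self.centroids) - 1

# Example: cluster 2-D features from a hypothetical sensor stream.
clusterer = OnlineClusterer(distance_threshold=1.0)
for f in np.random.default_rng(0).normal(0.0, 0.2, size=(50, 2)):
    clusterer.observe(f)
print(len(clusterer.centroids), "clusters formed")
```

In this reading, cluster indices would stand in for the "seeded stimulus labels" that the paper says are later refined by cross-modal input; how that refinement works is not described in the abstract.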
Funder
Natural Sciences and Engineering Research Council
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
Cited by
1 article.