Abstract
We provide a construction for categorical representation learning and introduce the foundations of the 'categorifier'. The central theme of representation learning is the idea of 'everything to vector': every object in a dataset S can be represented as a vector in R^n by an encoding map E : Obj(S) → R^n. More importantly, every morphism can be represented as a matrix E : Hom(S) → R^{n×n}. The encoding map E is generally modeled by a deep neural network. The goal of representation learning is to design appropriate tasks on the dataset to train the encoding map (assuming that an encoding is optimal if it universally optimizes the performance on various tasks). However, this remains a set-theoretic approach. The goal of the current article is to promote representation learning to a new level via a category-theoretic approach. As a proof of concept, we provide an example of a text translator equipped with our technology, showing that our categorical learning model outperforms current deep learning models by a factor of 17. The content of the current article is part of a US provisional patent application filed by QGNai, Inc.
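The encoding scheme described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: objects of a toy category are embedded as vectors in R^n, morphisms as n×n matrices, and a morphism acts on an object's vector by matrix multiplication. All names and the dictionary-based representation are illustrative assumptions.

```python
import numpy as np

n = 4  # embedding dimension (illustrative choice)
rng = np.random.default_rng(0)

# Encoding map E on objects: Obj(S) -> R^n
object_embedding = {
    "A": rng.normal(size=n),
    "B": rng.normal(size=n),
}

# Encoding map E on morphisms: Hom(S) -> R^{n x n}.
# Identity morphisms are encoded as the identity matrix so that
# E(id_X) @ E(X) == E(X) holds exactly.
morphism_embedding = {
    ("A", "A", "id"): np.eye(n),
    ("B", "B", "id"): np.eye(n),
    ("A", "B", "f"): rng.normal(size=(n, n)),
}

# A morphism f: A -> B acts on encoded objects by matrix multiplication,
# so E(f) @ E(A) is the encoded image of A under f.
f = morphism_embedding[("A", "B", "f")]
image_of_A = f @ object_embedding["A"]
print(image_of_A.shape)
```

In a trained model these vectors and matrices would come from a neural network rather than a random initializer; the point here is only the shape of the representation: vectors for objects, matrices for morphisms, and composition of morphisms corresponding to matrix products.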
Subject
Artificial Intelligence, Human-Computer Interaction, Software
Cited by 4 articles.