Authors:
Zhang Tianning, Chen Tianqi, Li Erping, Yang Bo, Ang L. K.
Abstract
A tensor network, as a factorization of a tensor, should support the operations that are common for ordinary tensors, such as addition, contraction, and stacking. However, because its network structure is not unique, only tensor network contraction has so far been well defined. In this study, we propose a mathematically rigorous definition of the tensor network stack operation, which compresses a large number of tensor networks into a single one without changing their structures and configurations. We illustrate the main ideas with matrix product states used in machine learning as an example. Our results are compared with the for-loop and the efficient coding methods on both CPU and GPU.
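To make the for-loop versus batched comparison concrete, the following is a minimal NumPy sketch, not taken from the paper: it evaluates N matrix product states on the same per-site inputs, once with a Python for-loop over the individual networks and once as a single batched contraction over an array in which the N networks are stacked along a leading axis. All names, shapes, and the periodic boundary condition are illustrative assumptions.

    import numpy as np

    N, L, d, D = 8, 6, 2, 4          # number of MPS, sites, physical dim, bond dim

    rng = np.random.default_rng(0)
    # N independent MPS, each a list of L cores of shape (D, d, D);
    # stacked form: a single array of shape (N, L, D, d, D).
    cores = rng.normal(size=(N, L, D, d, D)) / np.sqrt(D)
    x = rng.normal(size=(L, d))       # one input vector per site

    def contract_loop(cores, x):
        """For-loop baseline: contract each MPS with the input separately."""
        out = np.empty(N)
        for n in range(N):
            env = np.eye(D)                          # left environment
            for l in range(L):
                A = np.einsum('idj,d->ij', cores[n, l], x[l])
                env = env @ A
            out[n] = np.trace(env)                   # periodic boundary for simplicity
        return out

    def contract_stacked(cores, x):
        """Stacked version: one batched contraction over all N networks at once."""
        env = np.broadcast_to(np.eye(D), (N, D, D)).copy()
        for l in range(L):
            A = np.einsum('nidj,d->nij', cores[:, l], x[l])
            env = np.einsum('nij,njk->nik', env, A)
        return np.einsum('nii->n', env)

    assert np.allclose(contract_loop(cores, x), contract_stacked(cores, x))

On an array library with the same einsum semantics on GPU, the stacked form replaces N small sequential contractions with one batched contraction per site, which is the kind of loop-versus-batch trade-off the CPU/GPU comparison in the abstract concerns.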
Subject
Physical and Theoretical Chemistry, General Physics and Astronomy, Mathematical Physics, Materials Science (miscellaneous), Biophysics
Cited by
1 article.