Affiliation:
1. Academy of Mathematics and Systems Science, Chinese Academy of Sciences and School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing, China
Abstract
Low-rank approximation models of data matrices have become important machine learning and data mining tools in many fields, including computer vision, text mining, and bioinformatics. They embed high-dimensional data into low-dimensional spaces, which mitigates the effects of noise and uncovers latent relations. To make the learned representations inherit the structure of the original data, graph-regularization terms are often added to the loss function. However, graphs constructed a priori often fail to reflect the true network connectivity and intrinsic relationships, and many graph-regularized methods ignore the dual (row and column) spaces. Probabilistic models are often used to model the distribution of the representations, but most previous methods assume, for simplicity, that the hidden variables are independent and identically distributed. To this end, we propose a learnable graph-regularization model for matrix decomposition (LGMD), which, for the first time, builds a bridge between graph-regularized methods and probabilistic matrix decomposition models. LGMD incorporates two graphical structures (i.e., two precision matrices) learned in an iterative manner via sparse precision matrix estimation, making it more robust to noise and missing entries. Extensive numerical results and comparisons with competing methods demonstrate its effectiveness.
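To make the iterative scheme described above concrete, the following is a minimal sketch of an LGMD-style alternating procedure: gradient steps on a graph-regularized factorization loss, interleaved with re-estimation of the two precision matrices by sparse inverse covariance estimation (graphical lasso via scikit-learn). The specific updates, step size, and regularization weights here are illustrative assumptions, not the paper's exact algorithm.

```python
# A hedged sketch of an LGMD-style alternating scheme; the factor updates,
# step size, and regularization weights are illustrative assumptions.
import numpy as np
from sklearn.covariance import GraphicalLasso

def lgmd_sketch(X, rank=5, n_iter=20, lam=0.1, lr=1e-3, glasso_alpha=0.1):
    """Approximate X (m x n) as U @ V.T with learned row/column precisions."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    Theta_r, Theta_c = np.eye(m), np.eye(n)  # row/column precision matrices
    for _ in range(n_iter):
        # Gradient steps on the regularized reconstruction loss
        #   ||X - U V^T||_F^2 + lam * (tr(U^T Theta_r U) + tr(V^T Theta_c V)).
        R = U @ V.T - X
        U -= lr * 2 * (R @ V + lam * Theta_r @ U)
        V -= lr * 2 * (R.T @ U + lam * Theta_c @ V)
        # Re-learn the two graphical structures from the current factors:
        # treat the rank columns of U (i.e., rows of U.T) as samples of an
        # m-dimensional variable, and likewise for V.
        Theta_r = GraphicalLasso(alpha=glasso_alpha,
                                 assume_centered=True).fit(U.T).precision_
        Theta_c = GraphicalLasso(alpha=glasso_alpha,
                                 assume_centered=True).fit(V.T).precision_
    return U, V, Theta_r, Theta_c
```

In this reading, the two precision matrices play the role that fixed graph Laplacians play in classical graph-regularized factorization (e.g., Cai et al.'s GNMF), except that they are re-estimated from the current factors at each iteration rather than constructed a priori.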
Funder
National Key R&D Program of China
National Natural Science Foundation of China
Publisher
Association for Computing Machinery (ACM)