The decimation scheme for symmetric matrix factorization

Author:

Francesco Camilli, Marc Mézard

Abstract

Matrix factorization is an inference problem that has acquired importance due to its vast range of applications, from dictionary learning to recommendation systems and machine learning with deep networks. The study of its fundamental statistical limits represents a true challenge, and despite a decade-long history of efforts in the community, there is still no closed formula able to describe its optimal performance in the case where the rank of the matrix scales linearly with its size. In the present paper, we study this extensive-rank problem, extending the alternative 'decimation' procedure that was recently introduced by Camilli and Mézard, and carry out a thorough study of its performance. Decimation aims at recovering one column/line of the factors at a time, by mapping the problem into a sequence of neural network models of associative memory at a tunable temperature. The main advantage of decimation is that its theoretical performance can be studied using statistical physics techniques. In this paper we provide the general analysis applying to a large class of compactly supported priors and we show that the replica symmetric free entropy of the neural network models takes a universal form in the low temperature limit. We then study the decimation phase diagrams in two concrete cases, the sparse Ising prior and the uniform prior, for which we show that matrix factorization is theoretically possible when the ratio P/N of rank to variables is below a certain threshold. This suggests that a possible route to solving the matrix factorization problem algorithmically could be to find efficient decimation algorithms; in this respect, we show that simple simulated annealing, though effective for limited signal sizes, is not scalable.
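The decimation-plus-annealing idea described above can be illustrated with a minimal sketch. This is not the authors' actual algorithm, and all parameter choices (cooling schedule, step counts, problem sizes) are illustrative assumptions: given a symmetric observation Y = XX^T/sqrt(N) with a Rademacher (±1) prior, one column is recovered at a time by simulated annealing on a Hopfield-like energy, and its rank-one contribution is then subtracted before moving to the next column.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal_column(Y, n_steps=20000, T0=2.0, Tf=0.05):
    """Recover one +/-1 column x by Metropolis simulated annealing on the
    Hopfield-like energy H(x) = -x^T Y x / (2*sqrt(N))."""
    N = Y.shape[0]
    x = rng.choice([-1.0, 1.0], size=N)
    field = Y @ x / np.sqrt(N)  # local fields h_i = sum_j Y_ij x_j / sqrt(N)
    for t in range(n_steps):
        T = T0 * (Tf / T0) ** (t / n_steps)  # geometric cooling schedule
        i = rng.integers(N)
        # Energy change for flipping spin i (self-coupling term removed)
        dE = 2.0 * x[i] * (field[i] - Y[i, i] * x[i] / np.sqrt(N))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            field -= 2.0 * x[i] * Y[:, i] / np.sqrt(N)
            x[i] = -x[i]
    return x

# Toy instance: N variables, P hidden Rademacher columns (P/N small).
N, P = 200, 3
X = rng.choice([-1.0, 1.0], size=(N, P))
Y = X @ X.T / np.sqrt(N)

recovered = []
for _ in range(P):                        # decimation: one column at a time
    x = anneal_column(Y)
    recovered.append(x)
    Y -= np.outer(x, x) / np.sqrt(N)      # subtract the found rank-one part

# Overlaps with the ground-truth columns, up to permutation and sign
overlaps = np.abs(np.array(recovered) @ X) / N
print(overlaps.max(axis=1))
```

At such a small rank-to-size ratio the associative-memory energy landscape has retrieval states close to the true columns, so the overlaps should approach 1; the abstract's point is precisely that this plain annealing strategy stops working as the signal size grows.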

Funder

NextGenerationEU

Publisher

IOP Publishing

