Abstract
Regularized sparse learning with the ℓ0-norm is important in many areas, including statistical learning and signal processing. Iterative hard thresholding (IHT) methods are the state of the art for nonconvex-constrained sparse learning because they can recover the true support and scale to large datasets. Current theoretical analyses of IHT, however, assume centralized IID data. In realistic large-scale scenarios, data are distributed, rarely IID, and private to edge computing devices. It is therefore necessary to study IHT in a federated environment, where local devices update the sparse model individually and communicate with a central server for aggregation only infrequently, without sharing local data. In this paper, we propose the first group of federated IHT methods with theoretical guarantees: Federated Hard Thresholding (Fed-HT) and Federated Iterative Hard Thresholding (FedIter-HT). We prove that both algorithms converge linearly and guarantee recovery of the optimal sparse estimator, comparable to classic IHT methods, but with decentralized, non-IID, and unbalanced data. Empirical results demonstrate that Fed-HT and FedIter-HT outperform their competitor, a distributed IHT, reducing objective values with fewer communication rounds and lower bandwidth requirements.
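The scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it is a hypothetical Fed-HT-style loop under simplifying assumptions: each client holds its own least-squares problem, runs a few plain gradient steps locally, and the server averages the local models and projects the average onto the ℓ0 constraint with a hard-thresholding operator. All function names, the toy data, and the hyperparameters (`rounds`, `local_steps`, `lr`) are illustrative choices, not taken from the paper.

```python
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w; zero out the rest."""
    out = np.zeros_like(w)
    idx = np.argpartition(np.abs(w), -k)[-k:]
    out[idx] = w[idx]
    return out

def fed_ht(clients, d, k, rounds=100, local_steps=5, lr=0.1):
    """Hypothetical Fed-HT-style sketch: local gradient descent on each
    client's least-squares loss, then server-side averaging followed by
    hard thresholding (the sparsity projection happens at aggregation)."""
    w = np.zeros(d)
    for _ in range(rounds):
        local_models = []
        for A, y in clients:
            w_i = w.copy()
            for _ in range(local_steps):
                grad = A.T @ (A @ w_i - y) / len(y)  # least-squares gradient
                w_i -= lr * grad
            local_models.append(w_i)
        # Server aggregates and projects onto the k-sparse constraint set.
        w = hard_threshold(np.mean(local_models, axis=0), k)
    return w

# Toy non-IID setup: a shared 3-sparse ground truth, but each client's
# design matrix has a different scale (unbalanced local distributions).
rng = np.random.default_rng(0)
d, k = 20, 3
w_star = np.zeros(d)
w_star[[1, 7, 15]] = [2.0, -1.5, 1.0]
clients = []
for _ in range(4):
    A = rng.normal(size=(60, d)) * rng.uniform(0.5, 1.5)
    clients.append((A, A @ w_star))  # noiseless local observations

w_hat = fed_ht(clients, d, k)
print(np.nonzero(w_hat)[0])  # indices of the recovered support
```

FedIter-HT, as described, would instead apply the thresholding step inside each local update rather than only at aggregation; that variant is a one-line change (threshold `w_i` after each local step).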
Funder
National Science Foundation
National Institutes of Health
Subject
Computational Mathematics, Computational Theory and Mathematics, Numerical Analysis, Theoretical Computer Science