Affiliation:
1. University of Central Florida, Orlando, Florida
2. Baidu Inc., Beijing, China
3. Missouri University of Science and Technology, Rolla, Missouri
Abstract
Sparse Discriminant Analysis (SDA) has been widely used to improve the performance of classical Fisher's Linear Discriminant Analysis in supervised metric learning, feature selection, and classification. With the increasing need for distributed data collection, storage, and processing, enabling sparse discriminant learning in multi-party distributed computing environments has become an emerging research topic. This article proposes a novel multi-party SDA algorithm that learns SDA models effectively without sharing any raw data or basic statistics among machines. The proposed algorithm (1) leverages the direct estimation of SDA to derive a distributed loss function for discriminant learning, (2) parameterizes the distributed loss function with local/global estimates obtained through bootstrapping, and (3) approximates the global estimate of the linear discriminant projection vector by optimizing the "distributed bootstrapping loss function" with gossip-based stochastic gradient descent. Experimental results on both synthetic and real-world benchmark datasets show that our algorithm achieves performance comparable to the aggregated SDA and significantly outperforms the most recent distributed SDA in terms of accuracy and F1-score.
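The abstract's steps (1)-(3) can be illustrated with a minimal sketch: each party summarizes its local data with bootstrap-averaged statistics, plugs them into a direct-estimation objective of the form (1/2) beta' Sigma beta - delta' beta + lambda ||beta||_1 (a common direct sparse-LDA formulation, assumed here), and the parties refine their parameter vectors by gossip-style averaging plus local (sub)gradient steps. All function names, the loss form, and the update schedule below are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_bootstrap_stats(X, y, n_boot=20, rng=rng):
    """Average class-mean difference and covariance over bootstrap
    resamples of one party's local data (assumed parameterization)."""
    d = X.shape[1]
    delta, sigma = np.zeros(d), np.zeros((d, d))
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        Xb, yb = X[idx], y[idx]
        delta += Xb[yb == 1].mean(0) - Xb[yb == 0].mean(0)
        sigma += np.cov(Xb.T)
    return delta / n_boot, sigma / n_boot

def local_subgradient(beta, delta, sigma, lam):
    """Subgradient of 0.5*beta'Sigma*beta - delta'beta + lam*||beta||_1."""
    return sigma @ beta - delta + lam * np.sign(beta)

def gossip_sgd(parties, lam=0.05, lr=0.05, n_rounds=500, rng=rng):
    """Each party holds (delta, sigma); per round, two random parties
    average their vectors (gossip), then every party takes a local step."""
    d = parties[0][0].shape[0]
    betas = [np.zeros(d) for _ in parties]
    for t in range(n_rounds):
        i, j = rng.choice(len(parties), 2, replace=False)
        avg = 0.5 * (betas[i] + betas[j])
        betas[i], betas[j] = avg.copy(), avg.copy()
        step = lr / np.sqrt(t + 1)
        for k, (delta, sigma) in enumerate(parties):
            betas[k] = betas[k] - step * local_subgradient(betas[k], delta, sigma, lam)
    return np.mean(betas, axis=0)

# Toy usage: three parties, each with local two-class Gaussian data.
d = 10
mu = np.zeros(d); mu[:3] = 1.0
parties = []
for _ in range(3):
    X0 = rng.normal(size=(100, d))
    X1 = rng.normal(size=(100, d)) + mu
    X = np.vstack([X0, X1]); y = np.r_[np.zeros(100), np.ones(100)]
    parties.append(local_bootstrap_stats(X, y))
beta_hat = gossip_sgd(parties)
print("estimated discriminant direction (first 5 coords):", beta_hat[:5])
```

In this sketch only the fixed-size parameter vectors are exchanged during gossip, so no raw samples or per-party statistics leave a machine, which mirrors the data-sharing constraint stated in the abstract.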
Funder
NSF: RAISE: CA-FW-HTF: Prepare the US Labor Force for Future Jobs in the Hotel and Restaurant Industry: A Hybrid Framework and Multi-Stakeholder Approach
NSF: CRII: CSR: NeuroMC---Parallel Online Scheduling of Mixed-Criticality Real-Time Systems via Neural Networks
Publisher
Association for Computing Machinery (ACM)
Cited by
5 articles.