Abstract
Stochastic gradient descent (SGD) and its variants have become increasingly popular in machine learning because of their efficiency and effectiveness. To handle large-scale problems, researchers have recently proposed several parallel SGD methods for multicore systems. However, existing parallel SGD methods cannot achieve satisfactory performance in real applications. In this paper, we propose a fast asynchronous parallel SGD method, called AsySVRG, which parallelizes the recently proposed SGD variant stochastic variance reduced gradient (SVRG) with an asynchronous update strategy. AsySVRG adopts a lock-free strategy, which is more efficient than lock-based alternatives. Furthermore, we theoretically prove that AsySVRG converges at a linear rate. Both theoretical and empirical results show that AsySVRG outperforms existing state-of-the-art parallel SGD methods, such as Hogwild!, in terms of convergence rate and computation cost.
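To make the scheme described in the abstract concrete, the following is a minimal Python sketch of a lock-free asynchronous SVRG loop, not the authors' implementation: the toy least-squares problem, step size, thread count, and epoch count are all assumptions chosen for illustration. Several threads share one parameter vector and apply variance-reduced updates to it without taking any lock; note that CPython's GIL means this sketch demonstrates the update rule rather than true multicore speedup.

```python
import numpy as np
from threading import Thread

# Toy least-squares problem (assumed for illustration): f_i(w) = 0.5 * (x_i @ w - y_i)^2
rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

def grad_i(w, i):
    """Gradient of the i-th component function at w."""
    return (X[i] @ w - y[i]) * X[i]

def worker(w, w_snap, mu, eta, steps):
    """Inner-loop worker: lock-free variance-reduced updates on the shared w."""
    local_rng = np.random.default_rng()    # per-thread RNG
    for _ in range(steps):
        i = local_rng.integers(n)
        # SVRG direction: stochastic gradient at the (possibly stale) current w,
        # debiased by the snapshot gradient and the full gradient mu at the snapshot.
        v = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= eta * v                       # in-place write to shared vector, no lock taken

w = np.zeros(d)                            # parameter vector shared by all threads
for epoch in range(15):                    # outer SVRG loop
    w_snap = w.copy()                      # snapshot of the current iterate
    mu = X.T @ (X @ w_snap - y) / n        # full gradient at the snapshot
    threads = [Thread(target=worker, args=(w, w_snap, mu, 0.01, n // 4))
               for _ in range(4)]          # 4 lock-free inner-loop workers
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"epoch {epoch}: loss = {0.5 * np.mean((X @ w - y) ** 2):.6f}")
```

The key design point visible here is that each worker reads and writes the shared vector directly, so updates may interleave; the variance-reduced direction keeps the per-step gradient noise small enough that such inconsistent reads still permit the linear convergence rate claimed in the paper.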
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
8 articles.
1. WBSP: Addressing stragglers in distributed machine learning with worker-busy synchronous parallel. Parallel Computing, 2024-09.
2. NDPipe: Exploiting Near-data Processing for Scalable Inference and Continuous Training in Photo Storage. Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, 2024-04-27.
3. Tree Network Design for Faster Distributed Machine Learning Process with Distributed Dual Coordinate Ascent. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024-04-14.
4. Distributed Dual Coordinate Ascent With Imbalanced Data on a General Tree Network. 2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing (MLSP), 2023-09-17.
5. A Load-Balancing Strategy Based on Multi-Task Learning in a Distributed Training Environment. 2023 International Conference on Advances in Electrical Engineering and Computer Applications (AEECA), 2023-08-18.