Authors:
Akbay, Abdullah B.; Zhang, Junshan
Abstract
We consider a distributed learning setting in which strategic users are incentivized by a fusion center to train a learning model based on their local data. The users are not obliged to report their true gradient updates, and the fusion center cannot validate the authenticity of the reported updates. Thus motivated, we formulate the interactions between the fusion center and the users as repeated games, manifesting an under-explored interplay between machine learning and game theory. We then develop an incentive mechanism for the fusion center based on a joint gradient estimation and user action classification scheme, and study its impact on the convergence performance of distributed learning. Further, we devise adaptive zero-determinant (ZD) strategies, generalizing classical ZD strategies to repeated games with time-varying stochastic errors. Theoretical and empirical analyses show that the fusion center can incentivize strategic users to cooperate and report informative gradient updates, thus ensuring convergence.
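The paper's adaptive ZD strategies for settings with time-varying stochastic errors are not reproduced here; as a point of reference, the following minimal Python sketch simulates the classical baseline the abstract says is being generalized: a Press-Dyson extortionate ZD strategy in an iterated prisoner's dilemma. The payoff values (R, S, T, P), the extortion factor chi = 3, and the opponent's memory-one probabilities (OPP) are illustrative assumptions, not taken from the paper.

```python
import random

# Payoffs (R, S, T, P) for the row player in a standard prisoner's dilemma.
R, S, T, P = 3, 0, 5, 1
CHI = 3.0  # extortion factor chi (illustrative choice)

# Press-Dyson extortionate ZD strategy with chi = 3, phi = 1/26:
# probability of cooperating after the previous outcome (my move, their move).
ZD = {('C', 'C'): 11/13, ('C', 'D'): 1/2, ('D', 'C'): 7/26, ('D', 'D'): 0.0}

# Hypothetical opponent: an arbitrary memory-one strategy (illustrative values).
OPP = {('C', 'C'): 0.8, ('C', 'D'): 0.3, ('D', 'C'): 0.6, ('D', 'D'): 0.2}

def payoff(a, b):
    """Row player's payoff when it plays a and the other player plays b."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(a, b)]

def simulate(rounds=500_000, seed=0):
    rng = random.Random(seed)
    x, y = 'C', 'C'  # arbitrary initial moves
    sx = sy = 0.0
    for _ in range(rounds):
        # Each player mixes based on the previous round's outcome.
        nx = 'C' if rng.random() < ZD[(x, y)] else 'D'
        ny = 'C' if rng.random() < OPP[(y, x)] else 'D'
        x, y = nx, ny
        sx += payoff(x, y)
        sy += payoff(y, x)
    return sx / rounds, sy / rounds

sx, sy = simulate()
print(f"avg payoffs: ZD player {sx:.3f}, opponent {sy:.3f}")
print(f"(s_X - P) = {sx - P:.3f}  vs  chi*(s_Y - P) = {CHI * (sy - P):.3f}")
```

The two printed quantities should approximately coincide: a ZD strategy unilaterally enforces the linear payoff relation s_X - P = chi * (s_Y - P) regardless of how the opponent plays, which is the deterministic-feedback property the paper extends to games whose observed actions are corrupted by time-varying stochastic errors.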
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
3 articles.
1. Incentivizing Participation in SplitFed Learning: Convergence Analysis and Model Versioning. 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS), 2024-07-23.
2. Generosity Pays Off: A Game-Theoretic Study of Cooperation in Decentralized Learning. 2024 IEEE International Conference on Communications Workshops (ICC Workshops), 2024-06-09.
3. Distributed Stochastic Gradient Descent with Cost-Sensitive and Strategic Agents. 2022 56th Asilomar Conference on Signals, Systems, and Computers, 2022-10-31.