Abstract
We consider information-theoretic bounds on the expected generalization error for statistical learning problems in a network setting. In this setting, there are K nodes, each with its own independent dataset, and the models from the K nodes must be aggregated into a final centralized model. We consider both simple averaging of the models and more elaborate multi-round algorithms. We give upper bounds on the expected generalization error for a variety of problems, such as those with Bregman divergence or Lipschitz continuous losses, that exhibit an improved 1/K dependence on the number of nodes. These “per node” bounds are expressed in terms of the mutual information between the training dataset and the trained weights at each node, and are therefore useful for characterizing the generalization properties inherent to having communication or privacy constraints at each node.
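The simple-averaging aggregation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function name, the use of flat NumPy weight vectors, and the example values are all assumptions made for the sketch.

```python
import numpy as np

def average_models(node_weights):
    """Aggregate K node models by element-wise averaging of their
    parameter vectors (the 'simple averaging' setting): w = (1/K) * sum_k w_k.
    """
    # Stack the K weight vectors into a (K, d) array and average over nodes.
    return np.mean(np.stack(node_weights), axis=0)

# Hypothetical example: K = 3 nodes, each holding a 4-parameter model
# trained on its own independent local dataset.
weights = [
    np.array([1.0, 2.0, 3.0, 4.0]),
    np.array([3.0, 2.0, 1.0, 0.0]),
    np.array([2.0, 2.0, 2.0, 2.0]),
]
w_central = average_models(weights)
print(w_central)  # -> [2. 2. 2. 2.]
```

Multi-round schemes (e.g. the FedAvg-style algorithms in the cited McMahan et al. reference) would interleave such averaging steps with further local training at each node.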
Funder
National Science Foundation
Subject
General Physics and Astronomy
References (21 articles, first 5 shown)
1. How Much Does Your Data Exploration Overfit? Controlling Bias via Information Usage
2. Information-Theoretic Analysis of Generalization Capability of Learning Algorithms; Xu; Adv. Neural Inf. Process. Syst., 2017
3. Tightening Mutual Information-Based Bounds on Generalization Error
4. Tighter Expected Generalization Error Bounds via Convexity of Information Measures; Aminian; Proceedings of the 2022 IEEE International Symposium on Information Theory (ISIT), 2022
5. Communication-Efficient Learning of Deep Networks from Decentralized Data; McMahan; Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 2017
Cited by 5 articles.