Abstract
We generalize algorithms from computational learning theory that are successful under the uniform distribution on the Boolean hypercube {0, 1}^n to algorithms successful on permutation-invariant distributions, distributions that remain invariant under permutations of the coordinates of the instances. While the tools in our generalization mimic those used for the Boolean hypercube, the fact that permutation-invariant distributions are not product distributions presents a significant obstacle.
We prove analogous results for permutation-invariant distributions; more generally, we work in the domain of the symmetric group. We define noise sensitivity in this setting and show that it has a nice combinatorial interpretation in terms of Young tableaux. The main technical innovations involve techniques from the representation theory of the symmetric group, especially the combinatorics of Young tableaux. We show that low noise sensitivity implies concentration on “simple” components of the Fourier spectrum, and that this concentration allows us to agnostically learn halfspaces under permutation-invariant distributions to constant accuracy, in roughly the same running time as in the case of the uniform distribution over the Boolean hypercube.
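For reference, the classical notion being generalized here is noise sensitivity over the Boolean hypercube (a standard definition, stated for context; the paper's symmetric-group analogue is its own contribution and is not reproduced here): for f : {0, 1}^n → {−1, 1} and noise rate δ ∈ (0, 1/2),

\[
\mathrm{NS}_{\delta}(f) \;=\; \Pr_{x,\,y}\bigl[f(x) \neq f(y)\bigr],
\]

where x is drawn uniformly from {0, 1}^n and y is obtained from x by flipping each coordinate independently with probability δ. Low noise sensitivity in this classical setting likewise corresponds to Fourier concentration on low-degree components.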
Funder
National Science Foundation
Publisher
Association for Computing Machinery (ACM)
Subject
Mathematics (miscellaneous)