Large deviations in the perceptron model and consequences for active learning
Published: 2021-07-15
Container-title: Machine Learning: Science and Technology
Short-container-title: Mach. Learn.: Sci. Technol.
Volume: 2
Issue: 4
Page: 045001
ISSN: 2632-2153
Authors:
Cui H,
Saglietti L,
Zdeborová L
Abstract
Active learning (AL) is a branch of machine learning that deals with problems where unlabeled data is abundant but obtaining labels is expensive. The learning algorithm can query a limited number of samples to obtain the corresponding labels, which are subsequently used for supervised learning. In this work, we consider the task of choosing the subset of samples to be labeled from a fixed, finite pool of samples. We assume the pool of samples to be a random matrix and the ground-truth labels to be generated by a single-layer teacher network with random weights. We employ replica methods to analyze the large deviations of the accuracy achieved after supervised learning on a subset of the original pool. These large deviations then provide optimal achievable performance bounds for any AL algorithm. We show that the optimal learning performance can be efficiently approached by simple message-passing AL algorithms. We also provide a comparison with the performance of some other popular active learning strategies.
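To make the setting concrete, below is a minimal sketch of pool-based active learning with a random teacher perceptron on a Gaussian pool, as described in the abstract. The uncertainty-sampling query rule, the plain perceptron training loop, and all variable names are illustrative assumptions; this is not the message-passing algorithm analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input dimension, pool size, and label budget (illustrative values).
N, P, n_queries = 100, 1000, 50

# Teacher perceptron with random weights labels an i.i.d. Gaussian pool,
# matching the teacher-student setting described in the abstract.
w_teacher = rng.standard_normal(N)
X_pool = rng.standard_normal((P, N))
y_pool = np.sign(X_pool @ w_teacher)

labeled = []               # indices of pool samples whose labels were queried
w_student = np.zeros(N)

for _ in range(n_queries):
    # Uncertainty sampling: query the unlabeled sample closest to the
    # student's current decision boundary (smallest absolute margin).
    margins = np.abs(X_pool @ w_student)
    margins[labeled] = np.inf
    labeled.append(int(np.argmin(margins)))

    # Refit the student on all labels gathered so far (perceptron epochs).
    for _ in range(100):
        mistakes = 0
        for j in labeled:
            if y_pool[j] * (X_pool[j] @ w_student) <= 0:
                w_student += y_pool[j] * X_pool[j]
                mistakes += 1
        if mistakes == 0:
            break

# For Gaussian inputs, the generalization error of a perceptron student
# against a perceptron teacher is arccos(overlap) / pi.
overlap = w_student @ w_teacher / (
    np.linalg.norm(w_student) * np.linalg.norm(w_teacher))
eps = np.arccos(np.clip(overlap, -1.0, 1.0)) / np.pi
print(f"teacher-student overlap: {overlap:.3f}  generalization error: {eps:.3f}")
```

Replacing the `argmin` selection with a uniformly random index gives the passive-learning baseline against which such query strategies are typically compared.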
Funder: H2020 European Research Council
Subject: Artificial Intelligence, Human-Computer Interaction, Software
Cited by: 1 article.