Affiliation:
1. Shanghai Jiao Tong University, Shanghai, China
2. Microsoft, Beijing, China
Abstract
Active learning selectively labels the most informative instances, aiming to reduce the cost of data annotation. While much effort has been devoted to active sampling functions, relatively little attention has been paid to when the learning process should stop. In this article, we focus on the stopping criterion of active learning and propose a model stability-based criterion, that is, stopping when the model no longer changes with the inclusion of additional training instances. The challenge lies in measuring model change without labeling additional instances and training new models. Inspired by the stochastic gradient update rule, we use the gradient of the loss function at each candidate example to measure its effect on model change. We propose to stop active learning when the model change brought by any of the remaining unlabeled examples is lower than a given threshold. We apply the proposed stopping criterion to two popular classifiers: logistic regression (LR) and support vector machines (SVMs). In addition, we theoretically analyze the stability and generalization ability of the model obtained by our stopping criterion. Extensive experiments on various UCI benchmark datasets and on ImageNet demonstrate that the proposed approach is highly effective.
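To illustrate the idea behind the criterion, here is a minimal sketch for the logistic regression case. It is not the paper's exact algorithm: since a candidate's true label is unknown, this sketch takes the expectation of the per-example gradient norm over both possible labels, weighted by the model's own predicted probabilities (an assumption; the paper's estimator may differ), and stops when no remaining unlabeled example exceeds a threshold.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def expected_gradient_norm(w, x):
    """Proxy for the model change a candidate x would cause.

    For logistic regression, the per-example gradient of the log loss
    is (sigmoid(w.x) - y) * x. The true label y is unknown, so we take
    the expectation over y in {0, 1} weighted by the model's predicted
    probabilities (a hypothetical estimator for illustration).
    """
    p = sigmoid(w @ x)
    g_pos = np.linalg.norm((p - 1.0) * x)  # gradient norm if y = 1
    g_neg = np.linalg.norm(p * x)          # gradient norm if y = 0
    return p * g_pos + (1.0 - p) * g_neg

def should_stop(w, unlabeled, threshold):
    # Stop active learning when no remaining unlabeled example would
    # change the model by more than `threshold` under the proxy above.
    return all(expected_gradient_norm(w, x) < threshold for x in unlabeled)
```

In an active learning loop, `should_stop` would be checked after each labeling round, replacing a fixed labeling budget with a data-driven halt.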
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Theoretical Computer Science
Cited by
9 articles.