Affiliation:
1. Machine Learning Engineer, Azure Sovereign Operations, Microsoft, Redmond, Washington, USA
Abstract
Training supervised machine learning models requires labeled examples. A judicious choice of examples is helpful when there is a significant cost associated with assigning labels. This article improves upon a promising existing method, Batch-mode Expected Model Change Maximization (B-EMCM), for selecting examples to be labeled for regression problems. Specifically, it develops and evaluates alternate strategies for adaptively selecting the batch size in B-EMCM, yielding adaptive B-EMCM (AB-EMCM).

By bounding the cumulative error that arises from estimating the stochastic gradient descent step, a stopping criterion for each batch iteration can be specified, ensuring that the selected candidates are the most beneficial to model learning. The new methodology is compared to B-EMCM using mean absolute error and root mean square error over ten iterations on benchmark machine learning data sets.

Across multiple data sets and metrics, one variant of AB-EMCM, the maximum bound on the accumulated error (AB-EMCM Max), showed the best results among the adaptive batch approaches. It achieved better root mean square error (RMSE) and mean absolute error (MAE) than the other adaptive and non-adaptive batch methods while reaching the result in nearly the same number of iterations as the non-adaptive batch methods.
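To make the stopping rule concrete, the fragment below is a minimal sketch, not the paper's implementation: the linear model, the function names expected_model_change and select_adaptive_batch, and the single fixed max_bound are all assumptions introduced here to illustrate the AB-EMCM Max idea of growing the batch until the accumulated estimation error crosses a bound.

```python
# Illustrative sketch only; names and the fixed-bound rule are assumptions,
# not the exact AB-EMCM algorithm from the article.
import numpy as np

def expected_model_change(w, x, y_samples):
    # EMCM-style score: norm of the SGD gradient of the squared loss at
    # candidate x, averaged over bootstrap-predicted labels y_samples.
    residuals = w @ x - np.asarray(y_samples)   # (w.x - y_k) for each sample
    return float(np.mean([np.linalg.norm(r * x) for r in residuals]))

def select_adaptive_batch(w, pool, label_samples, max_bound):
    # Rank candidates by expected model change, then grow the batch until
    # the accumulated error estimate crosses max_bound ("AB-EMCM Max" idea).
    scores = [expected_model_change(w, x, ys)
              for x, ys in zip(pool, label_samples)]
    order = np.argsort(scores)[::-1]            # best candidates first
    batch, accumulated = [], 0.0
    for i in order:
        accumulated += scores[i]
        if batch and accumulated > max_bound:   # adaptive stopping criterion
            break
        batch.append(int(i))
    return batch

# Toy usage: 5 candidates in R^3, 4 bootstrap label predictions each.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
pool = rng.normal(size=(5, 3))
label_samples = rng.normal(size=(5, 4))
print(select_adaptive_batch(w, pool, label_samples, max_bound=2.0))
```

The point the sketch illustrates is that the batch size is not fixed in advance: each iteration selects however many top-ranked candidates fit under the error bound.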
Publisher
Society for Makers, Artists, Researchers and Technologists