Affiliation:
1. Yunnan Normal University, Kunming, China
Abstract
Test-time augmentation (TTA) is a well-established technique that aggregates predictions over transformed versions of a test input during inference. The goal is to improve model performance and reduce prediction uncertainty. Despite its advantages of requiring no additional training or hyperparameter tuning, and of being applicable to any existing model, TTA remains in its early stages in NLP. This is partly because it is difficult to discern the contribution of individual transformed samples, some of which can harm predictions. To address these issues, we propose Selective Test-Time Augmentation (STTA), which identifies reliable transformed samples and aggregates only the most beneficial ones. Furthermore, we analyze and empirically verify why TTA is sensitive to certain text data augmentation methods and why some of them lead to erroneous predictions. Through extensive experiments, we demonstrate that STTA is a simple and effective method that produces promising results on various text classification tasks.
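The aggregation step the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: the classifier and the two augmentations are toy stand-ins, and real TTA would use a trained model and standard text augmentations (e.g. synonym replacement, word deletion).

```python
# Minimal sketch of test-time augmentation (TTA) for text classification.
# `model`, `synonym_swap`, and `drop_last_word` are hypothetical stand-ins.

def synonym_swap(text):
    # Toy augmentation standing in for synonym replacement.
    return text.replace("good", "great")

def drop_last_word(text):
    # Toy augmentation standing in for random word deletion.
    return text.rsplit(" ", 1)[0]

def model(text):
    # Stand-in classifier returning probabilities [negative, positive].
    score = min(text.lower().count("good") + text.lower().count("great"), 1)
    return [1.0 - score, float(score)]

def tta_predict(text, augmentations):
    """Average class probabilities over the original and augmented inputs,
    then return (predicted label, averaged probabilities)."""
    variants = [text] + [aug(text) for aug in augmentations]
    probs = [model(v) for v in variants]
    n = len(probs)
    avg = [sum(p[i] for p in probs) / n for i in range(len(probs[0]))]
    return max(range(len(avg)), key=avg.__getitem__), avg

label, avg = tta_predict("a good movie", [synonym_swap, drop_last_word])
```

STTA would add a selection step before averaging, keeping only the transformed samples judged reliable rather than aggregating all of them.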
Funder
The Ten Thousand Talent Plans for Young Top-notch Talents of Yunnan Province
References (54 articles)
1. Barbieri (2018). SemEval-2018 Task 2: Multilingual Emoji Prediction.
2. Bayer (2022). A survey on data augmentation for text classification. ACM Computing Surveys.
3. Breiman (1996). Bagging predictors. Machine Learning.
4. Cohen (2019). Certified adversarial robustness via randomized smoothing.