Learning to learn for few-shot continual active learning

Author:

Stella Ho, Ming Liu, Shang Gao, Longxiang Gao

Abstract

Continual learning strives to ensure stability in solving previously seen tasks while demonstrating plasticity in novel domains. Recent advances in continual learning are mostly confined to a supervised learning setting, especially in the NLP domain. In this work, we consider a few-shot continual active learning setting in which labeled data are inadequate, and unlabeled data are abundant but the annotation budget is limited. We exploit meta-learning and propose a method called Meta-Continual Active Learning. This method sequentially queries the most informative examples from a pool of unlabeled data for annotation to enhance task-specific performance, and tackles continual learning problems through a meta-objective. Specifically, we employ meta-learning and experience replay to address inter-task confusion and catastrophic forgetting. We further incorporate textual augmentations to avoid the memory over-fitting caused by experience replay and sample queries, thereby ensuring generalization. We conduct extensive experiments on benchmark text-classification datasets from diverse domains to validate the feasibility and effectiveness of meta-continual active learning. We also analyze the impact of different active learning strategies on various meta-continual learning models. The experimental results demonstrate that introducing randomness into sample selection is the best default strategy for maintaining generalization in a meta-continual learning framework.
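Two components of the setting described above can be illustrated concretely: a query step that selects unlabeled examples for annotation under a budget, and a replay buffer that retains past examples to combat forgetting. The sketch below is illustrative only, assuming the abstract's finding that random selection is a strong default query strategy; the names `random_query` and `ReplayBuffer`, and the use of reservoir sampling for the buffer, are assumptions and not taken from the paper itself.

```python
import random


def random_query(unlabeled_pool, budget, seed=0):
    """Select up to `budget` examples uniformly at random from the pool.

    Randomness in sample selection is reported in the abstract as the
    best default strategy for generalization; this helper only shows
    that selection step, not the full meta-learning pipeline.
    """
    rng = random.Random(seed)
    return rng.sample(unlabeled_pool, min(budget, len(unlabeled_pool)))


class ReplayBuffer:
    """Fixed-capacity memory of past examples, filled via reservoir sampling
    so every example seen so far has an equal chance of being retained."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Replace a stored item with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        """Draw up to `k` stored examples for a replay/meta-update step."""
        return self.rng.sample(self.items, min(k, len(self.items)))
```

In a sequential-task loop, each queried-and-annotated example would also be added to the buffer, and replayed samples would be mixed into later updates; the meta-objective and textual augmentations described in the abstract sit on top of these two pieces.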

Funder

Deakin University

Publisher

Springer Science and Business Media LLC
