Authors: Wang Yongjie, Hu Minghao, Xu Xiantao, Luo Wei, Luo Zhunchen
Publisher: Springer Nature Switzerland