Abstract
Recently proposed pre-trained language models can easily be fine-tuned for a wide range of downstream tasks. However, fine-tuning requires a large training set. This PhD project introduces novel natural language processing (NLP) use cases in the healthcare domain, where obtaining a large training dataset is difficult and expensive. To this end, we propose data-efficient algorithms for fine-tuning NLP models in low-resource settings and validate their effectiveness. We expect the outcomes of this PhD project to contribute both to NLP research and to low-resource application domains.
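To make the fine-tuning setting concrete, the sketch below shows a minimal, generic way to fine-tune a pre-trained language model on a tiny labelled text-classification set with the Hugging Face transformers and datasets libraries. This is not the project's proposed data-efficient algorithm; the "distilbert-base-uncased" checkpoint and the four toy healthcare-flavoured sentences are placeholder assumptions used only for illustration.

```python
# Minimal low-resource fine-tuning sketch (illustrative, not the thesis method).
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Toy in-memory "low-resource" dataset: a handful of labelled sentences (hypothetical).
train = Dataset.from_dict({
    "text": ["patient reports mild headache", "no adverse events observed",
             "severe allergic reaction noted", "routine follow-up scheduled"],
    "label": [1, 0, 1, 0],
})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    # Convert raw text into fixed-length token IDs expected by the model.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

train = train.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=2, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train).train()
```

With only a few labelled examples, a plain run like this typically overfits or underperforms, which is precisely the gap the proposed data-efficient fine-tuning algorithms aim to address.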
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
1 article.
1. Incremental Soft Pruning to Get the Sparse Neural Network During Training. 2024 International Joint Conference on Neural Networks (IJCNN), 2024-06-30.