Abstract
In this study, we demonstrate that an open-domain conversational system trained on idioms or figurative language generates more fitting responses to prompts containing idioms. Idioms are part of everyday speech in many languages and across many cultures, but they pose a great challenge for many natural language processing (NLP) systems involved in tasks such as information retrieval (IR), machine translation (MT), and conversational artificial intelligence (AI). We utilized the Potential Idiomatic Expression (PIE)-English idiom corpus for the two tasks we investigated: classification and conversation generation. We achieved a state-of-the-art (SoTA) result of a 98% macro F1 score on the classification task using the SoTA T5 model. For conversation generation, we experimented with three instances of the SoTA dialogue model, the Dialogue Generative Pre-trained Transformer (DialoGPT). Their performance was evaluated using the automatic metric perplexity and human evaluation. The results showed that the model trained on the idiom corpus generated more fitting responses to prompts containing idioms 71.9% of the time, compared with a similar model not trained on the idiom corpus. We have contributed the model checkpoint, demo, and code to the HuggingFace hub for public access.
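The sketch below illustrates the kind of pipeline the abstract describes: loading a DialoGPT-style checkpoint with the Hugging Face Transformers library, generating a response to an idiom-containing prompt, and scoring a reference reply with perplexity. It is a minimal illustration, not the authors' released code; the checkpoint name, prompt, and sampling settings are assumptions (the actual fine-tuned checkpoint published on the HuggingFace hub may have a different name and configuration).

```python
# Minimal sketch of idiom-prompt response generation and perplexity scoring.
# Assumptions: the base "microsoft/DialoGPT-medium" checkpoint stands in for the
# idiom-fine-tuned model described in the paper; prompt and sampling settings
# are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "microsoft/DialoGPT-medium"  # hypothetical stand-in checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# A prompt containing an idiom ("break a leg").
prompt = "Break a leg at your interview tomorrow!"
# DialoGPT expects each dialogue turn to end with the EOS token.
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

with torch.no_grad():
    # Sample a response to the idiom-containing prompt.
    output_ids = model.generate(
        input_ids,
        max_length=100,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        top_k=50,
        top_p=0.95,
    )

# Keep only the newly generated tokens (the model's reply).
response = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print("Response:", response)

# Perplexity of the model on a full prompt + reply sequence (lower is better);
# the reference reply here is made up for illustration.
reference = prompt + tokenizer.eos_token + "Thanks, I will do my best!" + tokenizer.eos_token
ref_ids = tokenizer.encode(reference, return_tensors="pt")
with torch.no_grad():
    loss = model(ref_ids, labels=ref_ids).loss  # mean cross-entropy over tokens
print("Perplexity:", torch.exp(loss).item())
```

In practice, the comparison reported in the abstract would contrast responses (and perplexities) from a model fine-tuned on the PIE-English idiom corpus against a comparable model without that fine-tuning, with human judges rating which response fits the idiomatic prompt better.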
Subject
General Materials Science