Abstract
Natural Language Understanding (NLU) components are used in Dialog Systems (DS) to perform intent detection and entity extraction. In this work, we introduce a technique that exploits the inherent relationships between intents and entities to enhance the performance of NLU systems. The proposed method relies on a carefully crafted set of rules that formally express these relationships. By applying these rules, we resolve inconsistencies in the NLU output, leading to improved accuracy and reliability. We implemented the proposed method as an NLU component in the Rasa framework and used our own conversational dataset, AWPS, to evaluate the improvement. We then validated the results on three other commonly used datasets: ATIS, SNIPS, and NLU-Benchmark. The experimental results show that the proposed method has a positive impact on the semantic accuracy metric, reaching an improvement of 12.6% on AWPS when training with a small amount of data. Furthermore, the practical application of the proposed method can easily be extended to other Task-Oriented Dialog Systems (T-ODS) to boost their performance and enhance user satisfaction.
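For intuition only, the sketch below shows one possible way intent-entity consistency rules could be applied as a post-processing step over a Rasa-style NLU parse. It is a minimal illustration, not the rule formalism or implementation described in the paper; the rule table (EXPECTED_ENTITIES), the check_consistency function, and the confidence-penalty behaviour are assumptions made for this example.

```python
# Illustrative sketch (not the authors' implementation): post-process an NLU
# result with hand-written intent-entity consistency rules.
from typing import Dict, List

# Hypothetical rules: each intent lists the entity types it is expected to carry.
EXPECTED_ENTITIES: Dict[str, List[str]] = {
    "book_flight": ["origin", "destination", "date"],
    "get_weather": ["location"],
}

def check_consistency(nlu_output: Dict) -> Dict:
    """Flag an intent whose expected entities are missing and down-rank it."""
    intent = nlu_output["intent"]["name"]
    found = {e["entity"] for e in nlu_output.get("entities", [])}
    missing = set(EXPECTED_ENTITIES.get(intent, [])) - found
    if missing:
        # Inconsistency detected: lower the intent confidence so a downstream
        # dialog policy can ask for clarification instead of acting on a weak parse.
        nlu_output["intent"]["confidence"] *= 0.5
        nlu_output["consistency_warnings"] = sorted(missing)
    return nlu_output

# Example Rasa-style NLU output with a missing "origin" and "date" entity.
parsed = {
    "text": "I want to fly to Paris",
    "intent": {"name": "book_flight", "confidence": 0.91},
    "entities": [{"entity": "destination", "value": "Paris"}],
}
print(check_consistency(parsed))
```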
Funder
Agencia Estatal de Investigación
Ministerio de Ciencia e Innovación
European Social Fund
Valencian regional government
European Union “NextGenerationEU”/PRTR
Universitat de Valencia
Publisher
Springer Science and Business Media LLC
Cited by
1 article.