Abstract
Purpose
Intent detection (ID) and slot filling (SF) are two important tasks in natural language understanding. ID identifies the main intent of a piece of text, while SF extracts from the input sentence the information relevant to that intent. However, most existing methods perform sentence-level intent detection, which carries a risk of error propagation, and the relationship between ID and SF is not explicitly modeled. To address this problem, this paper proposes a collaborative model of ID and SF for intelligent spoken language understanding called ID-SF-Fusion.

Design/methodology/approach
ID-SF-Fusion uses Bidirectional Encoder Representations from Transformers (BERT) and Bidirectional Long Short-Term Memory (BiLSTM) to extract effective word embeddings and context vectors containing whole-sentence information, respectively. A fusion layer provides intent–slot fusion information for the SF task, so that the relationship between the ID and SF tasks is fully and explicitly modeled. This layer takes the ID result and the slot context vectors as input and produces fusion information that contains both the ID result and slot information. Meanwhile, to further reduce error propagation, ID-SF-Fusion performs word-level ID. Finally, the two tasks of ID and SF are realized by joint optimization training.

Findings
We conducted experiments on two public datasets, Airline Travel Information Systems (ATIS) and Snips. The results show that the Intent ACC score and Slot F1 score of ID-SF-Fusion on the ATIS dataset are 98.0 per cent and 95.8 per cent, respectively, and the two indicators on the Snips dataset are 98.6 per cent and 96.7 per cent, respectively. These results are superior to Slot-Gated, SF-ID Network, Stack-Propagation and other models. In addition, ablation experiments were performed to further analyze and discuss the proposed model.

Originality/value
This paper uses word-level intent detection and introduces intent information into the SF process, yielding a significant improvement on both datasets.
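The fusion idea described above can be illustrated with a minimal numpy sketch. All dimensions, weight matrices and the mean-pooling aggregation below are illustrative assumptions, not details taken from the paper: per-token context vectors stand in for the BERT + BiLSTM encoder output, a per-token intent distribution is averaged to obtain the word-level ID result, and that result is concatenated onto each slot context vector before slot tagging.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions (not from the paper).
seq_len, hidden, n_intents, n_slots = 6, 8, 3, 5

# Stand-in for BERT word embeddings passed through a BiLSTM:
# one context vector per token carrying whole-sentence information.
context = rng.standard_normal((seq_len, hidden))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Word-level intent detection: an intent distribution per token,
# aggregated (here by averaging) into the sentence-level ID result.
W_intent = rng.standard_normal((hidden, n_intents))
token_intent = softmax(context @ W_intent)      # (seq_len, n_intents)
intent_result = token_intent.mean(axis=0)       # (n_intents,)

# Fusion layer: combine the ID result with each slot context vector
# so the SF task is conditioned on the predicted intent.
fused = np.concatenate(
    [context, np.tile(intent_result, (seq_len, 1))], axis=1
)                                               # (seq_len, hidden + n_intents)

# Slot filling over the fused representation.
W_slot = rng.standard_normal((hidden + n_intents, n_slots))
slot_probs = softmax(fused @ W_slot)            # (seq_len, n_slots)
slot_tags = slot_probs.argmax(axis=-1)          # one slot label per token
```

In a trained model the weight matrices would be learned jointly, with the ID and SF losses optimized together; the sketch only shows how intent information flows into the slot-filling step.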