FinKENet: A Novel Financial Knowledge Enhanced Network for Financial Question Matching
Authors:
Guo Yu 1, Liang Ting 2, Chen Zhongpu 1, Yang Binchen 1, Wang Jun 1,3, Zhao Yu 1
Affiliation:
1. Financial Intelligence and Financial Engineering Key Laboratory of Sichuan Province, Fintech Innovation Center, Southwestern University of Finance and Economics, Chengdu 611130, China
2. School of Accounting, Southwestern University of Finance and Economics, Chengdu 611130, China
3. School of Management Science and Engineering, Southwestern University of Finance and Economics, Chengdu 611130, China
Abstract
Question matching is a fundamental task in retrieval-based dialogue systems: it assesses the similarity between a user query and a candidate question. Unfortunately, existing methods focus on improving text-similarity accuracy in the general domain and are not adapted to the financial domain. Financial question matching poses two critical issues: (1) how to accurately model the contextual representation of a financial sentence, and (2) how to accurately represent financial key phrases in an utterance. To address these issues, this paper proposes a novel Financial Knowledge Enhanced Network (FinKENet) that injects financial knowledge into the contextual text representation. Specifically, we propose a multi-level encoder that extracts both sentence-level features and financial phrase-level features, which represents sentences and financial phrases more accurately. Furthermore, we propose a financial co-attention adapter to combine sentence features and financial keyword features. Finally, we design a multi-level similarity decoder to calculate the similarity between queries and questions, and present a cross-entropy-based loss function for model optimization. Experimental results demonstrate the effectiveness of the proposed method on the Ant Financial question matching dataset. In particular, the Recall score improves from 73.21% to 74.90% (1.69% absolute).
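To make the described architecture concrete, the sketch below shows one way a co-attention adapter and a similarity objective of this kind could be implemented in PyTorch. It is a minimal illustration, not the authors' code: the names (`CoAttentionAdapter`, `matching_loss`) and design details (single-head cross-attention, residual fusion, cosine similarity trained with binary cross-entropy) are assumptions; the paper's actual multi-level encoder and decoder are more elaborate.

```python
# Hypothetical sketch; names and design choices are illustrative,
# not taken from the FinKENet paper or any released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoAttentionAdapter(nn.Module):
    """Fuse sentence-level features with financial phrase-level features
    via a single cross-attention step (assumed design)."""

    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)  # sentence tokens -> queries
        self.k_proj = nn.Linear(dim, dim)  # phrase features -> keys
        self.v_proj = nn.Linear(dim, dim)  # phrase features -> values

    def forward(self, sent: torch.Tensor, phrase: torch.Tensor) -> torch.Tensor:
        # sent: (batch, seq_len, dim); phrase: (batch, n_phrases, dim)
        q, k, v = self.q_proj(sent), self.k_proj(phrase), self.v_proj(phrase)
        attn = F.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
        # Residual connection keeps the original sentence context while
        # mixing in phrase-aware (financial-knowledge) information.
        return sent + attn @ v


def matching_loss(query_vec: torch.Tensor,
                  question_vec: torch.Tensor,
                  labels: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between pooled query/question vectors, trained with
    binary cross-entropy (one plausible reading of a cross-entropy-based
    matching loss)."""
    sim = F.cosine_similarity(query_vec, question_vec, dim=-1)
    return F.binary_cross_entropy_with_logits(sim, labels.float())
```

In such a setup, the query and the candidate question would each pass through the encoder and the adapter, then be pooled (e.g., mean pooling over tokens) into `query_vec` and `question_vec` before the similarity is scored.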
Funder
National Natural Science Foundation of China; Sichuan Science and Technology Program; Guanghua Talent Project of Southwestern University of Finance and Economics, and Financial Innovation Center, SWUFE; International Innovation Project; Fundamental Research Funds for the Central Universities
Subject
General Physics and Astronomy