Affiliations:
1. Virginia Tech, Arlington, VA
2. InterDigital, Los Altos, CA
Abstract
Recent advancements in deep learning techniques have transformed the area of semantic text matching (STM). However, most state-of-the-art models are designed to operate with short documents such as tweets, user reviews, and comments. These models have fundamental limitations when applied to long-form documents such as scientific papers, legal documents, and patents. Handling such long documents poses three primary challenges: (i) the same word can carry different contexts throughout the document; (ii) two documents may share small sections of contextually similar text while the remaining text is dissimilar, which defies the basic understanding of "similarity"; and (iii) a single global similarity measure is too coarse to capture the heterogeneity of the document content. In this article, we describe CoLDE (Contrastive Long Document Encoder), a transformer-based framework that addresses these challenges and allows for interpretable comparisons of long documents. CoLDE uses unique positional embeddings and a multi-headed chunkwise attention layer in conjunction with a supervised contrastive learning framework to capture similarity at three different levels: (i) high-level similarity scores between a pair of documents, (ii) similarity scores between different sections within and across documents, and (iii) similarity scores between different chunks in the same document and across other documents. These fine-grained similarity scores aid in better interpretability. We evaluate CoLDE on three long-document datasets: ACL Anthology publications, Wikipedia articles, and USPTO patents. Besides outperforming state-of-the-art methods on the document matching task, CoLDE is robust to changes in document length and to text perturbations, and provides interpretable results. The code for the proposed model is publicly available at https://github.com/InterDigitalInc/CoLDE.
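To make the abstract's two core ingredients concrete, the following is a minimal PyTorch sketch of (a) chunk-level positional embeddings with multi-headed attention across chunks and (b) a supervised contrastive loss in the style of Khosla et al. (2020). It assumes a HuggingFace-style BERT chunk encoder; all class and parameter names here (ChunkwiseAttentionEncoder, supervised_contrastive_loss, max_chunks, etc.) are illustrative assumptions, not CoLDE's actual implementation, which is available in the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChunkwiseAttentionEncoder(nn.Module):
    """Encodes a long document as a sequence of chunk embeddings and
    attends across chunks to produce a single document embedding."""

    def __init__(self, chunk_encoder, hidden_dim=768, num_heads=8, max_chunks=32):
        super().__init__()
        self.chunk_encoder = chunk_encoder  # e.g., a BERT-style encoder (assumption)
        # Positional embeddings over chunk indices, so the model retains
        # where in the document each chunk came from.
        self.chunk_pos = nn.Embedding(max_chunks, hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, chunk_input_ids, chunk_attention_mask):
        # chunk_input_ids: (num_chunks, seq_len) token ids, one row per chunk.
        out = self.chunk_encoder(input_ids=chunk_input_ids,
                                 attention_mask=chunk_attention_mask)
        chunk_emb = out.last_hidden_state[:, 0]  # [CLS] vector per chunk
        idx = torch.arange(chunk_emb.size(0), device=chunk_emb.device)
        x = (chunk_emb + self.chunk_pos(idx)).unsqueeze(0)  # (1, chunks, hidden)
        # Multi-headed attention across chunks; the returned weights act as
        # chunk-to-chunk similarity scores, useful for interpretability.
        ctx, chunk_attn = self.attn(x, x, x)
        doc_emb = ctx.mean(dim=1).squeeze(0)  # pooled document embedding
        return doc_emb, chunk_emb, chunk_attn.squeeze(0)

def supervised_contrastive_loss(doc_emb, labels, temperature=0.07):
    """Supervised contrastive loss (Khosla et al., 2020): documents that
    share a label are pulled together, all others are pushed apart."""
    z = F.normalize(doc_emb, dim=1)
    sim = z @ z.t() / temperature                    # pairwise cosine logits
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    logits = sim.masked_fill(self_mask, float('-inf'))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average log-probability of each anchor's positives.
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(1)
    return -(pos_log_prob / pos_mask.sum(1).clamp(min=1)).mean()
```

In this sketch, chunk_attn plays the role of the chunk-level similarity scores the abstract describes, while the contrastive loss operates on the pooled document embeddings; the released CoLDE code should be consulted for the actual architecture and training details.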
Funder
US National Science Foundation
Publisher
Association for Computing Machinery (ACM)
Cited by 3 articles.
1. HyperMatch: Long-form text matching via hypergraph convolutional networks. Knowledge and Information Systems, 2024-07-12.
2. Learning Entangled Interactions of Complex Causality via Self-Paced Contrastive Learning. ACM Transactions on Knowledge Discovery from Data, 2023-12-09.
3. Natural Language Processing for Author Style Detection. 2023 IEEE 23rd International Symposium on Computational Intelligence and Informatics (CINTI), 2023-11-20.