Building datasets to support information extraction and structure parsing from electronic theses and dissertations
Published: 2024-05-03
Issue: 2
Volume: 25
Pages: 175-196
ISSN: 1432-5012
Container-title: International Journal on Digital Libraries
Language: en
Short-container-title: Int J Digit Libr
Authors: Ingram, William A.; Wu, Jian; Kahu, Sampanna Yashwant; Manzoor, Javaid Akbar; Banerjee, Bipasha; Ahuja, Aman; Choudhury, Muntabir Hasan; Salsabil, Lamia; Shields, Winston; Fox, Edward A.
Abstract
Despite the millions of electronic theses and dissertations (ETDs) publicly available online, digital library services for ETDs have not evolved past simple search and browse at the metadata level. We need better digital library services that allow users to discover and explore the content buried in these long documents. Recent advances in machine learning have shown promising results for decomposing documents into their constituent parts, but these models and techniques require data for training and evaluation. In this article, we present high-quality datasets to train, evaluate, and compare machine learning methods on tasks specifically suited to identifying and extracting key elements of ETD documents. We explain how we construct the datasets by manually labeling data or by deriving labeled data through synthetic processes. We demonstrate how our datasets can be used to develop downstream applications and to evaluate, retrain, or fine-tune pre-trained machine learning models. We describe our ongoing work to compile benchmark datasets and exploit machine learning techniques to build intelligent digital libraries for ETDs.
Funder: Institute of Museum and Library Services
Publisher: Springer Science and Business Media LLC