Beyond Lexical Boundaries: LLM-Generated Text Detection for Romanian Digital Libraries
Published: 2024-01-25
Journal: Future Internet
Volume: 16, Issue 2, Page 41
ISSN: 1999-5903
Language: English
Authors:
Melania Nitu 1, Mihai Dascalu 1,2
Affiliations:
1. Faculty of Automatic Control and Computers, National University of Science and Technology Politehnica Bucharest, 313 Splaiul Independentei, 060042 Bucharest, Romania
2. Academy of Romanian Scientists, Str. Ilfov, Nr. 3, 050044 Bucharest, Romania
Abstract
Machine-generated content is reshaping the landscape of digital information; hence, ensuring the authenticity of texts within digital libraries has become a paramount concern. This work introduces a corpus of approximately 60k Romanian documents, including human-written samples as well as texts generated using six distinct Large Language Models (LLMs) and three different generation methods. Our experimental dataset covers five domains, namely books, news, legal, medical, and scientific publications. The exploratory text analysis revealed differences between human-authored and artificially generated texts, exposing the intricacies of lexical diversity and textual complexity. Since Romanian is a less-resourced language requiring dedicated detectors for which out-of-the-box solutions do not work, this paper introduces two techniques for discerning machine-generated texts. The first method leverages a Transformer-based model to categorize texts as human- or machine-generated, while the second method extracts and examines linguistic features, selecting the top textual complexity indices via Kruskal–Wallis mean rank and computing burstiness, which are then fed into a machine-learning model based on an extreme gradient-boosting decision tree. The methods show competitive performance, with the first technique outperforming the second one in two out of five domains, reaching an F1 score of 0.96. Our study also includes a text similarity analysis between human-authored and artificially generated texts, coupled with a SHAP analysis to understand which linguistic features contribute most to the classifier's decision.
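As a rough illustration of the second, feature-based approach described above, the sketch below computes a burstiness score, ranks candidate features by their Kruskal–Wallis H statistic, and trains an extreme gradient-boosting classifier. This is a minimal sketch, not the paper's implementation: the burstiness formula (B = (σ − μ)/(σ + μ) over sentence lengths), the feature set, and the toy data are assumptions, and the paper's actual textual complexity indices are not reproduced here.

```python
# Hedged sketch of a feature-based human/machine text detector.
# Assumptions: naive sentence splitting, a two-feature toy dataset,
# and illustrative hyperparameters; none of these come from the paper.
import numpy as np
from scipy.stats import kruskal
from xgboost import XGBClassifier


def burstiness(text: str) -> float:
    """Burstiness of sentence lengths, B = (sigma - mu) / (sigma + mu).
    Values near +1 indicate uneven (bursty) sentences, near -1 very regular ones."""
    lengths = np.array([len(s.split()) for s in text.split(".") if s.strip()])
    if len(lengths) < 2:
        return 0.0
    mu, sigma = lengths.mean(), lengths.std()
    return float((sigma - mu) / (sigma + mu)) if (sigma + mu) > 0 else 0.0


def rank_features(X: np.ndarray, y: np.ndarray) -> list[int]:
    """Order feature columns by Kruskal-Wallis H statistic between the
    human (y == 0) and machine (y == 1) groups, most discriminative first."""
    scores = [kruskal(X[y == 0, j], X[y == 1, j]).statistic for j in range(X.shape[1])]
    return list(np.argsort(scores)[::-1])


# Toy illustration: two hypothetical feature columns (burstiness, mean word length).
X = np.array([[0.42, 4.1], [0.38, 4.3], [0.05, 5.0], [0.08, 4.9]])
y = np.array([0, 0, 1, 1])  # 0 = human-written, 1 = machine-generated

top = rank_features(X, y)  # indices of the most discriminative features
clf = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
clf.fit(X[:, top], y)
print(clf.predict(X[:, top]))
```

In practice, the ranked features would be computed per document over a held-out training split, and the trained classifier's decisions could then be inspected with SHAP values, as the abstract describes.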