Cross-Domain Fake News Detection Using a Prompt-Based Approach
Published: 2024-08-08
Issue: 8
Volume: 16
Page: 286
ISSN: 1999-5903
Container-title: Future Internet
Language: en
Author:
Alghamdi Jawaher 1,2; Lin Yuqing 1,3; Luo Suhuai 1
Affiliation:
1. School of Information and Physical Sciences, College of Engineering Science and Environment, University of Newcastle, Newcastle 2308, Australia
2. Department of Computer Science, King Khalid University, Abha 62521, Saudi Arabia
3. School of Sciences, Jimei University, Xiamen 361021, China
Abstract
The proliferation of fake news poses a significant challenge in today’s information landscape, spanning diverse domains and topics and undermining traditional detection methods confined to specific domains. In response, there is growing interest in strategies for detecting cross-domain misinformation. However, traditional machine learning (ML) approaches often struggle with the nuanced contextual understanding required for accurate news classification. To address these challenges, we propose a novel contextualized cross-domain prompt-based zero-shot approach that utilizes a Generative Pre-trained Transformer (GPT) model for fake news detection (FND). In contrast to conventional fine-tuning methods that rely on extensive labeled datasets, our approach emphasizes refining prompt integration and classification logic within the model’s framework. This refinement enhances the model’s ability to accurately classify fake news across diverse domains. Moreover, the approach can be adapted to other tasks simply by modifying the prompt placeholders. Our research advances zero-shot learning by demonstrating the efficacy of prompt-based methods for text classification, particularly in scenarios with limited training data. Through extensive experimentation, we show that our method effectively captures domain-specific features and generalizes well to other domains, outperforming existing models. These findings contribute to ongoing efforts to combat the dissemination of fake news, particularly in environments with severely limited training data, such as online platforms.
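As a concrete illustration of the prompt-based zero-shot classification idea summarized in the abstract, the following minimal Python sketch scores candidate label words with a causal language model using the Hugging Face transformers library. It is an illustrative sketch under stated assumptions, not the authors' implementation: the model name ("gpt2"), the prompt template, and the verbalizer words ("real"/"fake") are placeholders chosen for demonstration.

# Minimal sketch of prompt-based zero-shot fake news classification with a causal LM.
# ASSUMPTIONS: "gpt2" as the backbone, this prompt template, and the "real"/"fake"
# verbalizer words are illustrative choices, not the paper's exact configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Verbalizer: maps each class label to a label word scored by the model.
LABEL_WORDS = {"real": " real", "fake": " fake"}

def classify(news_text: str) -> str:
    # Prompt with a placeholder for the news item; no task-specific fine-tuning.
    prompt = (
        f"News: {news_text}\n"
        "Question: Is this news real or fake?\n"
        "Answer: This news is"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        # Logits for the token that would follow the prompt.
        next_token_logits = model(**inputs).logits[0, -1]
    scores = {}
    for label, word in LABEL_WORDS.items():
        # Use the first sub-token of each label word as its score.
        token_id = tokenizer.encode(word)[0]
        scores[label] = next_token_logits[token_id].item()
    return max(scores, key=scores.get)

print(classify("Scientists confirm the moon is made of cheese."))

Because the classification logic lives entirely in the prompt and the label-word scoring, adapting this sketch to another task only requires editing the prompt placeholder text and the verbalizer entries, which mirrors the adaptability claim made in the abstract.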