Abstract
Due to the broad accessibility of the internet, people increasingly seek out and consume news via social media because of its low cost, ease of access, and rapid transmission of information. The pervasive use of social media applications in daily life makes them significant information sources. Users can post and share information of any kind with a single click. However, this openness becomes costly and dangerous when non-experts can publish anything about any topic. Fake news is rapidly dominating the dissemination of disinformation, distorting people’s views and knowledge to influence their awareness and decision-making. It is therefore essential to identify falsified information and limit its harmful effects as early as possible. In this paper, we conducted three experiments with machine learning classifiers, deep learning models, and transformers. In all experiments, we relied on word embeddings to extract contextual features from articles. Our experimental results showed that the deep learning models outperformed both the machine learning classifiers and the BERT transformer in terms of accuracy. Moreover, the LSTM and GRU models achieved nearly identical accuracy. We showed that by combining an augmented linguistic feature set with machine or deep learning models, fake news can be identified with high accuracy.
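The pipeline the abstract describes (word-embedding features feeding a classifier) can be illustrated with a minimal sketch. This is not the paper's code: the toy random vectors stand in for pretrained embeddings (e.g. word2vec or GloVe), the nearest-centroid rule stands in for the paper's actual classifiers, and the tiny labelled corpus is invented for demonstration only.

```python
# Illustrative sketch, NOT the paper's implementation: represent each
# article by the average of its word embeddings, then label it by the
# nearest class centroid. Random vectors stand in for real pretrained
# embeddings; the four training texts are toy data.
import math
import random

DIM = 16
rng = random.Random(0)
_vocab = {}

def embed(word):
    # Look up (or lazily create) a toy embedding vector for a token.
    if word not in _vocab:
        _vocab[word] = [rng.gauss(0, 1) for _ in range(DIM)]
    return _vocab[word]

def doc_vector(text):
    # Document feature: the mean of its token embeddings.
    vecs = [embed(t) for t in text.lower().split()]
    return [sum(c) / len(vecs) for c in zip(*vecs)]

def centroid(vectors):
    return [sum(c) / len(vectors) for c in zip(*vectors)]

def classify(text, centroids):
    # Assign the label whose centroid is closest in Euclidean distance.
    v = doc_vector(text)
    return min(centroids, key=lambda lbl: math.dist(v, centroids[lbl]))

# Toy labelled corpus: 0 = real, 1 = fake (invented examples).
train = [
    ("officials confirm budget report released today", 0),
    ("government publishes verified economic data", 0),
    ("shocking miracle cure doctors hate revealed", 1),
    ("celebrity secretly replaced by alien clone", 1),
]
centroids = {
    lbl: centroid([doc_vector(t) for t, l in train if l == lbl])
    for lbl in {l for _, l in train}
}
print(classify("officials confirm verified report", centroids))
```

In the paper's actual experiments the feature vectors instead feed machine learning classifiers, LSTM/GRU networks, or BERT, but the averaged-embedding representation above captures the shared first stage.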
Funder
National Research Foundation of Korea
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
Cited by
24 articles.