Affiliation:
1. SİVAS BİLİM VE TEKNOLOJİ ÜNİVERSİTESİ
2. OSTİM TEKNİK ÜNİVERSİTESİ
Abstract
Browsing social media is one of the most prevalent online activities. As social media becomes increasingly integrated into daily routines, it opens numerous opportunities for spammers seeking to target users through these platforms. Given the concise and sporadic nature of messages exchanged on social networks, spam detection on them falls within the realm of short-text classification. Addressing such problems effectively requires representing the text appropriately to improve classifier performance. Accordingly, this study uses robust representations derived from contextualized models as part of the feature extraction process in our deep neural network model, which is built upon the Bidirectional Long Short-Term Memory (BLSTM) neural network. The study introduces ALBERT4Spam, a deep learning method for identifying spam on social networking platforms. It harnesses the proven ALBERT model to obtain contextualized word representations, thereby improving the effectiveness of the proposed neural network framework. Random search was used to fine-tune the ALBERT4Spam model's hyperparameters, including the number of BLSTM layers, neuron count, activation function, weight initializer, learning rate, optimizer, and dropout rate, in order to obtain optimal performance. Experiments on three benchmark datasets demonstrate that the proposed model surpasses widely used methods for social network spam detection, achieving precision of 0.98, 0.96, and 0.98 on the Twitter, YouTube, and SMS datasets, respectively.
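The random-search tuning described in the abstract can be sketched as sampling configurations from a discrete search space. The hyperparameter names follow the abstract, but the candidate values below are illustrative assumptions, not the authors' published ranges; in practice each sampled configuration would be used to build and evaluate an ALBERT + BLSTM model.

```python
import random

# Hypothetical search space mirroring the hyperparameters tuned for
# ALBERT4Spam; the candidate values are assumptions for illustration.
SEARCH_SPACE = {
    "blstm_layers": [1, 2, 3],
    "neurons": [64, 128, 256],
    "activation": ["relu", "tanh"],
    "weight_initializer": ["glorot_uniform", "he_normal"],
    "learning_rate": [1e-3, 5e-4, 1e-4],
    "optimizer": ["adam", "rmsprop"],
    "dropout": [0.1, 0.3, 0.5],
}

def sample_configs(n_trials, seed=None):
    """Draw n_trials random hyperparameter configurations from the space."""
    rng = random.Random(seed)
    return [
        {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}
        for _ in range(n_trials)
    ]

# Each config would be passed to a model-building routine and scored on a
# validation split; the best-scoring configuration is kept.
configs = sample_configs(n_trials=10, seed=42)
```

Random search is often preferred over grid search here because the space above already contains 3 × 3 × 2 × 2 × 3 × 2 × 3 = 648 combinations, and sampling a fixed budget of trials keeps the tuning cost predictable.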
Publisher
International Journal of Informatics Technologies