Affiliation:
1. Department of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Pakistan
2. Institute of CS and IT, The Women University Multan, Pakistan
3. School of Computer Science, University College Dublin, Ireland
4. University of North Texas, USA
5. Department of Information and Communication Engineering, Yeungnam University, Korea
Abstract
OpenAI's ChatGPT, a large language model chatbot, has attracted a great deal of attention due to its popularity and impressive performance on many natural language processing tasks. ChatGPT produces strong answers to a wide range of real-world human questions and generates human-like text. At this early stage, the technology has both strengths and weaknesses, and users have been reporting their first impressions of its features; this feedback is essential for recognizing and fixing its shortcomings and issues. This study uses an Arabic dataset of ChatGPT-related tweets to automatically identify user opinions and sentiments about the technology. Although tweet sentiment analysis has been studied extensively for English, languages such as Arabic have received far less attention, and the existing literature on Arabic tweet sentiment analysis has focused mainly on machine learning and deep learning models. A total of 27,780 unstructured tweets were collected from Twitter with the Tweepy and snscrape Python libraries using hashtags such as #ChatGPT, #OpenAI, #Chatbot, and #ChatGPT3. The dataset is preprocessed and labeled as positive, negative, or neutral using the TextBlob Arabic Python library. To improve model performance and reduce computational complexity, the unstructured tweets are converted into a structured, normalized form: missing values, URLs and HTML tags, stop words, punctuation, diacritics, elongations, and numeric values have no impact on model performance and only increase computational cost, so they are removed with Python preprocessing libraries to enhance text quality and consistency. The study applies transformer-based models, including RoBERTa, XLNet, and DistilBERT, to classify the tweets automatically. In addition, a hybrid transformer-based model is proposed to obtain better results: the hidden outputs of RoBERTa and BERT are combined through a concatenation layer, followed by a dense hidden layer with ReLU activation to introduce non-linearity and a softmax output layer for multiclass classification. The hybrid model differs from the individual state-of-the-art models in that combining the complementary text-classification strengths of both encoders reduces bias and improves overall results, whereas the individual models make less accurate predictions. Experiments show that the proposed hybrid model achieves 96.02% accuracy, 100% precision on negative tweets, and 99% recall on neutral tweets, clearly outperforming the existing state-of-the-art models.
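The abstract lists the cleaning steps (URLs, HTML tags, punctuation, diacritics, elongations, numerics) and the TextBlob-based polarity labeling into three classes. The snippet below is a minimal sketch of that pipeline, not the authors' code: TextBlob's stock sentiment analyzer targets English, so it assumes either a prior translation step or an Arabic-capable analyzer plugged into TextBlob, and the zero polarity thresholds are illustrative assumptions.

```python
# Sketch of tweet cleaning and three-way sentiment labeling as described in the
# abstract. Assumes an Arabic-capable (or translated) TextBlob sentiment backend.
import re
from textblob import TextBlob

ARABIC_DIACRITICS = re.compile(r"[\u0617-\u061A\u064B-\u0652]")

def clean_tweet(text: str) -> str:
    """Apply the preprocessing steps listed in the abstract."""
    text = re.sub(r"http\S+|www\.\S+", " ", text)   # URLs
    text = re.sub(r"<[^>]+>", " ", text)            # HTML tags
    text = ARABIC_DIACRITICS.sub("", text)          # diacritics
    text = re.sub(r"(.)\1{2,}", r"\1", text)        # elongations (repeated letters)
    text = re.sub(r"[^\w\s]", " ", text)            # punctuation
    text = re.sub(r"\d+", " ", text)                # numeric values
    return re.sub(r"\s+", " ", text).strip()

def label_tweet(text: str) -> str:
    """Map TextBlob polarity in [-1, 1] to positive / negative / neutral."""
    polarity = TextBlob(text).sentiment.polarity
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"

print(label_tweet(clean_tweet("ChatGPT gives surprisingly helpful answers!!!")))
```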
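The hybrid architecture is described as concatenating the hidden outputs of RoBERTa and BERT, adding a dense ReLU hidden layer, and finishing with a softmax layer for the three classes. A minimal Keras/Transformers sketch of that wiring is shown below; the checkpoint names, sequence length, hidden size, and optimizer settings are assumptions for illustration (Arabic or multilingual checkpoints would be substituted in practice), not the authors' exact implementation.

```python
# Sketch of the hybrid RoBERTa + BERT classifier: concatenated encoder outputs,
# a ReLU dense hidden layer, and a softmax output for 3 sentiment classes.
import tensorflow as tf
from transformers import TFBertModel, TFRobertaModel

MAX_LEN = 64       # assumed maximum token length
NUM_CLASSES = 3    # positive, negative, neutral

bert = TFBertModel.from_pretrained("bert-base-uncased")        # placeholder checkpoint
roberta = TFRobertaModel.from_pretrained("roberta-base")       # placeholder checkpoint

# Each encoder uses its own tokenizer, hence separate input tensors.
bert_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="bert_input_ids")
bert_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="bert_attention_mask")
rob_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="roberta_input_ids")
rob_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="roberta_attention_mask")

# Take the first-token hidden state from each encoder as its sentence representation.
bert_vec = bert(input_ids=bert_ids, attention_mask=bert_mask).last_hidden_state[:, 0, :]
rob_vec = roberta(input_ids=rob_ids, attention_mask=rob_mask).last_hidden_state[:, 0, :]

merged = tf.keras.layers.Concatenate()([bert_vec, rob_vec])
hidden = tf.keras.layers.Dense(256, activation="relu")(merged)          # hidden size assumed
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(hidden)

model = tf.keras.Model(inputs=[bert_ids, bert_mask, rob_ids, rob_mask], outputs=outputs)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```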
Publisher
Association for Computing Machinery (ACM)
Cited by
16 articles.