Abstract
The rapid growth of social networks has made a wealth of user-generated content publicly available. These data hold potential for many applications, including the analysis of comments and reviews through text mining. This study uses IndoBERT, a variant of the Bidirectional Encoder Representations from Transformers (BERT) model pretrained specifically for Indonesian (Bahasa Indonesia). The goal is to improve performance on the Indonesian natural language understanding (IndoNLU) benchmark, specifically on the sentiment analysis and emotion classification tasks. For both tasks, a hybrid architecture was evaluated that feeds the sum of the last four hidden layers of the IndoBERT model into combinations of a bidirectional long short-term memory (BiLSTM) network, a bidirectional gated recurrent unit (BiGRU) network, and an attention mechanism. Model performance was assessed using the F1-score metric. The experimental results show that the proposed model achieves an accuracy of 93% for sentiment analysis and 78% for emotion classification on the IndoNLU benchmark dataset. The results also indicate that the best performance on each task was obtained with a different hybrid configuration.
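As a rough illustration of the hybrid architecture the abstract describes, the sketch below sums the last four hidden layers of a BERT-style encoder and passes the result through a BiLSTM, a BiGRU, and an attention-pooling head. The layer sizes, the additive attention form, and the class count are assumptions for illustration, not the paper's exact configuration; a random tensor stands in for IndoBERT's hidden states so the sketch runs without downloading the model.

```python
import torch
import torch.nn as nn

class HybridHead(nn.Module):
    """Sketch of a BiLSTM + BiGRU + attention head over summed BERT layers.
    Dimensions (rnn=128, classes=3) are illustrative assumptions."""

    def __init__(self, hidden=768, rnn=128, classes=3):
        super().__init__()
        self.bilstm = nn.LSTM(hidden, rnn, batch_first=True, bidirectional=True)
        self.bigru = nn.GRU(2 * rnn, rnn, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * rnn, 1)   # additive attention scores
        self.out = nn.Linear(2 * rnn, classes)

    def forward(self, hidden_states):
        # hidden_states: tuple of per-layer tensors, each (batch, seq, hidden),
        # as returned by a HuggingFace BERT with output_hidden_states=True.
        x = torch.stack(hidden_states[-4:]).sum(dim=0)  # sum last 4 layers
        x, _ = self.bilstm(x)
        x, _ = self.bigru(x)
        weights = torch.softmax(self.attn(x), dim=1)    # (batch, seq, 1)
        pooled = (weights * x).sum(dim=1)               # attention pooling
        return self.out(pooled)

# Stand-in for IndoBERT outputs: 13 tensors (embeddings + 12 layers).
states = tuple(torch.randn(2, 16, 768) for _ in range(13))
logits = HybridHead()(states)
print(logits.shape)  # torch.Size([2, 3])
```

In practice the tuple of hidden states would come from the IndoBERT encoder called with `output_hidden_states=True`, and separate head configurations would be trained for the sentiment and emotion tasks.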
Funder
Kementerian Pendidikan, Kebudayaan, Riset, dan Teknologi