Comparing Fine-Tuning and Prompt Engineering for Multi-Class Classification in Hospitality Review Analysis

Authors:

Ive Botunac 1, Marija Brkić Bakarić 1, Maja Matetić 1

Affiliation:

1. Faculty of Informatics and Digital Technologies, University of Rijeka, Radmile Matejčić 2, 51000 Rijeka, Croatia

Abstract

This study compares the effectiveness of fine-tuning Transformer models, specifically BERT, RoBERTa, DeBERTa, and GPT-2, against prompt engineering in LLMs such as ChatGPT and GPT-4 for multi-class classification of hotel reviews. As the hospitality industry increasingly relies on online customer feedback to improve services and shape marketing strategies, analyzing this feedback accurately is crucial. Our research employs a multi-task learning framework to simultaneously conduct sentiment analysis and categorize reviews into aspects such as service quality, ambiance, and food. We assess how well fine-tuned Transformer models and prompt-engineered LLMs process and understand the complex user-generated content prevalent in the hospitality industry. The results show that fine-tuned models, particularly RoBERTa, are more adept at the classification tasks owing to their deep contextual processing and faster execution times. In contrast, while ChatGPT and GPT-4 excel in sentiment analysis by better capturing the nuances of human emotion, they require more computational power and longer processing times. Our findings support the hypothesis that fine-tuned models can achieve better results and faster execution than prompt-engineered LLMs for multi-class classification of hospitality reviews. The study suggests that selecting the appropriate NLP model depends on the specific needs of the task, balancing computational efficiency against the depth of sentiment analysis required for actionable insights in hospitality management.
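To make the multi-task fine-tuning setup concrete, the sketch below shows one way a shared RoBERTa encoder could feed two task-specific heads, one for sentiment and one for review aspect. This is a minimal illustration assuming the Hugging Face transformers library and PyTorch; the class name, label counts, and example review are hypothetical and are not taken from the paper.

```python
# Minimal multi-task sketch (hypothetical; the paper does not publish code).
# Assumes Hugging Face `transformers` and PyTorch; label counts are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskReviewClassifier(nn.Module):
    """Shared RoBERTa encoder with two heads: sentiment
    (negative/neutral/positive) and review aspect
    (service quality / ambiance / food)."""
    def __init__(self, encoder_name="roberta-base", n_sentiments=3, n_aspects=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.sentiment_head = nn.Linear(hidden, n_sentiments)
        self.aspect_head = nn.Linear(hidden, n_aspects)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # first-token (<s>) representation
        return self.sentiment_head(cls), self.aspect_head(cls)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = MultiTaskReviewClassifier()
batch = tokenizer(["The staff were friendly but the room was noisy."],
                  return_tensors="pt", truncation=True, padding=True)
sent_logits, aspect_logits = model(batch["input_ids"], batch["attention_mask"])
# Training would typically sum one cross-entropy loss per head, e.g.
# loss = ce(sent_logits, sent_labels) + ce(aspect_logits, aspect_labels)
```

For comparison, a hedged sketch of the prompt-engineering route follows. The exact prompts used in the study are not reproduced here, so the instruction text, model name, and JSON output format are assumptions for illustration only; the snippet assumes the openai Python client and an OPENAI_API_KEY in the environment.

```python
# Hypothetical prompt-engineering sketch, not the study's actual prompts.
from openai import OpenAI

client = OpenAI()

def classify_review(review: str) -> str:
    prompt = (
        "Classify the hotel review below.\n"
        "Return JSON with keys 'sentiment' (negative/neutral/positive) and "
        "'aspect' (service quality/ambiance/food).\n\n"
        f"Review: {review}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep labels as deterministic as possible
    )
    return resp.choices[0].message.content

print(classify_review("The staff were friendly but the room was noisy."))
```

The contrast between the two sketches mirrors the paper's trade-off: the fine-tuned model runs locally with a fixed, fast forward pass, while the prompted LLM depends on API latency and larger compute but needs no task-specific training.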

Publisher

MDPI AG

