Tweet sentiment quantification: An experimental re-evaluation

Authors:

Alejandro Moreo, Fabrizio Sebastiani

Abstract

Sentiment quantification is the task of training, by means of supervised learning, estimators of the relative frequency (also called “prevalence”) of sentiment-related classes (such as Positive, Neutral, Negative) in a sample of unlabelled texts. This task is especially important when these texts are tweets, since the final goal of most sentiment classification efforts carried out on Twitter data is actually quantification (and not the classification of individual tweets). It is well known that solving quantification by means of “classify and count” (i.e., by classifying all unlabelled items by means of a standard classifier and counting the items that have been assigned to a given class) is less than optimal in terms of accuracy, and that more accurate quantification methods exist. Gao and Sebastiani (2016) carried out a systematic comparison of quantification methods on the task of tweet sentiment quantification. In hindsight, we observe that the experimentation carried out in that work was weak, and that the reliability of the conclusions that were drawn from the results is thus questionable. We here re-evaluate those quantification methods (plus a few more modern ones) on exactly the same datasets, this time following a now consolidated and robust experimental protocol (which also involves simulating the presence, in the test data, of class prevalence values very different from those of the training set). This experimental protocol (even without counting the newly added methods) involves a number of experiments 5,775 times larger than that of the original study. Due to the above-mentioned presence, in the test data, of samples characterised by class prevalence values very different from those of the training set, the results of our experiments are dramatically different from those obtained by Gao and Sebastiani, and provide a different, much more solid understanding of the relative strengths and weaknesses of different sentiment quantification methods.
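The contrast the abstract draws between “classify and count” and more accurate alternatives can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy illustration (binary labels, illustrative helper names), not the paper's experimental pipeline: it implements plain classify and count (CC) and the standard adjusted variant (ACC), which corrects the CC estimate using the classifier's true-positive and false-positive rates.

```python
# Toy sketch of two standard quantification baselines for a binary task.
# Helper names are illustrative; real experiments (e.g., in the paper) use
# full learning pipelines and estimate tpr/fpr via cross-validation.

def classify_and_count(predictions):
    """CC: estimated positive-class prevalence = fraction predicted positive.
    `predictions` is a list of 0/1 classifier outputs for the unlabelled sample."""
    return sum(predictions) / len(predictions)

def adjusted_classify_and_count(predictions, tpr, fpr):
    """ACC: correct the CC estimate using the classifier's true-positive rate
    (tpr) and false-positive rate (fpr), via p = (cc - fpr) / (tpr - fpr),
    clipped to the valid prevalence range [0, 1]."""
    cc = classify_and_count(predictions)
    p = (cc - fpr) / (tpr - fpr)
    return min(max(p, 0.0), 1.0)

# Example: if a classifier with tpr = 0.8 and fpr = 0.2 predicts 50% positives,
# ACC yields (0.5 - 0.2) / (0.8 - 0.2) = 0.5 as the corrected prevalence.
```

ACC's correction is exactly why prevalence shift between training and test data (the condition the re-evaluation simulates) matters: CC inherits the classifier's bias toward the training-set prevalence, while ACC attempts to undo it.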

Funder

Horizon 2020 Framework Programme

Publisher

Public Library of Science (PLoS)

Subject

Multidisciplinary

References (39 articles):

1. A review on quantification learning;P González;ACM Computing Surveys,2017

2. A method of automated nonparametric content analysis for social science;DJ Hopkins;American Journal of Political Science,2010

3. Verbal autopsy methods with multiple causes of death;G King;Statistical Science,2008

4. Machines that learn how to code open-ended survey data;A Esuli;International Journal of Market Research,2010

Cited by 14 articles:

1. MC-SQ and MC-MQ: Ensembles for Multi-Class Quantification;IEEE Transactions on Knowledge and Data Engineering;2024-08

2. Binary quantification and dataset shift: an experimental investigation;Data Mining and Knowledge Discovery;2024-03-18

3. Quantification Over Time;Lecture Notes in Computer Science;2024

4. Multi-Label Quantification;ACM Transactions on Knowledge Discovery from Data;2023-08-10

5. The Road Ahead;The Information Retrieval Series;2023
