Affiliation:
1. Department of Political Science, University of Zurich, Zurich 8050, Switzerland
Abstract
Many NLP applications require manual text annotations for a variety of tasks, notably to train classifiers or evaluate the performance of unsupervised models. Depending on the size and complexity of the task, the annotations may be produced by crowd workers on platforms such as MTurk or by trained annotators, such as research assistants. Using four samples of tweets and news articles (n = 6,183), we show that ChatGPT outperforms crowd workers for several annotation tasks, including relevance, stance, topic, and frame detection. Across the four datasets, the zero-shot accuracy of ChatGPT exceeds that of crowd workers by about 25 percentage points on average, while ChatGPT’s intercoder agreement exceeds that of both crowd workers and trained annotators for all tasks. Moreover, the per-annotation cost of ChatGPT is less than $0.003, about thirty times cheaper than MTurk. These results demonstrate the potential of large language models to drastically increase the efficiency of text classification.
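As an illustration of the workflow the abstract describes, the sketch below sends a single text to the OpenAI chat API for zero-shot labeling. It is a minimal sketch only: the model name, prompt wording, temperature, and the binary relevance label set are illustrative assumptions, not the paper's exact configuration.

# Minimal sketch of zero-shot text annotation via the OpenAI chat API.
# Prompt wording, label set, model, and temperature are illustrative
# assumptions, not the authors' exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["RELEVANT", "IRRELEVANT"]  # hypothetical binary relevance task

def annotate(text: str) -> str:
    """Ask the model to assign one label to a single text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.2,  # low temperature for more consistent labels
        messages=[
            {"role": "system",
             "content": "You annotate short texts for a social science study."},
            {"role": "user",
             "content": (f"Label the following text as {' or '.join(LABELS)}."
                         f"\n\nText: {text}\n\nLabel:")},
        ],
    )
    return response.choices[0].message.content.strip()

print(annotate("Congress votes today on the new content moderation bill."))

In practice each text would be sent once per annotation task (relevance, stance, topic, frame), and accuracy and intercoder agreement would then be computed against gold-standard labels.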
Funder
EC | European Research Council
Publisher
Proceedings of the National Academy of Sciences
Cited by
185 articles.