Abstract
Can a large language model produce humor? Past research has focused on anecdotal examples of large language models succeeding or failing at producing humor. These examples, while interesting, do not examine ChatGPT’s humor production in ways directly comparable to human performance, nor do they shed light on how funny ChatGPT is to the general public. To provide a systematic test, we asked ChatGPT 3.5 and laypeople to respond to the same humor prompts (Study 1). We also asked ChatGPT 3.5 to generate humorous satirical headlines in the style of The Onion and compared them to headlines published by that satirical outlet, which were written by professional comedy writers (Study 2). In both studies, human participants rated the funniness of the human- and A.I.-produced responses without knowing their source. ChatGPT 3.5-produced jokes were rated as funny as, or funnier than, human-produced jokes, regardless of the comedic task and the expertise of the human comedy writer.
Funder
USC Dornsife Mind and Society Center
Publisher
Public Library of Science (PLoS)