Can ChatGPT Match the Experts? A Feedback Comparison for Serious Game Development
Published: 2024-06-26
Volume: 11
Issue: 2
Pages: 87-106
ISSN: 2384-8766
Container-title: International Journal of Serious Games
Short-container-title: IJSG
Authors: Tyni Janne, Turunen Aatu, Kahila Juho, Bednarik Roman, Tedre Matti
Abstract
This paper investigates the potential and validity of ChatGPT as a tool for generating meaningful input for the serious game design process. Baseline input was collected from game designers, students, and teachers via surveys, individual interviews, and group discussions based on a description of a simple educational drilling game and its context of use. In these mixed-methods experiments, two recent large language models (ChatGPT 3.5 and 4.0) were prompted with the same description to validate the findings obtained from the expert participants. In addition, the study examined how integrating an expert's role into the prompt (e.g., "Answer as if you were a teacher", "a game designer", or "a student") affected the models' suggestions. The findings of these comparative analyses show that human expert participants and large language models can produce overlapping input for some expert groups. However, the experts emphasized different categories of input and contributed unique viewpoints. This research opens the discussion on the trustworthiness of input generated by large language models for serious game development.
Publisher
Serious Games Society
Cited by
1 article.
1. Editorial, Vol. 11, No. 2; International Journal of Serious Games; 2024-06-26