Abstract
This study investigates the performance of generative artificial intelligence (AI) in evaluating the acceptance of generative AI technologies within higher education guidelines, and reflects on the implications for educational policy and practice. Drawing on a dataset of guidelines from top-ranked universities, we compared generative AI evaluations with human evaluations, focusing on acceptance, performance expectancy, facilitating conditions, and perceived risk. Our study revealed a strong positive correlation between ChatGPT-rated and human-rated acceptance of generative AI, suggesting that generative AI can accurately reflect human judgment in this context. Further, we found positive associations between ChatGPT-rated acceptance and both performance expectancy and facilitating conditions, and a negative correlation with perceived risk. These results validate generative AI as an evaluation tool and extend the application of the Technology Acceptance Model and the Unified Theory of Acceptance and Use of Technology framework from individual to institutional perspectives.