The performance of large language models on quantitative and verbal ability tests: Initial evidence and implications for unproctored high‐stakes testing

Authors:

Hickman, Louis (1); Dunlop, Patrick D. (2); Wolf, Jasper Leo (3)

Affiliations:

1. Department of Psychology, Virginia Tech, Blacksburg, Virginia, USA, and The Wharton School of the University of Pennsylvania, USA

2. Future of Work Institute, Faculty of Business and Law, Curtin University, Perth, Western Australia, Australia

3. Arctic Shores, London, UK

Abstract

Unproctored assessments are widely used in pre‐employment assessment. However, widely accessible large language models (LLMs) pose challenges for unproctored personnel assessments, given that applicants may use them to artificially inflate their scores beyond their true abilities. This may be particularly concerning in cognitive ability tests, which are widely used and traditionally considered to be less fakeable by humans than personality tests. Thus, this study compares the performance of LLMs on two common types of cognitive tests: quantitative ability (number series completion) and verbal ability (using a passage of text to determine whether a statement is true). The tests investigated are used in real‐world, high‐stakes selection. We also examine the performance of the LLMs across different test formats (i.e., open‐ended vs. multiple choice). Further, we contrast the performance of two LLMs (Generative Pretrained Transformers, GPT‐3.5 and GPT‐4) across multiple prompt approaches and "temperature" settings (i.e., a parameter that determines the amount of randomness in the model's output). We found that the LLMs performed well on the verbal ability test but extremely poorly on the quantitative ability test, even when accounting for the test format. GPT‐4 outperformed GPT‐3.5 across both types of tests. Notably, although prompt approaches and temperature settings did affect LLM test performance, those effects were mostly minor relative to differences across tests and language models. We provide recommendations for securing pre‐employment testing against LLM influences. Additionally, we call for rigorous research investigating the prevalence of LLM usage in pre‐employment testing, as well as into how LLM usage affects selection test validity.
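The abstract describes contrasting GPT‐3.5 and GPT‐4 across prompt approaches and temperature settings, but it does not reproduce the prompts, test items, or API calls used. The sketch below is a minimal illustration of that kind of setup, assuming the OpenAI chat-completions Python API; the number-series item, model names, temperature values, and prompt wording are placeholders, not the study's materials.

```python
# Minimal sketch (not the authors' code): query two GPT models at several
# temperature settings with one invented, illustrative number-series item.
# Requires the `openai` Python package (v1.x) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

# Hypothetical quantitative-ability item; the study's real items are proprietary.
ITEM = "Complete the number series: 2, 6, 12, 20, 30, ?"

MODELS = ["gpt-3.5-turbo", "gpt-4"]   # the two LLMs contrasted in the study
TEMPERATURES = [0.0, 0.7, 1.0]        # example values of the randomness parameter

for model in MODELS:
    for temp in TEMPERATURES:
        response = client.chat.completions.create(
            model=model,
            temperature=temp,
            messages=[
                # A simple "direct question" prompt; the paper compared several
                # prompt approaches whose exact wording is not given here.
                {"role": "user", "content": ITEM},
            ],
        )
        answer = response.choices[0].message.content
        print(f"{model} @ temperature={temp}: {answer}")
```

Scoring such outputs against an answer key, repeating calls at each setting, and switching between open-ended and multiple-choice wordings of the items would then support the model-by-prompt-by-temperature comparisons the abstract describes.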

Publisher

Wiley

