Abstract
Perhaps the most widely touted of GPT-4's at-launch, zero-shot capabilities has been its reported 90th-percentile performance on the Uniform Bar Exam (UBE). This paper begins by investigating the methodological challenges in documenting and verifying the 90th-percentile claim, presenting four sets of findings that indicate that OpenAI's estimates of GPT-4's UBE percentile are overinflated. First, although GPT-4's UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population. Second, data from a recent July administration of the same exam suggests that GPT-4's overall UBE performance was below the 69th percentile, and ~48th percentile on essays. Third, examining official NCBE data and using several conservative statistical assumptions, GPT-4's performance against first-time test-takers is estimated to be ~62nd percentile, including ~42nd percentile on essays. Fourth, when examining only those who passed the exam (i.e., licensed or license-pending attorneys), GPT-4's performance is estimated to drop to ~48th percentile overall, and ~15th percentile on essays. In addition to investigating the validity of the percentile claim, the paper also investigates the validity of GPT-4's reported scaled UBE score of 298. The paper successfully replicates the MBE score, but highlights several methodological issues in the grading of the MPT + MEE components of the exam, which call into question the validity of the reported essay score. Finally, the paper investigates the effect of different hyperparameter combinations on GPT-4's MBE performance, finding no significant effect of adjusting temperature settings, but a significant benefit of few-shot chain-of-thought prompting over basic zero-shot prompting. Taken together, these findings carry timely insights for the desirability and feasibility of outsourcing legally relevant tasks to AI models, and for the importance of AI developers implementing rigorous and transparent capabilities evaluations to help secure safe and trustworthy AI.
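The percentile claims above reduce to a single calculation: locating a scaled score within a distribution of human test-taker scores. As a rough illustration (not the paper's actual method or data), the sketch below assumes scores are approximately normally distributed; the mean and standard deviation are hypothetical placeholders, not NCBE or Illinois figures.

# Minimal sketch of percentile estimation for a scaled UBE score.
# The mean and standard deviation below are HYPOTHETICAL placeholders,
# not figures reported in the paper or published by the NCBE.
from scipy.stats import norm

def ube_percentile(score: float, mean: float, sd: float) -> float:
    """Estimate the percentile of a scaled UBE score, assuming the
    population of test-taker scores is approximately normal."""
    return 100 * norm.cdf(score, loc=mean, scale=sd)

# Hypothetical July-administration distribution (placeholder values):
print(f"{ube_percentile(298, mean=280.0, sd=40.0):.1f}th percentile")

The key point the paper makes is that the answer depends heavily on which distribution is used: February repeat test-takers, July first-timers, and exam passers yield very different percentiles for the same score of 298.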
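The hyperparameter comparison can likewise be pictured concretely. Below is a minimal sketch against the openai Python client (v1 interface), assuming an API key in the environment; the placeholder question and few-shot exemplar are invented for illustration and do not reproduce the paper's prompts or any actual MBE item.

# Sketch of a zero-shot vs. few-shot chain-of-thought comparison at a
# fixed temperature. Prompts and the example item are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "<an MBE-style multiple-choice question>"

zero_shot = [
    {"role": "user", "content": f"{QUESTION}\nAnswer with A, B, C, or D."},
]

few_shot_cot = [
    {"role": "user", "content": "<example question>\nThink step by step, then answer."},
    {"role": "assistant", "content": "<worked reasoning>... The answer is C."},
    {"role": "user", "content": f"{QUESTION}\nThink step by step, then answer."},
]

for label, messages in [("zero-shot", zero_shot), ("few-shot CoT", few_shot_cot)]:
    reply = client.chat.completions.create(
        model="gpt-4", messages=messages, temperature=0.0
    )
    print(label, "->", reply.choices[0].message.content)

Sweeping the temperature argument over a grid in such a loop is the kind of hyperparameter comparison the abstract describes as showing no significant effect, in contrast to the prompting-format manipulation.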
Funder
Massachusetts Institute of Technology
Publisher
Springer Science and Business Media LLC
Cited by
1 article.