Authors:
Bachmann Dominik, van der Wal Oskar, Chvojka Edita, Zuidema Willem H., van Maanen Leendert, Schulz Katrin
Abstract
To prevent ordinary people from being harmed by natural language processing (NLP) technology, finding ways to measure the extent to which a language model is biased (e.g., regarding gender) has become an active area of research. One popular class of NLP bias measures is bias benchmark datasets: collections of test items that are meant to assess a language model’s preference for stereotypical versus non-stereotypical language. In this paper, we argue that such bias benchmarks should be assessed with models from the psychometric framework of item response theory (IRT). Specifically, we pair an introduction to basic IRT concepts and models with a discussion of how they could be relevant to the evaluation, interpretation, and improvement of bias benchmark datasets. Regarding evaluation, IRT provides us with methodological tools for assessing the quality both of individual test items (e.g., the extent to which an item can differentiate highly biased from less biased language models) and of benchmarks as a whole (e.g., the extent to which the benchmark allows us to assess not only severe but also subtle levels of model bias). Through such diagnostic tools, the quality of benchmark datasets could be improved, for example by deleting or reworking poorly performing items. Finally, regarding interpretation, we argue that IRT models’ estimates of language model bias are conceptually superior to traditional accuracy-based evaluation metrics, as the former take into account more information than just whether or not a language model provided a biased response.
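As a point of reference for readers unfamiliar with IRT, the formula below is a minimal sketch of the kind of model the abstract alludes to, assuming the standard two-parameter logistic (2PL) formulation; the paper itself may use a different IRT variant. Here the probability that language model j gives a stereotypical (biased) response to item i depends on the model's latent bias level, the item's discrimination, and the item's difficulty.

% Illustrative 2PL IRT model (an assumption for exposition; not necessarily the exact model used in the paper)
% \theta_j : latent bias level of language model j
% a_i     : discrimination of item i (how sharply it separates more-biased from less-biased models)
% b_i     : difficulty/location of item i (the bias level at which a stereotypical response becomes likely)
P(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp\bigl(-a_i(\theta_j - b_i)\bigr)}

Under this reading, items with near-zero a_i carry little diagnostic information and are natural candidates for deletion or reworking, while the spread of b_i values determines whether the benchmark can detect subtle as well as severe levels of model bias.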
Funder
Nederlandse Organisatie voor Wetenschappelijk Onderzoek
Publisher
Springer Science and Business Media LLC