Abstract
In this study, we analyze "discrimination", "bias", "fairness", and "trustworthiness" as working variables in the context of the social impact of AI. We identify a set of specialized variables, such as security, privacy, and responsibility, that are used to operationalize the principles of the Principled AI International Framework. These variables are defined so that they contribute to others of more general scope, such as the ones examined here, in what appears to be a generalization–specialization relationship. Our aim is to understand how available notions of bias, discrimination, and fairness, together with the related variables assured during the software project lifecycle (security, privacy, responsibility, etc.), can be used when developing trustworthy algorithmic decision-making systems (ADMS). Because the Principled AI International Framework approaches bias, discrimination, and fairness with a mainly operational interest, we include sources from outside the framework to complement their study, and the study of their relationships with each other, from a conceptual standpoint.
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by
36 articles.