Affiliation:
1. University of California Berkeley, Interdisciplinary Studies
Abstract
Artificial Intelligence (AI) systems are increasingly used by the US federal government to replace or support decision-making. AI is a computer-based system trained to recognize patterns in data and to apply those patterns to form predictions about new data for a specific task. AI is often viewed as a neutral technological tool, bringing efficiency, objectivity, and accuracy to administrative functions, citizen access to services, and regulatory enforcement. However, AI can also encode and amplify the biases of society. Choices in design, implementation, and use can embed existing racial inequalities into AI, producing a racially biased system that generates inaccurate predictions or harmful consequences for racial groups. Racially discriminatory AI systems have already affected public systems such as criminal justice, healthcare, finance, and housing. This memo addresses the primary causes of the development, deployment, and use of racially biased AI systems and suggests three responses to ensure that federal agencies realize the benefits of AI while protecting against racially disparate impact. There are three actions that federal agencies must take to prevent racial bias: 1) increase racial diversity among AI designers, 2) implement AI impact assessments, and 3) establish procedures for staff to contest automated decisions. Each proposal addresses a different stage in the lifecycle of AI used by federal agencies and helps align US policy with the Organisation for Economic Co-operation and Development (OECD) Principles on Artificial Intelligence.
Publisher
Journal of Science Policy and Governance, Inc.
Cited by
12 articles.