Affiliations:
1. Oregon State University
2. University of Massachusetts
3. Microsoft Research
Abstract
Autonomous agents acting in the real world often operate based on models that ignore certain aspects of the environment. The incompleteness of any given model – handcrafted or machine acquired – is inevitable due to practical limitations of any modeling technique for complex real-world settings. Due to the limited fidelity of its model, an agent's actions may have unexpected, undesirable consequences during execution. Learning to recognize and avoid such negative side effects (NSEs) of an agent's actions is critical to improving the safety and reliability of autonomous systems. Mitigating NSEs is an emerging research topic that is attracting increased attention due to the rapid growth in the deployment of AI systems and their broad societal impacts. This article provides a comprehensive overview of different forms of NSEs and the recent research efforts to address them. We identify key characteristics of NSEs, highlight the challenges in avoiding NSEs, and discuss recently developed approaches, contrasting their benefits and limitations. The article concludes with a discussion of open questions and suggestions for future research directions.
Funder
Semiconductor Research Corporation
Cited by
3 articles.