Abstract
Human and artificial agents are both committed to learning, and the evaluation of performance is a key driver. This is the case for adaptive feedback, which is generated from the evaluation of performance outcomes. It is also true for feedforward guidance, which results from real-time monitoring of ongoing processes. Augmented agents will learn in both ways. However, these developments signal a shift from historic patterns of learning from performance, which mainly focus on slower, inter-cyclical feedback. Indeed, much human learning occurs in simple increments and takes years to complete. By comparison, artificial agents learn complex lessons with extraordinary speed and precision. Therefore, if collaborative supervision is poor, artificial learning will be fast and complex, while human learning remains relatively sluggish and incremental. Such learning will be distorted, often ambiguous, ambivalent, and potentially dysfunctional. This chapter examines these dilemmas.
Publisher
Springer International Publishing