Affiliation:
1. Computer Science Department, Oberlin College, Oberlin, Ohio, USA
2. School of Computing, University of Nebraska–Lincoln, Lincoln, Nebraska, USA
3. School of Computing, University of Georgia, Athens, Georgia, USA
Abstract
In many real-world applications of AI, the sets of actors and tasks are not constant but instead change over time. Robots tasked with suppressing wildfires eventually exhaust their limited suppressant resources and must temporarily disengage from the collaborative work to recharge, or they may become damaged and leave the environment permanently. In a large business organization, objectives and goals change with the market, requiring workers to adapt and perform different sets of tasks over time. We call such multiagent systems (MAS) open agent systems (OASYS); the openness of the sets of agents and tasks necessitates new capabilities and modeling for decision making compared to planning and learning in closed environments. In this article, we discuss three notions of openness: agent openness, task openness, and type openness. We also review past and current research addressing the novel challenges brought about by openness in OASYS, share lessons learned from these efforts, and suggest promising directions for future work in this area. Finally, we encourage the community to engage and participate in this area of MAS research to address critical real-world problems in applying AI to enhance our daily lives.
Funder
Division of Information and Intelligent Systems