Affiliation:
1. Massachusetts Institute of Technology
2. Massachusetts Institute of Technology; Advanced Technology Labs, Adobe Systems Incorporated; and University of Washington
Abstract
Controllers are necessary for physically based synthesis of character animation. However, creating controllers requires either manual tuning or expensive computer optimization. We introduce linear Bellman combination as a method for reusing existing controllers. Given a set of controllers for related tasks, this combination creates a controller that performs a new task. It naturally weights the contribution of each component controller by its relevance to the current state and goal of the system. We demonstrate that linear Bellman combination outperforms naive combination, often succeeding where naive combination fails. Furthermore, the combination is provably optimal for the new task if the component controllers are optimal for their related tasks. We demonstrate the applicability of linear Bellman combination to interactive character control of stepping motions and acrobatic maneuvers.
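To make the idea of relevance-weighted blending concrete, the following is a minimal Python sketch, not the authors' implementation. It assumes each component controller exposes a policy u_i(x) and an optimal cost-to-go v_i(x); the names combine_controllers, policy, and value are illustrative. Controls are blended with state-dependent weights proportional to exp(-v_i(x)), so a controller contributes more where its own task deems the current state desirable.

import numpy as np

def combine_controllers(x, controllers, w):
    # controllers: list of (policy, value) pairs, where policy(x) returns a
    #   control vector and value(x) returns that controller's cost-to-go v_i(x).
    # w: fixed mixing weights chosen to reflect the new task's goal
    #   (here they are simply given; choosing them is part of the method).
    # Desirability z_i = exp(-v_i); low cost-to-go means high relevance.
    z = np.array([np.exp(-value(x)) for _, value in controllers])
    # State-dependent weights: each controller contributes in proportion
    # to its relevance to the current state.
    alpha = w * z
    alpha /= alpha.sum()
    # Combined control is the weighted average of the component controls.
    u = sum(a * policy(x) for a, (policy, _) in zip(alpha, controllers))
    return u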
Funder
Division of Computing and Communication Foundations
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design
Cited by
38 articles.