Affiliations:
1. Cornell University, Ithaca, NY, USA
2. University of the Witwatersrand, Johannesburg, South Africa
3. Brown University, Providence, RI, USA
Abstract
We present a framework for the automatic encoding and repair of high-level tasks. Given a set of skills a robot can perform, our approach first abstracts sensor data into symbols and then automatically encodes the robot’s capabilities in Linear Temporal Logic (LTL). Using this encoding, a user can specify reactive high-level tasks, for which we can automatically synthesize a strategy that executes on the robot, if the task is feasible. If a task is not feasible given the robot’s capabilities, we present two methods, one enumeration-based and one synthesis-based, for automatically suggesting additional skills for the robot or modifications to existing skills that would make the task feasible. We demonstrate our framework on a Baxter robot manipulating blocks on a table, a Baxter robot manipulating plates on a table, and a Kinova arm manipulating vials, with multiple sensor modalities, including raw images.
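The abstract describes encoding each skill's effect as an LTL constraint over abstracted symbols. A minimal sketch of that idea follows; the formula shape G((skill ∧ pre) → X post) and the predicate names are illustrative assumptions, not the paper's exact encoding.

```python
# Hypothetical sketch (not the authors' implementation): encode a skill's
# precondition/postcondition pair as an LTL-style constraint string over
# symbols abstracted from sensor data.

def encode_skill(name, pre, post):
    """Return an LTL-style formula: globally (G), executing the skill when
    its precondition holds leads in the next step (X) to its postcondition."""
    pre_f = " & ".join(sorted(pre))
    post_f = " & ".join(sorted(post))
    return f"G(({name} & {pre_f}) -> X({post_f}))"

# Toy skill set for a block-manipulation domain (names are assumptions).
skills = {
    "pick_block": ({"block_on_table", "hand_empty"}, {"block_in_hand"}),
    "place_block": ({"block_in_hand"}, {"block_on_table", "hand_empty"}),
}

formulas = [encode_skill(n, pre, post) for n, (pre, post) in skills.items()]
for f in formulas:
    print(f)
```

The conjunction of such formulas, together with a user's task specification, would form the input to a reactive synthesis tool; the paper's actual encoding and repair machinery is richer than this fragment.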
Subject
Applied Mathematics, Artificial Intelligence, Electrical and Electronic Engineering, Mechanical Engineering, Modeling and Simulation, Software