Authors:
Eugene R. Rush, Christoffer Heckman, Kaushik Jayaram, J. Sean Humbert
Abstract
Legged robot control has improved in recent years with the rise of deep reinforcement learning; however, many of the underlying neural mechanisms remain difficult to interpret. Our aim is to leverage bio-inspired methods from computational neuroscience to better understand the neural activity of robust robot locomotion controllers. Consistent with past work, we observe that terrain-based curriculum learning improves agent stability. We study the biomechanical responses and neural activity within our neural network controller by pairing physical disturbances with simultaneous targeted neural ablations. We identify an agile hip reflex that enables the robot to regain its balance and recover from lateral perturbations. Model gradients are employed to quantify the relative degree to which various sensory feedback channels drive this reflexive behavior. We also find that recurrent dynamics are implicated in robust behavior, and we use sampling-based ablation methods to identify the key neurons involved. Our framework combines model-based and sampling-based methods for drawing causal relationships between neural network activity and robust embodied robot behavior.
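The abstract describes two analysis techniques without code: gradient-based attribution of sensory feedback channels and ablation of individual neurons in a recurrent controller. The sketch below is a minimal, hypothetical illustration of that style of analysis in PyTorch; the policy architecture, dimensions, and joint indexing are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical recurrent locomotion policy: observation -> GRU -> joint commands.
class RecurrentPolicy(nn.Module):
    def __init__(self, obs_dim=48, hidden_dim=128, act_dim=12):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, act_dim)

    def forward(self, obs, h=None, ablate_idx=None):
        out, h = self.rnn(obs, h)
        if ablate_idx is not None:
            # Targeted ablation: silence selected hidden units before the action head.
            out = out.clone()
            out[..., ablate_idx] = 0.0
        return self.head(out), h

policy = RecurrentPolicy()
obs = torch.randn(1, 50, 48, requires_grad=True)  # one 50-step rollout of observations

# Gradient-based attribution: sensitivity of a hip command at the final timestep
# to each observation channel, summed over the rollout.
actions, _ = policy(obs)
hip_action = actions[0, -1, 0]                # assume index 0 is a hip joint command
hip_action.backward()
channel_saliency = obs.grad.abs().sum(dim=(0, 1))

# Sampling-based ablation: measure how much the output shifts when a random
# subset of hidden units is silenced, relative to the unablated baseline.
with torch.no_grad():
    baseline, _ = policy(obs.detach())
    idx = torch.randperm(128)[:8]             # silence 8 randomly sampled hidden units
    ablated, _ = policy(obs.detach(), ablate_idx=idx)
    ablation_effect = (ablated - baseline).norm()
```

In practice, ablation subsets would be scored against a behavioral metric (e.g., recovery from a lateral push in simulation) rather than raw output change, but the mechanics of zeroing hidden units and backpropagating to the observations are as shown.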