Abstract
Selecting suitable grasps on three-dimensional objects is a challenging visuomotor computation, which involves combining information about an object (e.g., its shape, size, and mass) with information about the actor's body (e.g., the optimal grasp aperture and hand posture for comfortable manipulation). Here we used functional magnetic resonance imaging to investigate brain networks associated with these distinct aspects during grasp planning and execution. Human participants viewed and then executed preselected grasps on L-shaped objects made of wood and/or brass. By leveraging a computational approach that accurately predicts human grasp locations, we selected grasp points that disentangled the role of multiple grasp-relevant factors: grasp axis, grasp size, and object mass. Representational Similarity Analysis revealed that grasp axis was encoded in dorsal-stream regions during grasp planning. Grasp size was first encoded in ventral-stream areas during grasp planning, then in premotor regions during grasp execution. Object mass was encoded in ventral-stream and (pre)motor regions only during grasp execution. Premotor regions further encoded visual predictions of grasp comfort, whereas the ventral stream encoded grasp comfort during execution, suggesting its involvement in haptic evaluation. These shifts in neural representations thus capture the sensorimotor transformations that allow humans to grasp objects.

Significance Statement

Grasping requires integrating object properties with constraints on hand and arm postures. Using a computational approach that accurately predicts human grasp locations by combining such constraints, we selected grasps on objects that disentangled the relative contributions of object mass, grasp size, and grasp axis during grasp planning and execution in a neuroimaging study. Our findings reveal a greater role of dorsal-stream visuomotor areas during grasp planning and, surprisingly, increasing ventral-stream engagement during execution. We propose that during planning, visuomotor representations initially encode grasp axis and size. Perceptual representations of object material properties instead become more relevant as the hand approaches the object and motor programs are refined with estimates of the grip forces required to successfully lift the object.
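As a rough illustration of the Representational Similarity Analysis logic mentioned above, the sketch below correlates a neural representational dissimilarity matrix (RDM) with model RDMs built from the three grasp-relevant factors. It is a minimal, hypothetical example with random placeholder data and invented variable names; it is not the authors' analysis code.

```python
# Minimal RSA sketch: compare model RDMs (grasp axis, grasp size, object mass)
# against a neural RDM. All data and names here are hypothetical placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_conditions = 12   # hypothetical number of grasp conditions
n_voxels = 200      # hypothetical voxels in a region of interest

# Hypothetical condition-wise values for each grasp-relevant factor.
grasp_axis = rng.uniform(0, 180, n_conditions)    # orientation in degrees
grasp_size = rng.uniform(2, 10, n_conditions)     # grip aperture in cm
object_mass = rng.uniform(50, 500, n_conditions)  # mass in grams

# Model RDMs: pairwise absolute differences of each factor across conditions.
model_rdms = {
    "grasp_axis": pdist(grasp_axis[:, None], metric="cityblock"),
    "grasp_size": pdist(grasp_size[:, None], metric="cityblock"),
    "object_mass": pdist(object_mass[:, None], metric="cityblock"),
}

# Neural RDM: pairwise correlation distances between condition-wise activity
# patterns; random numbers stand in for per-condition beta estimates.
neural_patterns = rng.normal(size=(n_conditions, n_voxels))
neural_rdm = pdist(neural_patterns, metric="correlation")

# Rank-correlate each model RDM with the neural RDM, as is common in RSA.
for name, model_rdm in model_rdms.items():
    rho, p = spearmanr(model_rdm, neural_rdm)
    print(f"{name}: Spearman rho = {rho:.3f} (p = {p:.3f})")
```

In practice the neural patterns would come from condition-wise fMRI response estimates in each region of interest, and the model-to-neural comparison would be repeated across regions and task phases (planning vs. execution); those details are not specified by the abstract and are assumed here only for illustration.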