Abstract
In order to grasp an object successfully, we must select appropriate contact regions for our hands on the surface of the object. However, identifying such regions is challenging. Here, we describe a workflow to estimate contact regions from marker-based tracking data. Participants grasp real objects while we track the 3D position of both the objects and the hand, including the finger joints. We first determine joint Euler angles from a selection of tracked markers positioned on the back of the hand. Then, we use state-of-the-art hand mesh reconstruction algorithms to generate a mesh model of the participant’s hand in the current pose and 3D position. Using objects that were either 3D printed or 3D scanned, and are thus available as both real objects and mesh data, allows us to co-register the hand and object meshes. In turn, this allows us to estimate approximate contact regions by calculating intersections between the hand mesh and the co-registered 3D object mesh. The method may be used to estimate where and how humans grasp objects under a variety of conditions. It could therefore be of interest to researchers studying visual and haptic perception, motor control, human-computer interaction in virtual and augmented reality, and robotics.

Summary
When we grasp an object, multiple regions of the fingers and hand typically make contact with the object’s surface. Reconstructing such contact regions is challenging. Here, we present a method for approximately estimating contact regions by combining marker-based motion capture with existing deep learning-based hand mesh reconstruction.
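To illustrate the final intersection step, the sketch below shows one way an approximate contact region could be computed once the hand and object meshes are co-registered. This is a minimal sketch under stated assumptions, not the authors’ implementation: it uses the open-source trimesh Python library, treats the contact region as the set of hand-mesh vertices on or slightly inside the object surface, and the file names and the estimate_contact_vertices helper are hypothetical.

import numpy as np
import trimesh

def estimate_contact_vertices(hand_mesh, object_mesh, tolerance=0.002):
    # trimesh convention: signed distance is positive for points inside the
    # mesh and negative outside. The tolerance (here 2 mm, assuming meshes
    # in meters) also captures vertices hovering just above the surface.
    d = trimesh.proximity.signed_distance(object_mesh, hand_mesh.vertices)
    return np.where(d > -tolerance)[0]

# Hypothetical file names; both meshes are assumed to be expressed in the
# same world coordinate frame after co-registration.
hand = trimesh.load("hand_pose.obj")
obj = trimesh.load("object_scan.obj")
contact = estimate_contact_vertices(hand, obj)
print(f"{len(contact)} hand-mesh vertices in approximate contact")

Labeling the returned vertex indices on the hand mesh gives a per-vertex contact map that can then be visualized or aggregated across trials.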