Affiliation:
1. City University of Hong Kong, Hong Kong, China
2. Nanjing University, Nanjing, China
Abstract
From a 2D video of a person in action, human mesh recovery aims to infer the 3D human pose and shape frame by frame. Despite progress in video‐based human pose and shape estimation, it remains challenging to guarantee high accuracy and smoothness simultaneously. To tackle this problem, we propose Video2mesh, a temporal convolutional transformer (TConvTransformer) based temporal network that recovers accurate and smooth human meshes from 2D video. The temporal convolution block achieves sequence‐level smoothness by aggregating image features from adjacent frames. The subsequent multi‐attention transformer improves accuracy by attending over multiple subspaces to obtain a better middle‐frame feature representation. Meanwhile, we add a TConvTransformer discriminator that is trained jointly with our 3D human mesh temporal encoder. This discriminator further improves accuracy and smoothness by constraining the pose and shape to a more reliable space learned from the AMASS dataset. We conduct extensive experiments on three standard benchmark datasets and show that the proposed Video2mesh outperforms other state‐of‐the‐art methods in both accuracy and smoothness.
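The abstract describes a two-stage temporal encoder: a temporal convolution that smooths per-frame image features by mixing adjacent frames, followed by a multi-head attention transformer that refines the middle-frame representation. The sketch below is a minimal, hypothetical PyTorch illustration of that structure, not the authors' implementation; all layer sizes, the SMPL-style 85-dimensional output, and module names are assumptions.

```python
# Minimal sketch (not the authors' code) of a TConvTransformer-style temporal
# encoder: 1D temporal convolutions aggregate features from adjacent frames,
# then multi-head self-attention refines the middle-frame representation.
# All dimensions and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class TConvTransformerEncoder(nn.Module):
    def __init__(self, feat_dim=2048, d_model=512, n_heads=8, n_layers=3):
        super().__init__()
        # Temporal convolution: mixes each frame's feature with its neighbours
        # to promote sequence-level smoothness.
        self.temporal_conv = nn.Sequential(
            nn.Conv1d(feat_dim, d_model, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
        )
        # Multi-head self-attention transformer over the smoothed sequence.
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=1024, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Assumed regressor head: SMPL pose (72) + shape (10) + camera (3) = 85
        # parameters for the middle frame.
        self.head = nn.Linear(d_model, 85)

    def forward(self, frame_feats):
        # frame_feats: (batch, seq_len, feat_dim) per-frame CNN features
        x = self.temporal_conv(frame_feats.transpose(1, 2)).transpose(1, 2)
        x = self.transformer(x)                    # (batch, seq_len, d_model)
        mid = x[:, x.size(1) // 2]                 # middle-frame representation
        return self.head(mid)                      # predicted mesh parameters

if __name__ == "__main__":
    feats = torch.randn(2, 16, 2048)               # dummy features for 16 frames
    print(TConvTransformerEncoder()(feats).shape)  # torch.Size([2, 85])
```

In this reading, the discriminator mentioned in the abstract would consume the predicted pose and shape sequences and be trained adversarially against samples drawn from AMASS; its exact architecture is not specified here.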
Funder
City University of Hong Kong
Publisher
Institution of Engineering and Technology (IET)
Subject
Computer Vision and Pattern Recognition, Software