Affiliation:
1. Department of Digital Media Technology, Hangzhou Dianzi University, Hangzhou, China
Abstract
Graph Convolutional Networks (GCNs) have been widely used in skeleton-based action recognition. Although significant performance gains have been achieved, effectively modelling the complex dynamics of skeleton sequences remains challenging. A novel position-aware spatio-temporal GCN for skeleton-based action recognition is proposed, in which positional encoding is investigated to enhance the capacity of typical baselines to comprehend the dynamic characteristics of action sequences. Specifically, the authors' method systematically investigates temporal position encoding and spatial position embedding, explicitly capturing the ordering information of the sequence and the identity information of the nodes in the graph. Additionally, to alleviate the redundancy and over-smoothing problems of typical GCNs, the method further investigates a subgraph mask, which mines the prominent subgraph patterns over the underlying graph, making the model robust against the influence of irrelevant joints. Extensive experiments on three large-scale datasets demonstrate that the model achieves competitive results compared with previous state-of-the-art methods.
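The abstract does not give the exact formulation of the proposed temporal position encoding, but a common way to inject sequence-ordering information into per-frame skeleton features is the sinusoidal encoding of Vaswani et al., added along the time axis and broadcast over the joints. The sketch below illustrates that general idea only; the shapes, the `10000` base, and the element-wise addition are assumptions, not the paper's actual method.

```python
import numpy as np

def temporal_positional_encoding(num_frames: int, dim: int) -> np.ndarray:
    """Sinusoidal encoding over the time axis (assumes dim is even)."""
    pos = np.arange(num_frames)[:, None]        # (T, 1) frame indices
    i = np.arange(dim // 2)[None, :]            # (1, dim/2) channel-pair indices
    angles = pos / (10000.0 ** (2 * i / dim))   # (T, dim/2) wavelengths per pair
    pe = np.zeros((num_frames, dim))
    pe[:, 0::2] = np.sin(angles)                # even channels: sine
    pe[:, 1::2] = np.cos(angles)                # odd channels: cosine
    return pe

# Hypothetical skeleton feature tensor: T frames, V joints, C channels.
T, V, C = 64, 25, 16
x = np.random.randn(T, V, C)
# Broadcast the per-frame encoding over all joints so each frame's
# features carry an explicit marker of its position in the sequence.
x = x + temporal_positional_encoding(T, C)[:, None, :]
```

A learned spatial position embedding for joint identity could analogously be a trainable `(V, C)` table added over the joint axis; the paper's specific design may differ.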
Publisher
Institution of Engineering and Technology (IET)
Subject
Computer Vision and Pattern Recognition, Software