Affiliations:
1. Department of Sport Science, Human Performance Research Centre, University of Konstanz, Konstanz, Germany
2. Subsequent GmbH, Konstanz, Germany
3. Department of Computer and Information Science, University of Konstanz, Konstanz, Germany
Abstract
Recently, AI‐driven skeleton reconstruction tools that use multistage computer vision pipelines have been designed to estimate 3D kinematics from 2D video sequences. In the present study, we validated a novel markerless, smartphone video‐based artificial intelligence (AI) motion capture system for hip, knee, and ankle angles during countermovement jumps (CMJs). Eleven participants performed six CMJs. From 2D videos recorded with a smartphone (Apple iPhone X, 4K, 60 fps), the system estimated 24 keypoints that together formed a full skeleton, including the joints and their connections. Body parts and skeletal keypoints were localized by calculating confidence maps with a multilevel convolutional neural network that integrated both spatial and temporal features. We calculated hip, knee, and ankle angles in the sagittal plane and compared them with the angles measured by a VICON system. We computed the correlation between both methods' angular progressions, the mean squared error (MSE), the mean absolute error (MAE), and the maximum and minimum angular errors, and ran a statistical parametric mapping (SPM) analysis. Pearson correlation coefficients (r) for hip, knee, and ankle angular progressions in the sagittal plane over the entire movement were 0.96, 0.99, and 0.87, respectively. The SPM group analysis revealed significant differences only for the ankle angular progression. MSE was below 5.7°, MAE was below 4.5°, and the error for maximum amplitudes was below 3.2°. The smartphone AI motion capture system with the trained multistage computer vision pipeline detected hip and, especially, knee angles in the sagittal plane during CMJs with high precision from a frontal view alone.
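The agreement metrics named in the abstract (Pearson r, MSE, MAE, and the errors at the maximum and minimum amplitudes) can be sketched as follows. This is a minimal illustration, assuming both systems' angular progressions have already been resampled to a common time base; the function name and sample values are illustrative, not taken from the study.

```python
import numpy as np

def compare_angle_traces(ai_angles, vicon_angles):
    """Compare two joint-angle time series (in degrees), e.g. the AI
    system against VICON, using the metrics named in the abstract."""
    ai = np.asarray(ai_angles, dtype=float)
    ref = np.asarray(vicon_angles, dtype=float)
    err = ai - ref
    r = np.corrcoef(ai, ref)[0, 1]           # Pearson correlation
    mse = np.mean(err ** 2)                  # mean squared error
    mae = np.mean(np.abs(err))               # mean absolute error
    max_amp_err = abs(ai.max() - ref.max())  # error at maximum amplitude
    min_amp_err = abs(ai.min() - ref.min())  # error at minimum amplitude
    return r, mse, mae, max_amp_err, min_amp_err

# Illustrative use with synthetic traces (degrees):
r, mse, mae, max_e, min_e = compare_angle_traces(
    [0, 10, 20, 10, 0], [1, 11, 19, 9, 1]
)
```

Note that SPM, in contrast, tests the whole angular trajectory point by point rather than reducing it to a single summary number, which is why it can localize where in the movement two methods diverge.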
Funder
Universität Konstanz
Bundesministerium für Bildung und Forschung