Exploring the Use of Contrastive Language-Image Pre-Training for Human Posture Classification: Insights from Yoga Pose Analysis
Published: 2023-12-25
Issue: 1
Volume: 12
Page: 76
ISSN: 2227-7390
Container-title: Mathematics
Language: en
Author:
Andrzej D. Dobrzycki [1], Ana M. Bernardos [1], Luca Bergesio [1], Andrzej Pomirski [2], Daniel Sáez-Trigueros [3]
Affiliation:
1. Information Processing and Telecommunications Center, ETSI Telecomunicación, Universidad Politécnica de Madrid, Av. Complutense, 30, 28040 Madrid, Spain
2. Alexa AI, Aleja Grunwaldzka 472, 80-309 Gdańsk, Poland
3. Alexa AI, C. de Ramírez de Prado, 5, 28045 Madrid, Spain
Abstract
Accurate human posture classification in images and videos is crucial for automated applications across various fields, including work safety, physical rehabilitation, sports training, and daily assisted living. Recently, multimodal learning methods such as Contrastive Language-Image Pretraining (CLIP) have advanced significantly in jointly understanding images and text. This study assesses the effectiveness of CLIP in classifying human postures, focusing on its application to yoga. Despite the initial limitations of the zero-shot approach, applying transfer learning on 15,301 images (real and synthetic) spanning 82 classes yields promising results. The article describes the full fine-tuning procedure, including the choice of image description syntax, model selection, and hyperparameter adjustment. The fine-tuned CLIP model, tested on 3826 images, achieves an accuracy of over 85%, surpassing the current state of the art of previous works on the same dataset by approximately 6%, while requiring 3.5 times less training time than fine-tuning a YOLOv8-based model. For more application-oriented scenarios, on smaller datasets of six postures each, containing 1301 and 401 training images, the fine-tuned models attain accuracies of 98.8% and 99.1%, respectively. Furthermore, our experiments indicate that training with as few as 20 images per pose can yield around 90% accuracy on a six-class dataset. This study demonstrates that this multimodal technique can be effectively used for yoga pose classification and, potentially, for human posture classification in general. Additionally, the CLIP inference time (around 7 ms) indicates that the model can be integrated into automated systems for posture evaluation, e.g., a real-time personal yoga assistant for performance assessment.
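As a rough illustration of the approach the abstract describes, the following minimal sketch classifies a single image zero-shot with an off-the-shelf CLIP checkpoint via the Hugging Face transformers library. The pose names, the caption template, the checkpoint (openai/clip-vit-base-patch32), and the image path are illustrative assumptions, not details taken from the paper.

```python
# Minimal zero-shot CLIP posture classification sketch; not the paper's
# exact pipeline. Labels, caption template, and paths are assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

# Hypothetical pose labels; the study's full dataset has 82 classes.
poses = ["downward dog", "tree", "warrior II", "triangle", "cobra", "chair"]
# Hypothetical caption syntax; the paper treats the description syntax
# as a design choice made during fine-tuning.
captions = [f"a photo of a person doing the {p} yoga pose" for p in poses]

image = Image.open("pose.jpg")  # placeholder path
inputs = processor(text=captions, images=image,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)
    # logits_per_image: (num_images, num_captions) similarity scores
    probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)

best = int(probs.argmax())
print(f"predicted pose: {poses[best]} (p={probs[best]:.2f})")
```

The sketch covers only the zero-shot inference side; the study goes further and fine-tunes CLIP on image-caption pairs, which is what lifts accuracy past the zero-shot baseline.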
Funder
European Union; Ministerio de Ciencia e Innovación; Universidad Politécnica de Madrid
Subject
General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)