Affiliation:
1. Rajiv Gandhi College of Engineering Research and Technology
Abstract
Inactivity is one of the main causes of obesity, which affects many people worldwide. Studies show that fitness is an important goal for a healthy lifestyle and is used as a measure of health-related quality of life. A fitness trainer can motivate users, teach them to exercise daily, and help them stay fit and healthy. However, hiring a fitness trainer can be expensive and is not always practical in every setting. Exercise is very beneficial for personal health, but it can also be ineffective, and even dangerous, when performed incorrectly. Users who work out alone without supervision make many mistakes, such as using the wrong form, which can lead to injuries such as a pulled hamstring or a fall. In our project, we introduce AI Trainer, an application that detects the user's exercise pose and provides personalized, detailed recommendations on how to improve their form. AI Trainer uses the state-of-the-art "BlazePose" pose estimation module from "MediaPipe" to detect the user's pose, then evaluates the exercise pose to provide useful feedback. We record a dataset of over 1000 body keypoint coordinates in correct and incorrect form and, based on personal-training guidelines, build a machine learning algorithm for evaluation. AI Trainer works on six common exercises and supports any Windows or Linux computer with a GPU and a webcam.
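The publication itself does not include code, but the pipeline sketched in the abstract (webcam frame, BlazePose keypoint extraction, form evaluation) can be illustrated with a minimal Python sketch. It assumes the legacy `mediapipe` solutions API and OpenCV for webcam capture; the form-evaluation step is only a hypothetical placeholder, not the authors' actual classifier.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def evaluate_form(coords):
    """Hypothetical placeholder for the exercise-form classifier
    described in the abstract (trained on keypoint coordinates)."""
    return "form feedback not implemented in this sketch"

cap = cv2.VideoCapture(0)  # default webcam
with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # BlazePose expects RGB input; OpenCV captures BGR frames.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 landmarks with normalized x, y (and relative z) coordinates.
            coords = [(lm.x, lm.y, lm.z)
                      for lm in results.pose_landmarks.landmark]
            evaluate_form(coords)
        cv2.imshow("AI Trainer", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```

In practice, per-frame keypoints like these would be aggregated over an exercise repetition before being scored against correct-form examples.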