Action Quality Assessment Model Using Specialists’ Gaze Location and Kinematics Data—Focusing on Evaluating Figure Skating Jumps
Authors:
Hirosawa Seiji 1,2, Kato Takaaki 3, Yamashita Takayoshi 4, Aoki Yoshimitsu 1
Affiliations:
1. Graduate School of Science and Technology, Keio University, Yokohama 223-8522, Japan
2. Faculty of Sport and Health Sciences, Toin University of Yokohama, Yokohama 225-8503, Japan
3. Faculty of Environment and Information Studies, Keio University, Fujisawa 252-0882, Japan
4. College of Engineering, Chubu University, Kasugai 487-8501, Japan
Abstract
Action quality assessment (AQA) tasks in computer vision evaluate the quality of actions in videos and can be applied to sports performance evaluation. A typical AQA example is predicting the final score from a video of an entire figure skating program. However, no previous study has predicted the scores of individual jumps, which are of great interest to competitors because of their high weight in the final competition score. Although figure skating videos contain much information that is irrelevant to evaluation, human specialists can narrow their focus and filter out that information when judging jumps. In this study, we clarified the eye movements of figure skating judges and skaters while they evaluated jumps and proposed a jump-performance prediction model that uses the specialists' gaze locations to reduce the input information. In addition to the videos, kinematic features obtained from a tracking system were fed into the model to improve accuracy. The results showed that skaters focused more on the face, whereas judges focused on the lower extremities. When these gaze locations were incorporated into the model, accuracy was highest when both specialists' gaze locations were used. The model outperformed both human predictions and the baseline model (RMSE: 0.775), suggesting that combining human specialist knowledge with machine capabilities can yield higher accuracy.
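The abstract outlines two ideas: masking the video with specialists' gaze locations to suppress irrelevant regions, and fusing kinematic features from the tracking system with video features before regressing a jump score evaluated by RMSE. The following is a minimal sketch of such a pipeline, not the authors' implementation; the backbone, tensor shapes, kinematic dimensionality, and all hyperparameters are assumptions.

```python
# Hypothetical sketch of gaze-masked video + kinematic-feature score regression.
# All module choices, shapes, and names below are assumptions for illustration.
import torch
import torch.nn as nn

class GazeMaskedScoreRegressor(nn.Module):
    def __init__(self, kinematic_dim: int = 16):
        super().__init__()
        # Tiny 3D CNN as a stand-in for a video backbone such as C3D or I3D.
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global spatiotemporal pooling
        )
        # Fuse pooled video features with per-jump kinematic features.
        self.head = nn.Sequential(
            nn.Linear(16 + kinematic_dim, 32),
            nn.ReLU(),
            nn.Linear(32, 1),  # predicted jump score
        )

    def forward(self, video, gaze_map, kinematics):
        # video:      (B, 3, T, H, W) RGB clip of the jump
        # gaze_map:   (B, 1, T, H, W) specialist gaze heatmap in [0, 1]
        # kinematics: (B, kinematic_dim) features from the tracking system
        masked = video * gaze_map  # down-weight regions specialists ignore
        feat = self.backbone(masked).flatten(1)
        return self.head(torch.cat([feat, kinematics], dim=1)).squeeze(1)

def rmse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Root-mean-square error, the metric quoted in the abstract."""
    return torch.sqrt(torch.mean((pred - target) ** 2))

# Smoke test with random tensors.
model = GazeMaskedScoreRegressor()
video = torch.rand(2, 3, 8, 64, 64)
gaze = torch.rand(2, 1, 8, 64, 64)
kin = torch.rand(2, 16)
print(rmse(model(video, gaze, kin), torch.rand(2)))
```

In this sketch, the gaze heatmap simply attenuates pixels away from specialists' fixations; the RMSE of 0.775 reported in the abstract refers to the authors' own model and data, and this toy model would need training on scored jump clips to produce meaningful predictions.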
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
1 article.