Authors:
Hung Nguyen, Loi Thang, Binh Nguyen, Nga Nguyen, Huong Truong, Luu Duc
Abstract
Jumping motion recognition from video is a significant contribution because it impacts many intelligent applications and is likely to be widely adopted in everyday life. Such a method can be used to train future dancers with innovative technology: challenging poses can be repeated and refined over time, reducing the strain on an instructor who would otherwise have to demonstrate them many times, and dancers' movements can be reconstructed from features extracted from their images. Our model recognizes dancers' moves, checks and corrects their poses, and extracts cognitive features for efficient evaluation and classification; deep learning is currently one of the most effective approaches for this task on short-form video. In addition, assessing the quality of a performance video and the accuracy of each dance step is a complex problem, since the judges' eyes cannot focus completely on the dance on stage. Dance analysis from video is therefore of great interest to researchers today, as the technology develops and becomes useful enough to replace human judging. Motivated by the actual conditions and needs in Vietnam, this paper proposes a method to replace manual evaluation: we evaluate dance through short videos, analyzing short-form video with deep learning techniques to assess performances and collect data from which accurate conclusions can be drawn. Experiments show that our assessment is relatively accurate, achieving more than 92.38% accuracy and a 91.18% F1-score. This demonstrates that our method performs well and accurately in dance evaluation and analysis.
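The reported accuracy and F1-score follow their standard definitions over predicted versus ground-truth class labels. As a minimal illustrative sketch only (not the authors' code; the label values and the use of scikit-learn are assumptions), the two metrics can be computed as follows:

```python
# Illustrative only: standard accuracy and macro-averaged F1-score computed
# from hypothetical ground-truth vs. predicted dance-step class labels.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 2, 1]  # hypothetical ground-truth dance-step classes
y_pred = [0, 1, 2, 1, 1, 0, 2, 2]  # hypothetical model predictions

accuracy = accuracy_score(y_true, y_pred)        # fraction of correct predictions
f1 = f1_score(y_true, y_pred, average="macro")   # F1 averaged over classes

print(f"Accuracy: {accuracy:.4f}, F1-score: {f1:.4f}")
```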
Cited by
1 article.