Advanced Dance Choreography System Using Bidirectional LSTMs
Author:
Yoo Hanha 1, Sung Yunsick 2
Affiliation:
1. Department of Computer Engineering, Graduate School, Dongguk University-Seoul, Seoul 04620, Republic of Korea
2. Division of AI Software Convergence, Dongguk University-Seoul, Seoul 04620, Republic of Korea
Abstract
The recent popularity of K-POP content has driven the growth of Korea's cultural and artistic industries. In particular, with the development of diverse K-POP content, including dance, and the rise of online K-POP consumption during the contactless era of the Coronavirus Disease 2019 (COVID-19) pandemic, interest in Korean dance and song has increased. Research on dance Artificial Intelligence (AI) is being actively conducted, including AI dancers in virtual environments, deepfake AI that transforms dancers into other people, and creative choreography AI that creates new dances by combining dance and music. Among these, creative choreography AI, which generates new choreography, has recently attracted particular attention. However, creative choreography AI requires motion data from many dancers to prepare a dance cover; collecting such input datasets is expensive, converting them to the target representation used by the model adds further cost, and motion differences across dance genres must also be handled during conversion. To address these problems, creative choreography systems need a new direction that reduces costs by generating choreography without expensive motion capture devices and by minimizing the number of dancers required across genres. This paper proposes a system in a virtual environment that automatically generates continuous K-POP creative choreography by deriving postures and gestures with a bidirectional long short-term memory (Bi-LSTM) network. K-POP dance videos and other dance videos are collected in advance as input: postures are defined from the collected videos, and, for users who request a choreography, a new choreography is generated with the Bi-LSTM and applied to a 3D dance character. During learning, the next motion is evaluated and selected probabilistically, considering creativity and popularity at the same time. With the proposed method, the effort of dataset collection can be reduced, and an AI research environment that generates creative choreography from existing online dance videos can be provided.
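As a rough illustration of the pipeline described above, the sketch below assumes poses have already been estimated from dance videos and shows how a Bi-LSTM could score candidate next motions, with the next motion sampled probabilistically from a blend of the model's score ("creativity") and a popularity prior. All dimensions, module names, and the blending scheme are assumptions for illustration, not the authors' published implementation.

```python
# Minimal sketch (assumed design, not the paper's code): a Bi-LSTM that maps a
# sequence of body poses to a distribution over candidate next motions,
# from which the next motion is sampled probabilistically.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NextMotionBiLSTM(nn.Module):
    def __init__(self, pose_dim=51, hidden_dim=128, num_motions=200):
        super().__init__()
        # pose_dim: e.g., 17 joints x 3 coordinates per frame (assumed)
        self.bilstm = nn.LSTM(pose_dim, hidden_dim, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_motions)

    def forward(self, pose_seq):
        # pose_seq: (batch, frames, pose_dim)
        out, _ = self.bilstm(pose_seq)
        # Score candidate motions from the final forward/backward states.
        return self.head(out[:, -1, :])

def select_next_motion(logits, popularity, alpha=0.5):
    """Blend model scores with a popularity prior, then sample."""
    probs = alpha * F.softmax(logits, dim=-1) + (1 - alpha) * popularity
    probs = probs / probs.sum(dim=-1, keepdim=True)
    return torch.multinomial(probs, num_samples=1)

# Usage with placeholder data
model = NextMotionBiLSTM()
poses = torch.randn(1, 30, 51)           # 30 frames of estimated poses
popularity = torch.ones(1, 200) / 200    # uniform prior as a placeholder
next_motion = select_next_motion(model(poses), popularity)
```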
Subject
Information Systems and Management, Computer Networks and Communications, Modeling and Simulation, Control and Systems Engineering, Software