Affiliation:
1. Interactive Intelligent Systems Lab., The University of Tokyo, Tokyo, Japan
Abstract
The beauty of synchronized dancing lies in the synchronization of body movements among multiple dancers. While dancers use camera recordings in their practice, standard video interfaces do not efficiently support the task of identifying segments where they are not well synchronized, failing to close a tight loop in the iterative practice process (i.e., capturing a practice, reviewing the video, and practicing again). We present SyncUp, a system that provides multiple interactive visualizations to support the practice of synchronized dancing and liberates users from manually inspecting recorded practice videos. By analyzing videos uploaded by users, SyncUp quantifies two aspects of synchronization in dancing: pose similarity among multiple dancers and temporal alignment of their movements. The system then highlights which body parts and which portions of the dance routine require further practice to achieve better synchronization. The results of our system evaluations show that our pose similarity estimates and temporal alignment predictions were well correlated with human ratings. Participants in our qualitative user evaluation noted the benefits and potential uses of SyncUp, confirming that it would enable quick iterative practice.
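The abstract does not specify how the two synchronization measures are computed. As a rough illustration only, the sketch below shows one plausible way to derive them from per-frame 2D keypoints (e.g., the output of a pose estimator such as OpenPose): cosine similarity of position- and scale-normalized poses for pose similarity, and dynamic time warping over pose sequences for temporal alignment. The function names, the cosine-similarity choice, and the DTW formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist


def normalize_pose(keypoints):
    """Center a (J, 2) array of joint coordinates and scale it to unit norm,
    so poses can be compared independently of each dancer's position and size."""
    centered = keypoints - keypoints.mean(axis=0)
    return centered / (np.linalg.norm(centered) + 1e-8)


def pose_similarity(pose_a, pose_b):
    """Cosine similarity between two normalized, flattened poses (1.0 = identical shape)."""
    a = normalize_pose(pose_a).ravel()
    b = normalize_pose(pose_b).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def temporal_alignment_cost(seq_a, seq_b):
    """Dynamic-time-warping cost between two pose sequences of shape (T, J, 2);
    a larger cost suggests the two dancers drift apart in timing."""
    flat_a = np.stack([normalize_pose(p).ravel() for p in seq_a])
    flat_b = np.stack([normalize_pose(p).ravel() for p in seq_b])
    dist = cdist(flat_a, flat_b)  # pairwise frame-to-frame distances
    n, m = dist.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1]
            )
    return float(acc[n, m])
```

Per-joint similarity scores computed the same way could then be aggregated over frames to highlight which body parts and which portions of a routine need further practice, as described in the abstract.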
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Hardware and Architecture, Human-Computer Interaction
Cited by
11 articles.
1. Expanding the Design Space of Vision-based Interactive Systems for Group Dance Practice; Designing Interactive Systems Conference; 2024-07
2. Dance with Rhythmic Frames: Improving Dancing Skills by Frame-by-Frame Presentation; Proceedings of the 9th International Conference on Movement and Computing; 2024-05-30
3. Enhancing Seamless Body Movement Learning with Frame-by-Frame VR Presentation; 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW); 2024-03-16
4. Spatial-Temporal Masked Autoencoder for Multi-Device Wearable Human Activity Recognition; Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies; 2023-12-19
5. SYNCUP: Vision-Based Practice Support for Synchronized Dancing; GetMobile: Mobile Computing and Communications; 2023-08-03