Authors:
Federico Visi, Rodrigo Schramm, Kerstin Frödin, Åsa Unander-Scharin, Stefan Östersjö
Abstract
In this chapter, we describe a series of studies related to our research on using gestural sonic objects in music analysis. These include a method for annotating the qualities of gestural sonic objects on multimodal recordings; a ranking, obtained with the Random Forests algorithm, of which features in a multimodal dataset are good predictors of basic qualities of gestural sonic objects; and a supervised learning method for automated spotting designed to assist human annotators. The subject of our analyses is a performance of Fragmente2, a choreomusical composition based on the Japanese composer Makoto Shinohara’s solo piece for tenor recorder Fragmente (1968). To obtain the dataset, we carried out a multimodal recording of a full performance of the piece and obtained synchronised audio, video, motion, and electromyogram (EMG) data describing the body movements of the performers. We then added annotations on gestural sonic objects through dedicated qualitative analysis sessions. The task of annotating gestural sonic objects on the recordings of this performance has led to a meticulous examination of related theoretical concepts in order to establish a method applicable beyond this case study. This process of gestural sonic object annotation, like other qualitative approaches involving manual labelling of data, has proven to be very time-consuming. This motivated the exploration of data-driven, automated approaches to assist expert annotators.
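As an illustration of the kind of feature-ranking step mentioned in the abstract, the sketch below fits a Random Forest classifier on a table of multimodal descriptors and prints impurity-based feature importances. It is a minimal sketch only: the file name, column layout, and label column are hypothetical placeholders and do not reflect the authors' actual dataset or pipeline.

```python
# Illustrative sketch: ranking multimodal features with a Random Forest.
# "gso_features.csv" and the "label" column are assumed, hypothetical names.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# One row per annotated gestural sonic object; columns hold audio, motion,
# and EMG descriptors plus a qualitative label assigned by the annotators.
data = pd.read_csv("gso_features.csv")
feature_cols = [c for c in data.columns if c != "label"]
X, y = data[feature_cols], data["label"]

# Fit the forest and read out its impurity-based feature importances.
forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X, y)

# Sort features from most to least predictive of the annotated quality.
ranking = sorted(zip(feature_cols, forest.feature_importances_),
                 key=lambda pair: pair[1], reverse=True)
for name, importance in ranking:
    print(f"{name}: {importance:.3f}")
```

Permutation importance (e.g. sklearn.inspection.permutation_importance) is a common alternative to impurity-based scores when features are correlated, as is often the case with synchronised multimodal streams.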
Publisher
Springer Nature Switzerland