Abstract
In recent years, the adoption of deep learning techniques has enabled major breakthroughs in the field of automatic music generation, sparking a renewed interest in generative music. A great deal of work has focused on conditioning the generation process so that music can be created according to human-understandable parameters. In this paper, we propose a technique for generating chord progressions conditioned on harmonic complexity, as grounded in Western music theory. More specifically, we consider a pre-existing dataset annotated with complexity values and train two variations of the Variational Autoencoder (VAE), namely a Conditional VAE (CVAE) and a Regressor-based VAE (RVAE), in order to condition the latent space on complexity. Through a listening test, we analyze the effectiveness of the proposed techniques.
Publisher
Springer Science and Business Media LLC
Subject
Electrical and Electronic Engineering,Acoustics and Ultrasonics