Generating chord progression from melody with flexible harmonic rhythm and controllable harmonic density
Published: 2024-01-15
Volume: 2024, Issue: 1
ISSN: 1687-4722
Container-title: EURASIP Journal on Audio, Speech, and Music Processing
Short-container-title: J AUDIO SPEECH MUSIC PROC.
Language: en
Authors: Wu Shangda, Yang Yue, Wang Zhaowen, Li Xiaobing, Sun Maosong
Abstract
Melody harmonization, the task of generating a chord progression that complements a user-provided melody, remains a significant challenge. A chord progression must not only be in harmony with the melody but also interlock with its rhythmic pattern. While previous neural network-based systems have succeeded in producing chord progressions for given melodies, they have not adequately addressed controllable melody harmonization, nor have they generated harmonic rhythms with flexible rates or patterns of chord changes. This paper presents AutoHarmonizer, a novel system for harmonic density-controllable melody harmonization with flexible harmonic rhythm. AutoHarmonizer is equipped with an extensive vocabulary of 1462 chord types and can generate chord progressions of varying harmonic density for a given melody. Experimental results indicate that AutoHarmonizer-generated chord progressions exhibit a diverse range of harmonic rhythms and that the system's controllable harmonic density is effective.
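The abstract's central notion of harmonic density can be made concrete as the rate of chord changes over a beat-aligned chord sequence. The sketch below is an illustrative definition only (the paper's exact metric is not given in the abstract): the `harmonic_density` function and its inputs are assumptions for demonstration.

```python
def harmonic_density(chords_per_beat):
    """Estimate harmonic density as chord onsets per beat.

    `chords_per_beat` lists the chord symbol active on each beat;
    an onset is counted at the start and whenever the symbol differs
    from the previous beat.
    """
    if not chords_per_beat:
        return 0.0
    changes = sum(
        1 for prev, cur in zip(chords_per_beat, chords_per_beat[1:]) if cur != prev
    )
    # +1 counts the initial chord onset; normalize by total beats
    return (changes + 1) / len(chords_per_beat)

# Sparse harmonic rhythm: one chord per 4/4 measure -> low density
sparse = ["C", "C", "C", "C", "F", "F", "F", "F"]
# Denser harmonic rhythm: a chord change every two beats -> higher density
dense = ["C", "C", "Am", "Am", "F", "F", "G", "G"]
print(harmonic_density(sparse))  # 0.25
print(harmonic_density(dense))   # 0.5
```

Under this reading, a density-controllable harmonizer steers generation toward a target value of this ratio, trading sustained chords against frequent changes.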
Funder: National Social Science Fund of China
Publisher: Springer Science and Business Media LLC