Affiliation:
1. University of Science and Technology of China, Hefei, Anhui, China
2. Huawei Cloud & AI, Hangzhou, Zhejiang, China
3. Microsoft, Suzhou, China
Abstract
Music plays an important role in our daily life. With the development of deep learning and modern generation techniques, researchers have conducted a great deal of work on automatic music generation. However, due to the distinct requirements of both melody and arrangement, most of these methods have limitations when applied to multi-track music generation. Some critical factors related to the quality of music, such as chord progression, rhythm pattern, and musical style, are not well addressed. To tackle these problems and ensure the harmony of multi-track music, in this article we propose an end-to-end melody and arrangement generation framework that generates a melody track together with several accompaniment tracks played by different instruments. Specifically, we first develop a novel Chord based Rhythm and Melody Cross-Generation Model to generate melody based on a chord progression. Then, we propose a Multi-Instrument Co-Arrangement Model based on multi-task learning for multi-track music arrangement. Furthermore, to control the musical style of the arrangement, we design a Multi-Style Multi-Instrument Co-Arrangement Model that learns musical style with adversarial training. As a result, we can not only maintain the harmony of the generated music but also control its musical style for better utilization. Extensive experiments on a real-world dataset demonstrate the superiority and effectiveness of our proposed models.
Funder
National Key Research and Development Program of China
National Natural Science Foundation of China
Youth Innovation Promotion Association of the Chinese Academy of Sciences
Publisher
Association for Computing Machinery (ACM)
Cited by
23 articles.