Abstract
In recent years, deep learning has made remarkable progress in many fields, and in particular many excellent pre-trained models have emerged in Natural Language Processing (NLP). However, these pre-trained models cannot be used directly in music generation tasks because music symbols and text are represented differently. Compared with the traditional representation of a melody, which encodes only the pitch relationships between single notes, the text-like representation proposed in this paper carries more melodic information, including pitch, rhythm and pauses; it expresses the melody in a form similar to text and makes it possible to apply existing pre-trained models to symbolic melody generation. Based on the generative pre-training-2 (GPT-2) text generation model and transfer learning, we propose MT-GPT-2 (music textual GPT-2), a model for music melody generation. We then propose a symbolic music evaluation method (MEM) that combines mathematical statistics, music theory knowledge and signal processing, and is more objective than manual evaluation. Based on this evaluation method and music theory, the proposed music generation model is compared with other models, such as the long short-term memory (LSTM) model, LeakGAN and Music SketchNet. The results show that the melodies generated by the proposed model are closer to real music.
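The abstract describes a text-like melody representation that keeps pitch, rhythm and pauses in a single token stream so that a text model such as GPT-2 can be fine-tuned on it. The sketch below is a minimal illustration of that idea only; the token vocabulary (P&lt;pitch&gt;_D&lt;duration&gt;, REST_D&lt;duration&gt;) is a hypothetical assumption, not the paper's actual encoding scheme.

```python
# Minimal sketch (assumed token scheme, not the paper's exact one) of a
# text-like melody encoding that preserves pitch, rhythm and rests, so the
# resulting string can be fed to a text model such as GPT-2.

from typing import List, Optional, Tuple

# Each event: (MIDI pitch, or None for a rest, duration in sixteenth notes).
Event = Tuple[Optional[int], int]

def melody_to_text(events: List[Event]) -> str:
    """Serialize a monophonic melody into a space-separated token string."""
    tokens = []
    for pitch, duration in events:
        if pitch is None:
            tokens.append(f"REST_D{duration}")      # pause token with its length
        else:
            tokens.append(f"P{pitch}_D{duration}")  # pitch and rhythm in one token
    return " ".join(tokens)

def text_to_melody(text: str) -> List[Event]:
    """Invert the encoding back to (pitch, duration) events."""
    events: List[Event] = []
    for token in text.split():
        head, dur = token.split("_D")
        pitch = None if head == "REST" else int(head[1:])
        events.append((pitch, int(dur)))
    return events

if __name__ == "__main__":
    # Example phrase: C4, D4, E4, a short rest, then G4 (durations in sixteenths).
    melody = [(60, 4), (62, 4), (64, 4), (None, 2), (67, 6)]
    encoded = melody_to_text(melody)
    print(encoded)  # P60_D4 P62_D4 P64_D4 REST_D2 P67_D6
    assert text_to_melody(encoded) == melody
```

Under this kind of serialization, a melody corpus becomes plain text, which is what allows transfer learning from an existing pre-trained language model rather than training a music-specific architecture from scratch.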
Funder
Xihua University Graduate Innovation Fund
Intelligent Terminal Key Laboratory of Sichuan Province
National Natural Science Foundation of China
Publisher
Public Library of Science (PLoS)
Cited by
5 articles.