Abstract
Music has long been regarded as a distinctly human endeavor: when praising a piece of music, we emphasize the composer’s creativity and the emotions the music evokes. Yet because music also relies heavily on patterns and repetition, in the form of recurring melodic themes and chord progressions, artificial intelligence has become increasingly able to generate music in a human-like fashion. This research investigated the capability of Jukebox, an open-source, publicly available neural network, to accurately replicate two genres of music often found in rhythm games: artcore and orchestral. A Google Colab notebook provided the computational resources needed to sample and extend a total of 16 piano arrangements across both genres. A survey containing selected samples was distributed to a local youth orchestra to gauge listeners’ perceptions of the musicality of AI- and human-generated music. Although respondents preferred the human-generated music, Jukebox’s moderately high rating showed that it was somewhat capable of mimicking the styles of both genres. Despite the limitations of Jukebox operating only on raw audio and the relatively small sample size, the results show promise for AI as a collaborative tool in music production.