Affiliations:
1. Department of Mathematics and Computer Science, University of Cagliari, Cagliari, Sardegna, Italy
2. Department of Music, Fordham University, New York, United States of America
Abstract
Music is a highly subjective art form whose commodification by the recording industry in the 20th century has produced an increasingly subdivided set of genre labels that attempt to organize musical styles into definite categories. Music psychology studies the processes through which music is perceived, created, responded to, and incorporated into everyday life, and modern artificial intelligence can support this line of research. Music classification and generation are emerging fields that have recently attracted considerable attention, especially in light of advances in deep learning. Self-attention networks have brought substantial improvements to classification and generation tasks across domains and data types (text, images, video, sound). In this article, we analyze the effectiveness of Transformers for both classification and generation, studying classification performance at different granularities and generation quality with human and automatic metrics. The input data consist of MIDI files drawn from several datasets: music from 397 Nintendo Entertainment System (NES) video games, classical pieces, and rock songs by different composers and bands. We perform classification within each dataset to identify the game type or composer of each sample (fine-grained classification), as well as classification at a higher level, in which the three datasets are combined and the goal is simply to label each sample as NES, rock, or classical (coarse-grained classification). The proposed Transformer-based approach outperforms competitors based on other deep learning and machine learning methods. Finally, we carry out the generation task on each dataset and evaluate the resulting samples with human and automatic metrics (local alignment).
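As a concrete illustration of the coarse-grained setting described in the abstract, the following is a minimal sketch, not the authors' implementation, of a Transformer-encoder classifier over tokenized MIDI event sequences. The three target labels (NES, rock, classical) follow the abstract; the tokenization, vocabulary size, model dimensions, and class/module names are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code) of coarse-grained MIDI
# classification with a Transformer encoder. It assumes each MIDI file has
# already been converted into a fixed-length sequence of integer event IDs.
import torch
import torch.nn as nn

class MidiTransformerClassifier(nn.Module):
    def __init__(self, vocab_size=512, d_model=256, nhead=8,
                 num_layers=4, num_classes=3, max_len=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Learned positional embeddings added to the token embeddings.
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)  # NES / rock / classical

    def forward(self, tokens):                 # tokens: (batch, seq_len) int IDs
        x = self.embed(tokens) + self.pos[:, :tokens.size(1)]
        x = self.encoder(x)                    # self-attention over the event sequence
        return self.head(x.mean(dim=1))        # mean-pool over time, then classify

# Toy usage with random token sequences standing in for tokenized MIDI files.
model = MidiTransformerClassifier()
batch = torch.randint(0, 512, (2, 1024))       # two dummy sequences
logits = model(batch)                          # shape: (2, 3)
```

The same encoder backbone could be reused for the fine-grained tasks by changing `num_classes` to the number of composers or game types within a single dataset; this correspondence is an assumption for illustration, not a claim about the authors' exact pipeline.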
Cited by
1 article.
1. An Electronic Music Classification Model Based on Machine Learning Algorithm to Optimize Children's Neural Network;2023 3rd International Conference on Smart Generation Computing, Communication and Networking (SMART GENCON);2023-12-29