Affiliation:
1. O.M. Beketov National University of Urban Economy in Kharkiv
Abstract
This article explores the creation of music through the automated generation of sound from images. The proposed method of automatic image-to-sound generation combines neural networks with light-music theory. Translating visual art into music with machine-learning models can make extensive museum collections accessible to the visually impaired by moving artworks from an inaccessible sensory modality (sight) to an accessible one (hearing). A review of other audio-visual models shows that prior research has focused on improving model performance with multimodal information and on making visual information accessible through audio presentation; accordingly, the proposed workflow consists of two parts. The first part of the algorithm determines the tonality of a piece: it produces a graphic annotation that maps the image to a musical series using all of its colour characteristics, and this annotation is fed to the input of the neural network. While researching sound-synthesis methods, we reviewed and analysed the most popular ones: additive synthesis, FM synthesis, phase modulation, sampling, wavetable synthesis, linear-arithmetic synthesis, subtractive synthesis, and vector synthesis. Sampling was chosen for the implementation because it gives the most realistic instrument sound, which is an important requirement. The second part, generating music from an image, is performed by a recurrent neural network built as a two-layer stacked LSTM with 512 hidden units per layer, which assembles spectrograms from the rows of the input image and converts them into an audio clip. Twenty-nine modern musical compositions were used to train the network. To test it, we compiled a set of ten test images of different types (abstract images, landscapes, cities, and people), from which original musical compositions were generated and stored. In conclusion, the compositions generated from abstract images sound more pleasant than those generated from landscapes; overall, the impression of the generated compositions is positive.
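To make the colour-to-music step concrete, below is a minimal Python sketch of the kind of mapping light-music theory suggests: the hue wheel is divided into twelve sectors, one per pitch class, and lightness drives loudness. The sector boundaries and the pixel_to_note helper are illustrative assumptions, not the exact rule used in the article.

# A minimal sketch of a light-music colour-to-pitch mapping: each hue
# sector corresponds to one of the 12 pitch classes, and lightness is
# reused as loudness. The mapping is an illustrative assumption only.
import colorsys

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def pixel_to_note(r: int, g: int, b: int) -> tuple:
    """Map an RGB pixel (0..255 per channel) to (pitch class, loudness in 0..1)."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    pitch = PITCH_CLASSES[int(h * 12) % 12]  # hue sector -> pitch class
    return pitch, l                          # lightness -> loudness

print(pixel_to_note(255, 0, 0))  # pure red -> ('C', 0.5) under this mapping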
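The generation stage can be sketched in a similar spirit. The PyTorch fragment below shows a two-layer stacked LSTM with 512 hidden units per layer that reads an image row by row and emits one spectrogram frame per row, matching the architecture described in the abstract; the feature sizes, the class name ImageToSpectrogram, and the Griffin-Lim inversion mentioned afterwards are assumptions for illustration, not the authors' code.

# A minimal sketch of the image-to-spectrogram network described in the
# abstract: two stacked LSTM layers, 512 hidden units each, one image
# row per timestep. All dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class ImageToSpectrogram(nn.Module):
    def __init__(self, row_features: int, spec_bins: int, hidden: int = 512):
        super().__init__()
        # Two-layer LSTM, 512 hidden units per layer, as in the abstract.
        self.lstm = nn.LSTM(row_features, hidden, num_layers=2, batch_first=True)
        # Project each hidden state to one spectrogram frame.
        self.head = nn.Linear(hidden, spec_bins)

    def forward(self, rows: torch.Tensor) -> torch.Tensor:
        # rows: (batch, n_rows, row_features) -- one timestep per image row.
        out, _ = self.lstm(rows)
        return self.head(out)  # (batch, n_rows, spec_bins)

# Example: a 128x128 RGB image flattened row by row into 128 timesteps.
model = ImageToSpectrogram(row_features=128 * 3, spec_bins=513)
image_rows = torch.rand(1, 128, 128 * 3)
spectrogram = model(image_rows)  # shape: (1, 128, 513)

A magnitude spectrogram produced this way can then be inverted to a waveform with a standard phase-reconstruction routine such as librosa.griffinlim.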
Keywords: recurrent neural network, light-music theory, spectrogram, generation of compositions.
Publisher
O.M. Beketov National University of Urban Economy in Kharkiv