Abstract
The use of the computer as a sound generator is omnipresent in current music production and ranges from music notation programs playing back samples via MIDI control to specially programmed sound synthesis programs. The term 'computer' is generally understood as a complete set of hardware and software, but a closer look at this set is definitely worthwhile and poses some systematic challenges. In the early days of digital sound synthesis in real time, the hardware was strongly connected to the resulting sound. Control was exercised by means of a programming language or specially designed software, which, depending on the stage of development, offered greater or lesser possibilities for intervention. But do these sound generators actually fulfill the definition of a musical instrument, and what exactly is that definition? What about so-called software instruments, which, being partly hardware-independent, allow users to play music? How can and should interfaces be classified, given that they are hardware extensions developed specifically for musical use but still require (special) software and other technical equipment for sound generation and, above all, output? And who actually decides on the sound and handling of the new instrument, since the integration of computers into musical works usually takes place in close cooperation between composers, musicians, engineers and programmers? In order to discuss these questions, not only new methodological approaches but also cooperation between the disciplines are indispensable and at the same time rewarding.
Publisher
Musikwissenschaftliches Seminar der Universität Paderborn und der Hochschule für Musik Detmold
Cited by
1 article.
1. Composing AI. In: KI-Kritik / AI Critique, 2023-05-09.