Abstract
Today, most music performers and vocalists cannot read mensural notation fluently, so conductors and music ensembles require modern editions in order to perform historical music. However, converting printed white mensural sheet music into audible, performable modern notation currently requires elaborate manual editing by specialized music scholars. To close this gap, the present research proposes an algorithm that automatically converts scanned music scores of that historic period (the sixteenth and seventeenth centuries) into a file format readable by current notation software. This includes the optical recognition of musical symbols and their semantic interpretation, for example the determination of note pitch. Based on works by the composers Sebastian Ertel and Paul Peuerl, the article presents a case study that combines convolutional neural networks with further computational processing steps into an integrated four-step optical music recognition (OMR) approach. As a result, the musical material used could be correctly converted into the MusicXML format with a recognition rate of 99 per cent. Building on these promising results, we propose in future work to extend our approach to other materials, epochs, note symbols and advanced semantic analyses.
Publisher
Edinburgh University Press
Subject
Human-Computer Interaction, General Arts and Humanities, General Computer Science
Cited by
2 articles.