Note-level singing melody transcription with transformers

Author:

Park Jonggwon¹, Choi Kyoyun², Oh Seola¹, Kim Leekyung¹, Park Jonghun¹

Affiliation:

1. Department of Industrial Engineering and Institute for Industrial Systems Innovation, Seoul National University, Seoul, Korea

2. Institute of Engineering Research, Seoul National University, Seoul, Korea

Abstract

Recognizing a singing melody from an audio signal in terms of the music notes' pitch, onset, and offset, referred to as note-level singing melody transcription, has been studied as a critical task in the field of automatic music transcription. The task is challenging due to the varying timbre and vibrato of each voice and the ambiguity of onset and offset of the human voice compared with other instrumental sounds. This paper proposes a note-level singing melody transcription model using sequence-to-sequence Transformers. The singing melody annotation is expressed as a monophonic melody sequence and used as the decoder sequence. Overlapping decoding is introduced to solve the problem of the context between segments being broken. Applying pitch augmentation and adding a noisy dataset with data cleansing turn out to be effective in preventing overfitting and generalizing the model performance. Ablation studies demonstrate the effects of the proposed techniques in note-level singing melody transcription, both quantitatively and qualitatively. The proposed model outperforms other models in note-level singing melody transcription performance for all the metrics considered. For fundamental frequency metrics, the voice detection performance of the proposed model is comparable to that of a vocal melody extraction model. Finally, subjective human evaluation demonstrates that the results of the proposed model are perceived as more accurate than the results of a previous study.

Publisher

IOS Press

Subject

Artificial Intelligence, Computer Vision and Pattern Recognition, Theoretical Computer Science

