Author:
Bergler Christian, Smeele Simeon Q., Tyndel Stephen A., Barnhill Alexander, Ortiz Sara T., Kalan Ammie K., Cheng Rachael Xi, Brinkløv Signe, Osiecka Anna N., Tougaard Jakob, Jakobsen Freja, Wahlberg Magnus, Nöth Elmar, Maier Andreas, Klump Barbara C.
Abstract
Bioacoustic research spans a wide range of biological questions and applications, relying on identification of target species or smaller acoustic units, such as distinct call types. However, manually identifying the signal of interest is time-intensive, error-prone, and becomes unfeasible with large data volumes. Therefore, machine-driven algorithms are increasingly applied to various bioacoustic signal identification challenges. Nevertheless, biologists still face major difficulties when trying to transfer existing animal- and/or scenario-specific machine learning approaches to their own animal datasets and scientific questions. This study presents an animal-independent, open-source deep learning framework, along with a detailed user guide. Three signal identification tasks, commonly encountered in bioacoustics research, were investigated: (1) target signal vs. background noise detection, (2) species classification, and (3) call type categorization. ANIMAL-SPOT successfully segmented human-annotated target signals in data volumes representing 10 distinct animal species and 1 additional genus, resulting in a mean test accuracy of 97.9%, together with an average area under the ROC curve (AUC) of 95.9%, when predicting on unseen recordings. Moreover, an average segmentation accuracy and F1-score of 95.4% was achieved on the publicly available BirdVox-Full-Night data corpus. In addition, multi-class species and call type classification resulted in 96.6% and 92.7% accuracy on unseen test data, as well as 95.2% and 88.4% on excerpts produced by previous animal-specific machine-based detection. Furthermore, an Unweighted Average Recall (UAR) of 89.3% outperformed the multi-species classification baseline system of the ComParE 2021 Primate Sub-Challenge. Besides animal independence, ANIMAL-SPOT does not rely on expert knowledge or special computing resources, thereby making deep-learning-based bioacoustic signal identification accessible to a broad audience.
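The Unweighted Average Recall (UAR) reported for the ComParE 2021 comparison is, by definition, the mean of the per-class recalls, so every class counts equally regardless of how many samples it contributes. A minimal sketch of that computation follows; the label values are illustrative and not taken from the ANIMAL-SPOT datasets:

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """Mean of per-class recall over all classes present in y_true."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred in zip(y_true, y_pred):
        total[truth] += 1
        if truth == pred:
            correct[truth] += 1
    recalls = [correct[cls] / total[cls] for cls in total]
    return sum(recalls) / len(recalls)

# Hypothetical labels for a 3-class primate task (illustrative only).
y_true = ["chimp", "chimp", "mandrill", "mandrill", "mandrill", "guenon"]
y_pred = ["chimp", "mandrill", "mandrill", "mandrill", "guenon", "guenon"]

# Per-class recalls: chimp 1/2, mandrill 2/3, guenon 1/1 -> mean ~0.722
print(round(unweighted_average_recall(y_true, y_pred), 3))
```

Because rare classes weigh as much as common ones, UAR penalizes a classifier that simply favors the majority class, which is why it is the standard metric in the ComParE challenge series.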
Funder
Deutsche Forschungsgemeinschaft
Friedrich-Alexander-Universität Erlangen-Nürnberg
Publisher
Springer Science and Business Media LLC
References: 62 articles.
Cited by: 11 articles.