Automated identification of chicken distress vocalizations using deep learning models

Authors:

Mao Axiu 1, Giraudet Claire S. E. 1,2, Liu Kai 1,3, De Almeida Nolasco Inês 4, Xie Zhiqin 5, Xie Zhixun 5, Gao Yue 6, Theobald James 7, Bhatta Devaki 7, Stewart Rebecca 8, McElligott Alan G. 1,2

Affiliation:

1. Department of Infectious Diseases and Public Health, Jockey Club College of Veterinary Medicine and Life Sciences, City University of Hong Kong, Hong Kong SAR, People's Republic of China

2. Centre for Animal Health and Welfare, Jockey Club College of Veterinary Medicine and Life Sciences, City University of Hong Kong, Hong Kong SAR, People's Republic of China

3. Animal Health Research Centre, Chengdu Research Institute, City University of Hong Kong, Chengdu, People's Republic of China

4. School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK

5. Guangxi Key Laboratory of Veterinary Biotechnology, Guangxi Veterinary Research Institute, 51 North Road You Ai, Nanning 530001, Guangxi, People's Republic of China

6. School of Computer Science and Electronic Engineering, University of Surrey, Guildford, UK

7. Agsenze, Parc House, Kingston Upon Thames, London, UK

8. Dyson School of Design Engineering, Imperial College London, London, UK

Abstract

The annual global production of chickens exceeds 25 billion birds, which are often housed in very large groups, numbering in the thousands. Distress calling triggered by various sources of stress has been suggested as an ‘iceberg indicator’ of chicken welfare. However, to date, the identification of distress calls largely relies on manual annotation, which is very labour-intensive and time-consuming. Thus, a novel convolutional neural network-based model, light-VGG11, was developed to automatically identify chicken distress calls using recordings (3363 distress calls and 1973 natural barn sounds) collected on an intensive farm. Light-VGG11 was modified from VGG11 and has significantly fewer parameters (9.3 million versus 128 million) and a 55.88% faster detection speed, while displaying comparable performance, i.e. precision (94.58%), recall (94.89%), F1-score (94.73%) and accuracy (95.07%), making it more useful for model deployment in practice. To further improve light-VGG11's performance, we investigated the impacts of different data augmentation techniques (i.e. time masking, frequency masking, mixing spectrograms of the same class and adding Gaussian noise) and found that they could improve distress call detection by up to 1.52%. Our demonstration of distress call detection on continuous audio recordings shows the potential for developing technologies to monitor the output of this call type in large, commercial chicken flocks.
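For readers unfamiliar with the augmentation techniques named in the abstract, the sketch below illustrates what SpecAugment-style time/frequency masking, same-class spectrogram mixing and additive Gaussian noise look like when applied to a log-mel spectrogram. This is not code from the paper: the function names, mask widths, noise level and spectrogram shape are illustrative assumptions only.

```python
import numpy as np

def time_mask(spec, max_width=20, rng=np.random):
    """Zero out a random block of time frames (illustrative time masking)."""
    spec = spec.copy()
    n_frames = spec.shape[1]
    width = rng.randint(0, max_width + 1)
    start = rng.randint(0, max(1, n_frames - width))
    spec[:, start:start + width] = 0.0
    return spec

def freq_mask(spec, max_width=8, rng=np.random):
    """Zero out a random block of mel bands (illustrative frequency masking)."""
    spec = spec.copy()
    n_mels = spec.shape[0]
    width = rng.randint(0, max_width + 1)
    start = rng.randint(0, max(1, n_mels - width))
    spec[start:start + width, :] = 0.0
    return spec

def mix_same_class(spec_a, spec_b, alpha=0.5):
    """Blend two spectrograms drawn from the same class; the label is unchanged."""
    return alpha * spec_a + (1.0 - alpha) * spec_b

def add_gaussian_noise(spec, std=0.01, rng=np.random):
    """Add small Gaussian perturbations to every time-frequency bin."""
    return spec + rng.normal(0.0, std, size=spec.shape)

# Example usage on a dummy 64-mel x 128-frame spectrogram (shape assumed).
spec = np.random.rand(64, 128)
augmented = add_gaussian_noise(freq_mask(time_mask(spec)))
```

Each transform can be applied independently or chained, as in the final line, to generate additional training examples without collecting new recordings.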

Funder

BBSRC

InnovateUK

Publisher

The Royal Society

Subject

Biomedical Engineering, Biochemistry, Biomaterials, Bioengineering, Biophysics, Biotechnology

Cited by 15 articles.
