RoboFinch: A versatile audio‐visual synchronised robotic bird model for laboratory and field research on songbirds

Authors:

Ralph Simon (1, 2), Judith Varkevisser (3), Ezequiel Mendoza (4), Klaus Hochradel (5), Rogier Elsinga (1), Peter G. Wiersma (1), Esmee Middelburg (3), Eva Zoeter (3), Constance Scharff (4), Katharina Riebel (3), Wouter Halfwerk (1)

Affiliation:

1. Department of Ecological Science, VU University Amsterdam, Amsterdam, The Netherlands

2. Behavioral Ecology and Conservation Lab, Nuremberg Zoo, Nuremberg, Germany

3. Institute of Biology Leiden, Leiden University, Leiden, The Netherlands

4. Department of Animal Behavior, Institute of Biology, Freie Universität Berlin, Berlin, Germany

5. Institute of Measurement and Sensor Technology, UMIT – Private University for Health Sciences, Medical Informatics and Technology GmbH, Hall in Tirol, Austria

Abstract

Singing in birds is accompanied by beak, head and throat movements. These visual cues have long been hypothesised to facilitate vocal communication, including social interactions and song acquisition, but they have seen little experimental study. To address whether audio-visual cues are relevant for birdsong, we used high-speed video recording, 3D scanning, 3D printing and colour-realistic painting to create RoboFinch, an open-source adult-mimicking robot that matches the temporal and chromatic properties of songbird vision. We exposed several groups of juvenile zebra finches, during their song developmental phase, to one of six singing robots that moved their beaks in synchrony with their song, and compared them with birds in a non-synchronised treatment and two control treatments. Juveniles in the synchronised treatment approached the robot setup from the start of the experiment and progressively increased the time they spent singing, in contrast to the other treatment groups. Interestingly, birds in the synchronised group seemed to actively listen during tutor song playback: they sang less during the actual song playback than birds in the asynchronous and audio-only control treatments. Our open-source RoboFinch setup thus provides an unprecedented tool for the systematic study of the function and integration of audio-visual cues associated with song behaviour. Realistic head and beak movements aligned to specific song elements may allow future studies to assess the importance of multisensory cues during song development, sexual signalling and social behaviour. All software and assembly instructions are open source, and the robot can easily be adapted to other species. Experimental manipulation of stimulus combinations and synchronisation can further elucidate how receivers integrate audio-visual cues and how these cues may enhance signal detection, recognition, learning and memory.
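The core idea of synchronising a robot's beak movements to song playback can be illustrated with a minimal sketch: derive a per-frame amplitude envelope from the audio and map it to a beak-opening angle that a servo controller could track. This is not the authors' published pipeline; the 10 ms frame size, linear amplitude-to-angle mapping and 30° maximum opening are illustrative assumptions.

```python
# Hypothetical sketch (not the RoboFinch implementation): map a song's
# amplitude envelope to beak-opening angles for synchronised playback.
import numpy as np

SR = 44100          # audio sample rate (Hz)
FRAME = 441         # 10 ms analysis frames (assumed update rate)
MAX_ANGLE = 30.0    # assumed maximum beak opening (degrees)

def beak_angles(signal: np.ndarray) -> np.ndarray:
    """Map each 10 ms frame's RMS amplitude to a beak angle in degrees."""
    n = len(signal) // FRAME
    frames = signal[: n * FRAME].reshape(n, FRAME)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    peak = rms.max()
    if peak == 0:
        return np.zeros(n)
    # Linear scaling: the loudest frame fully opens the beak.
    return MAX_ANGLE * rms / peak

# Example: a 0.5 s synthetic "syllable" with a rising-falling envelope.
t = np.linspace(0, 0.5, int(SR * 0.5), endpoint=False)
syllable = np.sin(2 * np.pi * 440 * t) * np.sin(np.pi * t / 0.5)
angles = beak_angles(syllable)  # one target angle per 10 ms frame
```

In a real setup the angle trajectory would be precomputed from the tutor song so the servo commands and audio playback share a common clock; any per-frame computation at playback time would add latency between sound and movement.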

Funder

Human Frontier Science Program

Publisher

Wiley

Subject

Ecological Modeling; Ecology, Evolution, Behavior and Systematics

Cited by 3 articles.
