Abstract
Passive acoustic surveys provide a convenient and cost-effective way to monitor animal populations. Methods for conducting and analysing such surveys, especially for performing automated call recognition from sound recordings, are undergoing rapid development. However, no standard metric exists to evaluate the proposed changes. Furthermore, most metrics currently in use are specific to a single stage of the survey workflow and therefore may not reflect the overall effects of a design choice.

Here, we attempt to define and evaluate the effectiveness of surveys conducted in two common frameworks of population inference: occupancy modelling and spatially explicit capture-recapture (SCR). Specifically, we investigate precision (the standard error of the final estimate) as a possible metric of survey performance, but we show that it does not lead to generally optimal designs in occupancy modelling. In contrast, the precision of the SCR density estimate can be optimised with fewer experiment-specific parameters. We illustrate these issues using simulations.

We further demonstrate how SCR precision can be used to evaluate design choices in a field survey of little spotted kiwi (Apteryx owenii). We show that precision correctly measures tradeoffs involving sampling effort. As a case study, we compare automated call recognition software with human annotations. The proposed metric captured the tradeoff between missed calls (an 8% loss of precision when using the software) and faster data throughput (a 60% gain), whereas common metrics based on per-second agreement failed to identify optimal improvements and could be inflated by deleting data.

Due to the flexibility of the SCR framework, the approach presented here can be applied to a wide range of survey designs. Because precision is directly related to the power to detect temporal trends or other effects in the subsequent inference, this metric evaluates design choices at the application level and can capture tradeoffs that are missed by stage-specific metrics, thus enabling reliable comparison between different experimental designs and analysis methods.
Publisher: Cold Spring Harbor Laboratory