Abstract
thebeat is a Python package for working with temporal sequences and rhythms in the behavioral and cognitive sciences, as well as in bioacoustics. It provides functionality for creating experimental stimuli, and for visualizing and analyzing temporal data. Sequences, sounds, and experimental trials can be generated using single lines of code. thebeat contains functions for calculating common rhythmic measures, such as interval ratios, and for producing plots, such as circular histograms. thebeat saves researchers time when creating experiments, and provides the first steps in collecting widely accepted methods for use in timing research. thebeat is an open-source, ongoing, and collaborative project, and can be extended for use in specialized subfields. thebeat integrates easily with the existing Python ecosystem, allowing one to combine our tested code with custom-made scripts. The package was specifically designed to be useful for both skilled and novice programmers. thebeat provides a foundation for working with temporal sequences onto which additional functionality can be built. This combination of specificity and plasticity should facilitate research in multiple research contexts and fields of study.
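To illustrate the kind of rhythmic measure the abstract mentions, the interval ratios of a sequence can be computed from its inter-onset intervals (IOIs) in a few lines of plain Python. This is an illustrative sketch using the common definition ratio_k = IOI_k / (IOI_k + IOI_{k+1}), not thebeat's own implementation; the function name and signature here are hypothetical.

```python
def interval_ratios(iois):
    """Compute interval ratios for adjacent pairs of inter-onset intervals (IOIs).

    Each ratio is IOI_k / (IOI_k + IOI_{k+1}), so an isochronous
    (equal-interval) sequence yields ratios of 0.5 throughout.
    """
    return [iois[k] / (iois[k] + iois[k + 1]) for k in range(len(iois) - 1)]

# An isochronous sequence gives ratios of 0.5:
print(interval_ratios([500, 500, 500, 500]))  # → [0.5, 0.5, 0.5]

# A long-short pattern alternates between ratios near 2/3 and 1/3:
print(interval_ratios([600, 300, 600, 300]))
```

Under this definition, a sequence's ratios are scale-invariant: doubling every IOI (e.g., playing the rhythm at half tempo) leaves the ratios unchanged, which is one reason this measure is popular in cross-species and cross-cultural rhythm research.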
Funder
European Research Council
Max-Planck-Gesellschaft
Human Frontier Science Program
Publisher
Springer Science and Business Media LLC