Author:
Haruvi Aia, Kopito Ronen, Brande-Eilat Noa, Kalev Shai, Kay Eitan, Furman Daniel
Abstract
The goal of this study was to investigate the effect of sounds on human focus and to identify the properties that contribute most to increasing and decreasing focus in people within their natural, everyday environment. Participants (N=62, 18-65y) performed various tasks on a tablet computer while listening to either no background sounds (silence), popular music playlists designed to increase focus (pre-recorded songs in a particular sequence), or engineered soundscapes that were personalized to individual listeners (digital audio composed in real-time based on input parameters such as heart rate, time of day, location, etc.). Sounds were delivered to participants through headphones while simultaneously their brain signals were recorded by a portable electroencephalography headband. Participants completed four one-hour long sessions at home during which different sound content played continuously. Using brain decoding technology, we obtained individual participant focus levels over time and used this data to analyze the effects of various properties of sound. We found that while participants were working, personalized soundscapes increased their focus significantly above silence (p=0.008), while music playlists did not have a significant effect. For the young adult demographic (18-36y), all sound content tested was significantly better than silence at producing focus (p=0.001-0.009). Personalized soundscapes increased focus the most relative to silence, but playlists of pre-recorded songs also increased focus significantly during specific time intervals. Ultimately, we found that it is possible to accurately predict human focus levels that will be experienced in response to sounds a priori based on the sound's physical properties. We then applied this finding to compare between music genres and revealed that classical music, engineered soundscapes, and natural sounds were the best genres for increasing focus, while pop and hip-hop were the worst.
These insights can enable human and artificial intelligence composers to produce increases or decreases in listener focus with high temporal (millisecond) precision. Future research will include real-time adaptation of sound libraries for other functional objectives beyond affecting focus, such as affecting listener enjoyment, stress, and memory.
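The abstract reports that listener focus can be predicted a priori from a sound's physical properties. As an illustration only, the sketch below shows what such a pipeline could look like: extract simple physical features from a waveform (RMS energy and spectral centroid, computed via an FFT), then map them to a focus score with a linear model passed through a sigmoid. The feature choice, the `predict_focus` function, and the weights are all hypothetical assumptions, not the authors' actual model.

```python
import numpy as np

def spectral_features(signal, sr):
    """Return two simple physical properties of a sound:
    spectral centroid (Hz) and RMS energy."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))
    rms = float(np.sqrt(np.mean(signal ** 2)))
    return np.array([centroid, rms])

def predict_focus(features, weights, bias):
    """Hypothetical linear model mapping sound features to a
    focus score squashed into (0, 1) by a sigmoid."""
    z = float(features @ weights + bias)
    return 1.0 / (1.0 + np.exp(-z))

# Example: a 1-second 440 Hz tone sampled at 8 kHz, scored with
# made-up weights (for illustration only).
sr = 8000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
feats = spectral_features(tone, sr)
score = predict_focus(feats, weights=np.array([-1e-4, 0.5]), bias=0.2)
```

In a real system the feature vector would be far richer (e.g. tempo, dynamics, spectral flux over short windows), which is what would give the millisecond-scale temporal precision mentioned above.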
Publisher
Cold Spring Harbor Laboratory
Cited by 5 articles.