Authors:
Li Huakang, Huang Jie, Guo Minyi, Zhao Qunfei
Abstract
Mobile robots communicating with people would benefit from being able to detect sound sources to help localize interesting events in real-life settings. We propose using a spherical robot with four microphones to determine the spatial locations of multiple sound sources in ordinary rooms. Arrival time disparities extracted from phase-difference histograms are used to estimate the time differences between microphone pairs. A precedence effect model suppresses the influence of echoes in reverberant environments. To integrate the spatial cues of different microphones, we map the correlations between microphone pairs onto a 3D map corresponding to the azimuth and elevation of the sound source direction. Experimental results indicate that, with the Echo Avoidance (EA) model, the proposed system resolves the sound source distribution clearly and precisely, even for concurrent sources in reverberant environments.
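The core cue described in the abstract, the time difference of arrival (TDOA) between a microphone pair, can be illustrated with a minimal sketch. Note this is not the paper's phase-difference-histogram method; it is the simpler classical approach of locating the peak of the cross-correlation between two microphone signals, with the function name `estimate_tdoa` chosen here for illustration:

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Estimate the time difference of arrival (seconds) of sig_a
    relative to sig_b from the peak of their cross-correlation."""
    # Full cross-correlation covers lags from -(len(sig_b)-1) to len(sig_a)-1.
    corr = np.correlate(sig_a, sig_b, mode="full")
    # Convert the peak index back to a signed lag in samples.
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / fs

# Example: sig_a is sig_b delayed by 5 samples at fs = 1000 Hz.
fs = 1000
rng = np.random.default_rng(0)
b = rng.standard_normal(200)
a = np.concatenate([np.zeros(5), b])[:200]
tdoa = estimate_tdoa(a, b, fs)  # expected: 5 / fs = 0.005 s
```

Given a TDOA, the bearing relative to the pair's baseline follows from geometry, roughly arcsin(c·τ/d) for sound speed c and microphone spacing d; the paper integrates such pairwise cues from all four microphones into an azimuth-elevation map, and uses the EA model to reject correlation peaks caused by echoes.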
Publisher
Fuji Technology Press Ltd.
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Human-Computer Interaction
References (26 articles)
1. G. Medioni and S. B. Kang, “Emerging topics in computer vision,” Prentice Hall PTR, Upper Saddle River, NJ, USA, 2004.
2. J. Huang, C. Zhao, Y. Ohtake, H. Li, and Q. Zhao, “Robot Position Identification Using Specially Designed Landmarks,” Proc. of the IEEE Instrumentation and Measurement Technology Conf. (IMTC 2006), pp. 2091-2094, 2006.
3. R. S. Heffner and H. E. Heffner, “Evolution of sound localization in mammals,” The evolutionary biology of hearing, pp. 691-715, 1992.
4. J. Huang, N. Ohnishi, and N. Sugie, “Building ears for robots: sound localization and separation,” Artificial Life and Robotics, Vol.1, No.4, pp. 157-163, 1997.
5. P. Arabi and S. Zaky, “Integrated vision and sound localization,” Proc. of the Third Int. Conf. on Information Fusion (FUSION 2000), Vol.2, 2000.