[1] K. Nakadai, T. Lourens, H.G. Okuno and H. Kitano: "Active audition for humanoid," Proc. of 17th National Conference on Artificial Intelligence (AAAI-2000), pp.832–839, 2000.
[2] S. Yamamoto, K. Nakadai, M. Nakano, H. Tsujino, J.M. Valin, K. Komatani, T. Ogata and H.G. Okuno: "Real-time robot audition system that recognizes simultaneous speech in the real world," Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), pp.5333–5338, 2006.
[3] J.G. Fiscus: "A post-processing system to yield reduced word error rates: Recognizer output voting error reduction (ROVER)," Proc. of the Workshop on Automatic Speech Recognition and Understanding (ASRU-97), pp.347–354, 1997.
[4] G. Potamianos, C. Neti, G. Iyengar, A.W. Senior and A. Verma: "A cascade visual front end for speaker independent automatic speechreading," International Journal of Speech Technology, Special Issue on Multimedia, vol.4, no.3–4, pp.193–208, 2001.
[5] S. Tamura, K. Iwano and S. Furui: "A stream-weight optimization method for multi-stream HMMs based on likelihood value normalization," Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-05), pp.SP–P5.2, 2005.