[1] L. Smidl, A. Chylek, and J. Svec, "A multimodal dialogue system for air traffic control trainees based on discrete-event simulation," Proc. Interspeech 2016, San Francisco, USA, pp. 379-380, 2016.
[2] A. Maier, J. Hough, and D. Schlangen, "Towards deep end-of-turn prediction for situated spoken dialogue systems," Proc. Interspeech 2017, Stockholm, Sweden, pp. 1676-1680, 2017. doi: 10.21437/interspeech.2017-1593
[3] M. Li, Z. He, and J. Wu, "Target-based state and tracking algorithm for spoken dialogue system," Proc. Interspeech 2016, San Francisco, USA, pp. 2711-2715, 2016. doi: 10.21437/interspeech.2016-800
[4] C. Liu, P. Xu, and R. Sarikaya, "Deep contextual language understanding in spoken dialogue systems," Proc. Interspeech 2015, Dresden, Germany, pp. 120-124, 2015. doi: 10.21437/interspeech.2015-39
[5] P.-H. Su, D. Vandyke, M. Gasic, D. Kim, N. Mrksic, T.-H. Wen, and S. Young, "Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems," Proc. Interspeech 2015, Dresden, Germany, pp. 2007-2011, 2015. doi: 10.21437/interspeech.2015-456