[1] L. Chen, H. Wei, and J. Ferryman, “A survey of human motion analysis using depth imagery,” Pattern Recogn. Lett., vol.34, no.15, pp.1995-2006, Nov. 2013.
[2] C.H. Pham, Q.K. Le, and T.H. Le, “Human action recognition using dynamic time warping and voting algorithm,” VNU J. Science: Comp. Science & Com. Eng., vol.30, no.3, pp.22-30, 2014.
[3] B. Ni, Y. Pei, Z. Liang, L. Lin, and P. Moulin, “Integrating multi-stage depth-induced contextual information for human action recognition and localization,” Proc. 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG 2013), pp.1-8, 2013.
[4] P. Doliotis, A. Stefan, C. McMurrough, D. Eckhard, and V. Athitsos, “Comparing gesture recognition accuracy using color and depth information,” Proc. 4th International Conference on Pervasive Technologies Related to Assistive Environments (PETRA'11), pp.20-20, 2011.
[5] H.S. Cho, K.H. Jang, J.H. Han, and B.S. Kang, “A background removal scheme based on a running average for motion recognition using depth information,” Proc. 28th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC 2013), pp.474-476, Yeosu, Korea, June 2013.