1. Gaschler, A., Jentzsch, S., Giuliani, M., Huth, K., de Ruiter, J., Knoll, A.: Social behavior recognition using body posture and head pose for human-robot interaction. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2128–2133. IEEE, Vilamoura-Algarve, Portugal (2012)
2. Ngiam, J., Khosla, A., Kim, M., Nam, J., Lee, H., Ng, A.Y.: Multimodal deep learning. In: 28th International Conference on Machine Learning (ICML), pp. 689–696 (2011)
3. Khan, S., Tunçer, B.: Gesture and speech elicitation for 3D CAD modeling in conceptual design. Autom. Constr. 106, 102847 (2019)
4. Vuletic, T., Duffy, A., Hay, L., McTeague, C., Campbell, G., Grealy, M.: Systematic literature review of hand gestures used in human computer interaction interfaces. Int. J. Hum. Comput. Stud. 129, 74–94 (2019)
5. LaViola Jr., J.J., Zeleznik, R.C.: Flex and pinch: a case study of whole hand input design for virtual environment interaction. In: Second IASTED International Conference on Computer Graphics and Imaging, Innsbruck, Austria, pp. 221–225 (1999)