Abstract
Multimodal human–computer interaction has long been pursued to provide not only more compelling interactive experiences but also more accessible interfaces to mobile devices. With advances in mobile technology and affordable sensors, multimodal research that leverages and combines multiple interaction modalities (such as speech, touch, vision, and gesture) has become increasingly prominent. This article provides a framework for the key aspects of mid-air gesture and speech-based interaction for older adults. It reviews the literature on multimodal interaction and on older adults as technology users, and summarises the main findings for this user group. Building on these findings, it describes a number of crucial factors to consider when designing multimodal mobile technology for older adults. The aim of this work is to promote the usefulness and potential of multimodal technologies based on mid-air gestures and voice input for making older adults' interaction with mobile devices more accessible and inclusive.