Affiliation:
1. Orange Labs & Université Grenoble Alpes, Grenoble, France
2. Université Grenoble Alpes, Grenoble, France
3. Orange Labs, Grenoble, France
4. Université catholique de Louvain, Louvain-la-Neuve, Belgium
Abstract
This paper presents a model-based approach for designing Polymodal Menus, a new type of multimodal adaptive menu for small-screen graphical user interfaces in which item selection and adaptivity are responsive to more than one interaction modality: a menu item can be selected graphically, tactilely, vocally, gesturally, or by any combination of these. The prediction window containing the most predicted menu items by assignment, equivalence, or redundancy is made equally adaptive. For this purpose, an adaptive menu model maintains the most predictable menu items according to various prediction methods. This model is exploited throughout various steps defined on a new Adaptivity Design Space based on a Perception-Decision-Action cycle coming from cognitive psychology. A user experiment compares four conditions of Polymodal Menus (graphical, vocal, gestural, and mixed) in terms of menu selection time, error rate, subjective user satisfaction, and user preference, when item prediction has a low or high level of accuracy. Polymodal Menus offer alternative input/output modalities for selecting menu items in various contexts of use, especially when the graphical modality is constrained.
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Human-Computer Interaction, Social Sciences (miscellaneous)
Cited by
3 articles.
1. Exploring a Design Space of Graphical Adaptive Menus;ACM Transactions on Interactive Intelligent Systems;2020-03-31
2. AB4Web;Proceedings of the ACM on Human-Computer Interaction;2019-06-13
3. G-Menu: A Keyword-by-Gesture Based Dynamic Menu Interface for Smartphones;Human-Computer Interaction. Recognition and Interaction Technologies;2019