Abstract
Handing over objects is a collaborative task that requires participants to synchronize their actions spatially and temporally and to adhere to social norms. If one participant is a social robot and the other a visually impaired human, actions should preferably be coordinated by voice. User requirements for such a Voice-User Interface (VUI), as well as its required structure and content, have been unknown so far. In our study, we applied the user-centered design process to develop a VUI for visually impaired humans and humans with full sight. Iterative development was conducted with interviews, workshops, and user tests to derive VUI requirements, dialog structure, and content. A final VUI prototype was evaluated in a standardized experiment with 60 subjects who were visually impaired or fully sighted. Results show that the VUI enabled all subjects to successfully receive objects, with an error rate of only 1.8%. Likeability and accuracy were rated best, while habitability and speed of interaction were shown to need improvement. Qualitative feedback supported and elaborated on these results, e.g., indicating how to shorten some dialogs. In conclusion, we recommend that inclusive VUI design for social robots should provide precise information for handover processes and pay attention to social manners.
Subject
Artificial Intelligence, Control and Optimization, Mechanical Engineering
Cited by: 3 articles.