Multimodal Interaction for Cobot Using MQTT
Published: 2023-08-03
Issue: 8
Volume: 7
Page: 78
ISSN: 2414-4088
Container-title: Multimodal Technologies and Interaction
Language: en
Short-container-title: MTI
Authors:
José Rouillard 1, Jean-Marc Vannobel 1
Affiliation:
1. CNRS, Centrale Lille, University of Lille, UMR 9189 CRIStAL, F-59000 Lille, France
Abstract
For greater efficiency, human–machine and human–robot interactions must be designed with multimodality in mind. To allow the use of several interaction modalities, such as voice, touch, and gaze tracking, across several different devices (computer, smartphone, tablet, etc.), and to integrate possible connected objects, an effective and secure means of communication between the different parts of the system is necessary. This is even more important with a collaborative robot (cobot) sharing the same space and working very close to the human during their tasks. This study presents research work in the field of multimodal interaction for a cobot using the MQTT protocol, in virtual (Webots) and real worlds (ESP microcontrollers, Arduino, IOT2040). We show how MQTT can be used efficiently, with a common publish/subscribe mechanism for several entities of the system, in order to interact with connected objects (such as LEDs and conveyor belts), robotic arms (such as the Niryo Ned), or mobile robots. We compare the use of MQTT with that of the Firebase Realtime Database used in several of our previous research works. We show how a “pick–wait–choose–and place” task can be carried out jointly by a cobot and a human, and what this implies in terms of communication and ergonomic rules, in health and industrial contexts (people with disabilities, and teleoperation).
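The publish/subscribe mechanism described in the abstract can be sketched with a minimal in-memory broker that routes messages between system entities (cobot arm, LEDs, conveyor belt) using MQTT-style topic matching. This is an illustrative sketch only: the topic names are assumptions, not the paper's actual topics, and a real deployment would use an MQTT client library (e.g. paho-mqtt) connected to a broker such as Mosquitto.

```python
# Minimal in-memory sketch of MQTT-style publish/subscribe routing.
# Topic names such as "factory/conveyor/#" are illustrative assumptions.

def topic_matches(pattern: str, topic: str) -> bool:
    """True if an MQTT subscription pattern matches a topic.

    Supports the standard wildcards: '+' matches exactly one level,
    '#' matches any number of trailing levels.
    """
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":
            return True                      # '#' swallows the remaining levels
        if i >= len(t_levels):
            return False
        if p != "+" and p != t_levels[i]:
            return False
    return len(p_levels) == len(t_levels)

class MiniBroker:
    """Toy broker: each entity (cobot, LED, conveyor) subscribes to topics."""
    def __init__(self):
        self.subs = []                       # (pattern, callback) pairs

    def subscribe(self, pattern, callback):
        self.subs.append((pattern, callback))

    def publish(self, topic, payload):
        for pattern, callback in self.subs:
            if topic_matches(pattern, topic):
                callback(topic, payload)

# Example: a conveyor belt and a robotic arm each react to their own topics.
broker = MiniBroker()
log = []
broker.subscribe("factory/conveyor/#", lambda t, m: log.append(("conveyor", m)))
broker.subscribe("cobot/arm/+", lambda t, m: log.append(("arm", m)))
broker.publish("factory/conveyor/belt1", "start")
broker.publish("cobot/arm/gripper", "close")
# log == [("conveyor", "start"), ("arm", "close")]
```

The key design point is that publishers and subscribers never reference each other directly, only shared topics, which is what lets heterogeneous devices (microcontrollers, robots, simulation) join and leave the system independently.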
Subject
Computer Networks and Communications, Computer Science Applications, Human-Computer Interaction, Neuroscience (miscellaneous)
Cited by: 1 article.