Affiliation:
1. Department of Computer Graphics and Multimedia, Universiti Teknologi Malaysia
2. Department of Informatics, Faculty of Information Technology, Institut Teknologi Sepuluh Nopember Surabaya, Indonesia
Abstract
Interaction between humans and humanoid avatar representations is very important in virtual reality and robotics, since a humanoid avatar can represent either a human or a robot in a virtual environment. Many researchers have focused on providing natural interaction for humanoid avatars, or even for robots, using camera tracking, gloves, speech capabilities, brain interfaces, and other devices. This paper presents a new multimodal interaction control for avatars that combines brain signals, facial muscle tension recognition, and glove tracking to change a humanoid avatar's facial expression according to the user's emotional state. The signals from brain activity and muscle movements serve as the emotional stimulus, while the glove acts as the emotion intensity control for the avatar. This multimodal interface can determine when the humanoid avatar needs to change its facial expression or its walking power. The results show that the humanoid avatar exhibits different timelines of walking and facial expression when the user stimulates it with different emotions. This finding is believed to provide new knowledge on controlling the facial expressions and walking of robots and humanoid avatars.
Subject
Artificial Intelligence, Computer Science Applications, Software
Cited by
8 articles.