Abstract
In sign languages, communication relies on hand gestures, facial expressions, and body language, and signs vary considerably with the position and movement of different body parts. These variations pose challenges for tasks such as sentiment analysis, where translating hand gestures alone is insufficient. In this study, we introduce a novel approach to sentiment analysis in Turkish Sign Language (TİD); to the best of our knowledge, this is the first work in the literature to incorporate both hand gestures and facial expressions for this purpose. We developed and fine-tuned customized models for emotion recognition from facial expressions using the RAF-DB dataset and for sentiment analysis from hand gestures using the AUTSL dataset. For evaluation, we additionally compiled a test set of sign language videos enriched with facial expressions. Our findings indicate that facial expressions carry more sentiment information in sign language than hand gestures alone, while integrating both modalities yields further performance gains.
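As a minimal illustrative sketch of the multimodal integration described above (not the paper's actual pipeline), the two models' class probabilities could be combined by weighted late fusion. The function name, sentiment labels, and fusion weight below are all assumptions made for illustration.

```python
# Hypothetical late-fusion sketch: combine class probabilities from a
# facial-expression model and a hand-gesture model into one sentiment
# prediction. Names and the fusion weight are illustrative assumptions.
import numpy as np

SENTIMENTS = ["negative", "neutral", "positive"]

def fuse_sentiment(face_probs: np.ndarray,
                   hand_probs: np.ndarray,
                   face_weight: float = 0.6) -> str:
    """Weighted average of per-modality softmax outputs.

    A face_weight above 0.5 reflects the finding that facial expressions
    carry more sentiment information than hand gestures alone.
    """
    fused = face_weight * face_probs + (1.0 - face_weight) * hand_probs
    return SENTIMENTS[int(np.argmax(fused))]

# Example: the face model leans positive while the hand model is uncertain.
face_probs = np.array([0.10, 0.15, 0.75])   # facial-expression model output
hand_probs = np.array([0.30, 0.40, 0.30])   # hand-gesture model output
print(fuse_sentiment(face_probs, hand_probs))  # -> "positive"
```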