Authors:
Boer Yoshiven, Valencia Lianca, Prasetyo Simeon Yuda
Abstract
Detecting emotion from facial expressions has become increasingly important as interaction between AI systems and humans grows more common, for example through AI avatars. Such systems often cannot tell what their human partner is feeling, which can make their decisions inaccurate. AI avatars can be used to monitor a human partner's mental-health conditions, such as stress, depression, and anxiety, which can lead to suicide. This research aims to find the best model for detecting emotion from facial expressions by comparing several pre-trained DCNN models: VGG16, VGG19, ResNet50, ResNet101, Xception, and InceptionV3. All models were evaluated using accuracy, precision, recall, and F1-score. The results show that the VGG19 model achieves the highest accuracy among the compared models, at 65%. The research concludes that a model's performance depends on various factors, such as the size and quality of the dataset, the complexity of the problem being solved, and the hyperparameters used during training.
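To make the comparison setup concrete, the sketch below shows one common way to fine-tune a pre-trained DCNN (VGG19 is used as the example) for facial-expression classification and to compute the four reported metrics. It is an illustrative sketch, not the authors' exact pipeline: the input size, number of emotion classes, classification head, and hyperparameters are assumptions, and the dataset arrays (x_train, y_train, x_test, y_test) are placeholders.

```python
# Illustrative sketch (not the authors' exact code): transfer learning with a
# pre-trained DCNN for facial-expression classification, evaluated with
# accuracy, precision, recall, and F1-score.
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, models
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

NUM_CLASSES = 7            # assumed: 7 basic emotion categories
IMG_SHAPE = (224, 224, 3)  # assumed input size for VGG19

# Load ImageNet weights without the top classifier and freeze the convolutional base.
base = VGG19(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
base.trainable = False

# Add a small classification head for emotion prediction (head design is assumed).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# x_train, y_train, x_test, y_test are assumed one-hot-encoded facial-expression arrays.
# model.fit(x_train, y_train, validation_split=0.1, epochs=20, batch_size=32)

# Evaluation with the four metrics reported in the abstract.
# y_pred = np.argmax(model.predict(x_test), axis=1)
# y_true = np.argmax(y_test, axis=1)
# print("accuracy :", accuracy_score(y_true, y_pred))
# print("precision:", precision_score(y_true, y_pred, average="macro"))
# print("recall   :", recall_score(y_true, y_pred, average="macro"))
# print("f1-score :", f1_score(y_true, y_pred, average="macro"))
```

The other compared backbones (VGG16, ResNet50, ResNet101, Xception, InceptionV3) can be swapped in by replacing the base model, keeping the head and metrics the same so the comparison stays fair.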