Advancing Virtual Interviews: AI-Driven Facial Emotion Recognition for Better Recruitment

Authors:

Rohini Mehta, Sai Pravalika Pulicharla, Naga Durga Sai Bellamkonda Venkata, Bharath Kumar P, Ritendu Bhattacharyya, Bharani Kumar Depuru

Abstract

Behavior analysis is the process of identifying, modeling, and understanding the nuances and patterns of emotional expression exhibited by individuals. Accurately detecting and predicting facial emotions is particularly challenging in contexts such as remote interviews, which have become increasingly prevalent. Notably, many participants struggle to convey their thoughts to interviewers with a pleasant expression and good posture, which may unfairly diminish their chances of employment despite their qualifications. Artificial intelligence techniques such as image classification offer promising solutions to this challenge. By leveraging AI models, behavior analysis can be applied to perceive and interpret facial reactions, paving the way to anticipate participants' future behaviors from learned patterns. Despite existing work on facial emotion recognition (FER) using image classification, there is limited research focused on platforms such as remote interviews and online courses. In this paper, our primary focus lies on the classes happiness, sadness, anger, surprise, eye contact, neutrality, smile, confusion, and stooped posture. We curated our own dataset, comprising a diverse range of sample interviews captured through participants' video recordings, along with other images documenting facial expressions and speech during interviews. Additionally, we integrated existing datasets such as FER-2013 and the Celebrity Emotions dataset. In our investigation, we explore a variety of AI and deep learning methodologies, including VGG19, ResNet50V2, ResNet152V2, Inception-ResNetV2, Xception, EfficientNet B0, and YOLOv8, to analyze facial patterns and predict emotions. Our results demonstrate an accuracy of 73% using the YOLOv8 model. However, we found that the categories happy and smile, as well as surprised and confused, are not disjoint, leading to potential misclassification. Furthermore, we treated stooped posture as a non-essential class, since the interviews are conducted via webcam, which does not allow posture to be observed. After removing these overlapping categories, accuracy with the YOLOv8 model rose to approximately 76.88%.
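The class-consolidation step described above (merging the overlapping happy/smile and surprised/confused pairs, and dropping the stooped-posture class) can be sketched as a simple label-remapping pass over a dataset's annotations. This is a minimal illustration, not the authors' code; the label strings used here are assumptions.

```python
# Hypothetical sketch of the class-merging step: "smile" is folded into
# "happy", "confused" into "surprised", and "stooped_posture" is dropped
# because webcam framing does not reveal posture. Label names are assumed.

OVERLAP_MAP = {"smile": "happy", "confused": "surprised"}
DROPPED = {"stooped_posture"}

def merge_labels(labels):
    """Collapse overlapping emotion classes and drop non-essential ones."""
    merged = []
    for label in labels:
        if label in DROPPED:
            continue  # posture cannot be judged from a webcam crop
        merged.append(OVERLAP_MAP.get(label, label))
    return merged

print(merge_labels(["happy", "smile", "confused", "stooped_posture", "anger"]))
# → ['happy', 'happy', 'surprised', 'anger']
```

Remapping annotations before training, rather than merging predictions afterwards, means the classifier never has to discriminate between classes the paper found to be non-disjoint.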

Publisher

International Journal of Innovative Science and Research Technology

Cited by 1 article:

1. Hand Gesture Recognition Using Deep Learning;International Journal of Innovative Science and Research Technology (IJISRT);2024-08-13
