Affiliation:
1. Universidade Aberta, Lisbon, Portugal
2. CIAC, Universidade Aberta, Lisbon, Portugal
Abstract
Acquiring and analyzing increasingly multi-modal sensor data of human faces is a key problem in computer vision, with applications in research, entertainment, and security. However, because of the demanding nature of the problem, there is a lack of affordable, easy-to-use systems that offer real-time operation, annotation capability, 3D analysis, replay capability, and a frame rate high enough to detect facial patterns in working environments. In the context of an ongoing effort to develop tools that support the monitoring and evaluation of the human affective state in working environments, the authors investigate the applicability of a facial analysis approach to map and evaluate human facial patterns. The challenge is to interpret this multi-modal sensor data and classify it with deep learning algorithms while fulfilling the following requirements: annotation capability, 3D analysis, and replay capability. In addition, the authors aim to continuously enhance the output of the system through a training process, in order to improve its results and evaluate different patterns of the human face.
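As a minimal illustrative sketch of the kind of classification described above (not the authors' implementation), the snippet below defines a small PyTorch convolutional network that maps 4-channel RGB-D face crops to a handful of facial-expression classes. The layer sizes, the 64x64 input resolution, and the label set are assumptions introduced here purely for illustration.

# Illustrative sketch only: a toy CNN over RGB-D face crops.
# All class names, layer sizes and the 64x64 resolution are assumptions.
import torch
import torch.nn as nn

EXPRESSION_CLASSES = ["neutral", "happy", "sad", "surprised", "angry"]  # hypothetical label set

class RGBDExpressionNet(nn.Module):
    """Small CNN that classifies 4-channel (RGB + depth) face crops."""
    def __init__(self, num_classes: int = len(EXPRESSION_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1),  # 4 input channels: R, G, B, depth
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    model = RGBDExpressionNet()
    batch = torch.rand(8, 4, 64, 64)   # 8 synthetic RGB-D face crops
    logits = model(batch)
    print(logits.argmax(dim=1))        # predicted class index per face

In this sketch the depth map is simply treated as a fourth input channel; a working system along the lines described in the abstract would additionally have to support annotation, 3D analysis, and replay of the captured sequences.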