Uncalibrated Eye Gaze Estimation using SE-ResNext with Unconstrained Head Movement and Ambient Light Change

Authors:

Fatahipour H.1, Mosavi M. R.1 (ORCID), Fariborz J.1

Affiliation:

1. Iran University of Science and Technology

Abstract

Technological advances in smartphones, tablets, computer games, virtual reality, the metaverse, and other fields have made gaze estimation (GE) with standard hardware more necessary than ever. GE is also useful in areas such as psychology, driving safety, and advertising. This paper proposes a structure based on convolutional neural networks (CNNs). Several well-known CNNs are implemented and, to accelerate the comparison, trained on a subset of the GazeCapture dataset; the SE-ResNext network, which achieves the best results in this initial training, is selected. When trained on the entire dataset, the proposed structure attains a test error of 1.32 cm. Ambient light is an important factor in GE accuracy and clearly affects different GE methods. To address it, the dataset is divided into low-light and bright-light subsets. Bright-light samples are far more abundant than low-light ones, which biases gaze-estimator training. Standard data augmentation methods are therefore employed to increase the number of low-light samples, and the gaze estimator is retrained. As a result, the GE error is reduced from 1.20 to 1.06 cm for bright-light environments and from 3.39 to 1.87 cm for low-light environments. To examine the estimator's robustness to head movement, the test set is manually and intuitively classified into five subsets by head position, yielding test errors of 1.27, 1.427, 1.496, 1.952, and 2.466 cm for the frontal, roll-to-right, roll-to-left, yaw-to-right, and yaw-to-left positions, respectively.
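The low-light/bright-light split and oversampling described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the intensity threshold, the jitter ranges, and all function names here are assumptions chosen for the example, and the "standard data augmentation" is represented by a simple photometric brightness/contrast jitter.

```python
import numpy as np

# Hypothetical cutoff: mean pixel intensity (0-255) below this is "low-light".
LOW_LIGHT_THRESHOLD = 60

def split_by_brightness(images, threshold=LOW_LIGHT_THRESHOLD):
    """Partition face crops into low-light and bright-light subsets
    by mean pixel intensity."""
    low, bright = [], []
    for img in images:
        (low if img.mean() < threshold else bright).append(img)
    return low, bright

def jitter(img, rng):
    """Simple photometric augmentation: random contrast gain and
    brightness offset (one of many standard augmentation choices)."""
    gain = rng.uniform(0.8, 1.2)   # contrast factor
    bias = rng.uniform(-20, 20)    # brightness offset
    return np.clip(img.astype(np.float32) * gain + bias, 0, 255).astype(np.uint8)

def oversample(low_light, target_count, seed=0):
    """Augment the low-light subset with jittered copies until it
    reaches target_count samples, reducing the class imbalance."""
    rng = np.random.default_rng(seed)
    out = list(low_light)
    while len(out) < target_count:
        src = low_light[rng.integers(len(low_light))]
        out.append(jitter(src, rng))
    return out
```

After oversampling, the rebalanced set (original bright-light samples plus the enlarged low-light set) would be used to retrain the gaze estimator, which is the step the abstract credits for cutting the low-light error from 3.39 to 1.87 cm.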

Publisher

Research Square Platform LLC

