OcularSeg: Accurate and Efficient Multi-Modal Ocular Segmentation in Non-Constrained Scenarios
Published: 2024-05-17
Issue: 10
Volume: 13
Page: 1967
ISSN: 2079-9292
Container-title: Electronics
Language: en
Short-container-title: Electronics
Author:
Zhang Yixin 1,2; Wang Caiyong 1,2 (ORCID); Li Haiqing 1,2 (ORCID); Sun Xianyun 1,2 (ORCID); Tian Qichuan 1,2; Zhao Guangzhe 1,2 (ORCID)
Affiliation:
1. School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
2. Beijing Key Laboratory of Robot Bionics and Function Research, Beijing 100044, China
Abstract
Multi-modal ocular biometrics has recently garnered significant attention due to its potential to enhance the security and reliability of biometric identification systems in non-constrained scenarios. However, accurately and efficiently segmenting multi-modal ocular traits (periocular, sclera, iris, and pupil) remains challenging due to noise interference and environmental changes, such as specular reflection, gaze deviation, blur, occlusions from eyelids/eyelashes/glasses, and illumination/spectrum/sensor variations. To address these challenges, we propose OcularSeg, a densely connected encoder–decoder model incorporating an eye shape prior. The model utilizes EfficientNetV2 as a lightweight backbone in the encoder for extracting multi-level visual features while minimizing network parameters. Moreover, we introduce the Expectation–Maximization attention (EMA) unit to progressively refine the model’s attention and roughly aggregate features from each ocular modality. In the decoder, we design a bottom-up dense subtraction module (DSM) to amplify the information disparity between encoder layers, facilitating the acquisition of high-level semantic detailed features at varying scales and thereby enhancing the precision of detailed ocular region prediction. Additionally, boundary- and semantic-guided eye shape priors are integrated as auxiliary supervision during training to optimize the position, shape, and internal topological structure of segmentation results. Due to the scarcity of datasets with multi-modal ocular segmentation annotations, we manually annotated three challenging eye datasets captured in near-infrared and visible light scenarios. Experimental results on the newly annotated and existing datasets demonstrate that our model achieves state-of-the-art performance in intra- and cross-dataset scenarios while maintaining efficient execution.
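The abstract does not spell out how the dense subtraction module (DSM) computes the information disparity between encoder layers. A minimal sketch of the underlying subtraction idea, assuming an element-wise absolute difference between adjacent encoder stages after upsampling the coarser one (the function names and shapes here are illustrative, not the paper's implementation):

```python
import numpy as np

def subtraction_unit(fa, fb):
    """Element-wise absolute difference between two feature maps of the
    same shape, highlighting where the two stages disagree."""
    return np.abs(fa - fb)

def upsample2x(f):
    """Nearest-neighbour 2x spatial upsampling of a (C, H, W) array."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

# Toy encoder features at two adjacent scales (channels, height, width).
f_high = np.random.rand(8, 16, 16)  # higher-resolution encoder stage
f_low = np.random.rand(8, 8, 8)     # lower-resolution encoder stage

# Disparity feature between the adjacent stages, at the finer resolution.
diff = subtraction_unit(f_high, upsample2x(f_low))
print(diff.shape)  # (8, 16, 16)
```

In a dense, bottom-up variant, such difference features would be computed between every pair of (suitably upsampled) encoder stages and fused in the decoder, so that multi-scale disparities all contribute to the final prediction.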
Funder
National Natural Science Foundation of China; Beijing Natural Science Foundation; Young Elite Scientist Sponsorship Program by BAST; Pyramid Talent Training Project of BUCEA
References: 57 articles.