Authors:
Wu Wei, Zhang Yuan, Li Yunpeng, Li Chuanyang
Abstract
Biometric authentication helps prevent losses from identity misuse in the artificial intelligence (AI) era. Fusion methods integrate palmprint and palm vein features, leveraging their stability and security, and strengthen counterfeiting prevention and overall system efficiency through multimodal correlations. However, most existing multimodal palmprint and palm vein feature extraction methods extract features from each modality independently, ignoring the correlations among intra-class samples across modalities, which are important for improving recognition performance. In this study, we address these issues by proposing a feature-level joint learning fusion approach for palmprint and palm vein recognition based on modal correlations. The method employs a sparse unsupervised projection algorithm with a "purification matrix" constraint to enhance the consistency of intra-modal features; by minimizing the data reconstruction error, it suppresses noise and extracts compact, discriminative representations. A partial least squares algorithm then extracts, for each modality, a subspace with high grayscale variance and high category correlation. Finally, a weighted sum dynamically balances the contribution of each modality for classification. Experimental evaluations on five multimodal databases, composed of six unimodal databases including the Chinese Academy of Sciences multispectral palmprint and palm vein databases, yielded equal error rates (EER) of 0.0173%, 0.0192%, 0.0059%, 0.0010%, and 0.0008%. Compared with classical palmprint and palm vein fusion recognition methods, the proposed algorithm significantly improves recognition performance, making it suitable for identity recognition in scenarios with high security requirements and of practical value.
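As a rough illustration of the pipeline described in the abstract, the sketch below (not the authors' code) projects each modality into a class-correlated subspace with partial least squares and fuses the two modalities by a weighted sum of matching scores. The synthetic data, feature dimensions, subspace size, and the fusion weight w = 0.6 are hypothetical placeholders, and the sparse unsupervised "purification matrix" step is omitted.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Hypothetical data: 40 subjects, 5 samples per subject, 256-D features per modality.
n_classes, n_per_class, dim = 40, 5, 256
labels = np.repeat(np.arange(n_classes), n_per_class)
palmprint = rng.normal(size=(labels.size, dim)) + 0.05 * labels[:, None]
palmvein = rng.normal(size=(labels.size, dim)) + 0.05 * labels[:, None]

# One-hot class indicators act as the PLS response, so each modality's
# projection captures directions correlated with class membership.
Y = np.eye(n_classes)[labels]

pls_print = PLSRegression(n_components=20).fit(palmprint, Y)
pls_vein = PLSRegression(n_components=20).fit(palmvein, Y)
Zp = pls_print.transform(palmprint)  # palmprint subspace features
Zv = pls_vein.transform(palmvein)    # palm vein subspace features

# Per-class templates: mean subspace feature of each subject and modality.
templates_p = np.stack([Zp[labels == c].mean(axis=0) for c in range(n_classes)])
templates_v = np.stack([Zv[labels == c].mean(axis=0) for c in range(n_classes)])

def fuse_and_classify(zp, zv, w=0.6):
    """Weighted-sum fusion of per-modality cosine similarities (w is an assumed weight)."""
    sp = cosine_similarity(zp[None, :], templates_p)[0]
    sv = cosine_similarity(zv[None, :], templates_v)[0]
    return int(np.argmax(w * sp + (1.0 - w) * sv))

print("predicted:", fuse_and_classify(Zp[0], Zv[0]), "true:", labels[0])
```

In a real system the weight would be tuned (or learned) per modality, and matching would be evaluated against enrolled templates rather than the training samples themselves.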
Publisher
American Institute of Mathematical Sciences (AIMS)