Combining Contrastive Learning with Auto-Encoder for Out-of-Distribution Detection
Published: 2023-12-03
Issue: 23
Volume: 13
Page: 12930
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Author:
Luo Dawei 1, Zhou Heng 2, Bae Joonsoo 1, Yun Bom 3
Affiliation:
1. Department of Industrial and Information Systems Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea
2. Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea
3. Korean Construction Equipment Technology Institute, Gunsan 10203, Republic of Korea
Abstract
Reliability and robustness are fundamental requirements for deploying deep-learning models in real-world applications. A deployed model must be aware of its limitations: it needs the critical ability to recognize out-of-distribution (OOD) data and prompt human intervention. While several OOD detection frameworks have been introduced and achieved remarkable results, most state-of-the-art (SOTA) models are trained with supervised learning on annotated data. However, acquiring labeled data can be demanding, time-consuming, or, in some cases, infeasible. Unsupervised learning has therefore gained substantial traction: it allows models to train solely on unlabeled data while matching or even exceeding the performance of supervised alternatives. Among unsupervised methods, contrastive learning has proven effective for feature extraction across a variety of downstream tasks, while auto-encoders are widely used to learn representations that faithfully reconstruct the input data. In this study, we introduce a novel approach that combines contrastive learning with auto-encoders for OOD detection on unlabeled data. Contrastive learning tightens the grouping of in-distribution data and separates it from OOD data, while the auto-encoder further refines the feature space. Within this framework, data are implicitly classified into in-distribution and OOD categories with a notable degree of precision. Our experiments show that this method surpasses most existing detectors that rely on unlabeled, or even labeled, data. By incorporating an auto-encoder into an unsupervised learning framework and training on the CIFAR-100 dataset, our model improves the detection rate of unsupervised learning methods by an average of 5.8%. Moreover, it outperforms the supervised OOD detector by an average margin of 11%.
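To make the joint objective described in the abstract concrete, below is a minimal PyTorch sketch that combines a SimCLR-style contrastive (NT-Xent) loss on two augmented views with an auto-encoder reconstruction loss. The network architecture, the NT-Xent formulation, and the weighting term `recon_weight` are illustrative assumptions for exposition, not the paper's exact method.

```python
# Minimal sketch: contrastive loss + auto-encoder reconstruction loss.
# All architecture and hyperparameter choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveAE(nn.Module):
    def __init__(self, in_dim=3 * 32 * 32, feat_dim=128):
        super().__init__()
        # Encoder maps inputs to the feature space shared by both objectives.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, feat_dim)
        )
        # Decoder reconstructs the input from the feature, refining that space.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, in_dim)
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss over two batches of augmented views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d)
    sim = z @ z.t() / temperature                       # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))          # drop self-similarity
    # The positive for sample i is its other augmented view, at offset n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets.to(z.device))

def joint_loss(model, view1, view2, recon_weight=1.0):
    """Contrastive loss on paired views + reconstruction loss on each view."""
    z1, rec1 = model(view1)
    z2, rec2 = model(view2)
    recon = F.mse_loss(rec1, view1) + F.mse_loss(rec2, view2)
    return nt_xent(z1, z2) + recon_weight * recon

# Usage: view1/view2 stand in for two augmentations of the same image batch.
model = ContrastiveAE()
x1, x2 = torch.rand(8, 3 * 32 * 32), torch.rand(8, 3 * 32 * 32)
loss = joint_loss(model, x1, x2)
loss.backward()
```

Under this objective, in-distribution inputs cluster tightly and reconstruct well, so a score such as reconstruction error or feature-space distance can flag OOD inputs at test time.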
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science