Combining Contrastive Learning with Auto-Encoder for Out-of-Distribution Detection

Author:

Luo Dawei 1, Zhou Heng 2, Bae Joonsoo 1, Yun Bom 3

Affiliation:

1. Department of Industrial and Information Systems Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea

2. Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea

3. Korean Construction Equipment Technology Institute, Gunsan 10203, Republic of Korea

Abstract

Reliability and robustness are fundamental requirements for deploying deep-learning models in real-world applications. A deployed model must be aware of its own limitations: the ability to recognize out-of-distribution (OOD) data and defer to human intervention is a critical competency. While several OOD-detection frameworks have been introduced and achieved remarkable results, most state-of-the-art (SOTA) models rely on supervised training with annotated data. Acquiring labeled data, however, can be demanding, time-consuming, or in some cases infeasible. Unsupervised learning has therefore gained substantial traction and made noteworthy advances: it allows models to be trained solely on unlabeled data while matching or even exceeding the performance of supervised alternatives. Among unsupervised methods, contrastive learning has proven effective at extracting features for a variety of downstream tasks, while auto-encoders are widely used to learn representations that faithfully reconstruct the input. In this study, we introduce a novel approach that combines contrastive learning with an auto-encoder for OOD detection on unlabeled data. The contrastive objective tightens the clustering of in-distribution data while separating it from OOD data, and the auto-encoder further refines the feature space. Within this framework, data are implicitly classified into in-distribution and OOD categories with a notable degree of precision. Our experiments show that this method surpasses most existing detectors trained on unlabeled, or even labeled, data.
By incorporating an auto-encoder into an unsupervised learning framework trained on the CIFAR-100 dataset, our model improves the detection rate of unsupervised learning methods by an average of 5.8%. Moreover, it outperforms the supervised OOD detector by an average margin of 11%.
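The paper's exact objective is not reproduced on this page. As a rough illustration of the idea described in the abstract, the sketch below combines a SimCLR-style NT-Xent contrastive term over two augmented views with a mean-squared-error reconstruction term; the function names, the temperature `tau`, and the weighting `lam` are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """SimCLR-style NT-Xent contrastive loss.
    z1, z2: (N, d) embeddings of two augmented views of the same N inputs."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize -> cosine sim
    sim = z @ z.T / tau                                # (2N, 2N) scaled similarities
    sim[np.eye(2 * n, dtype=bool)] = -np.inf           # exclude self-pairs
    # the positive for row i is the other view of the same input
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

def recon_loss(x, x_hat):
    """Auto-encoder reconstruction term (mean squared error)."""
    return np.mean((x - x_hat) ** 2)

def joint_loss(z1, z2, x, x_hat, lam=1.0, tau=0.5):
    """Hypothetical combined objective: contrastive + lam * reconstruction."""
    return nt_xent_loss(z1, z2, tau) + lam * recon_loss(x, x_hat)
```

Under such an objective, the contrastive term pulls embeddings of in-distribution views together while pushing other samples apart, and the reconstruction term regularizes the feature space; at test time the reconstruction error (or distance in the embedding space) could then serve as an OOD score.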

Publisher

MDPI AG

Subject

Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science

