RBUE: a ReLU-based uncertainty estimation method for convolutional neural networks

Author:

Xia Yufeng, Zhang Jun, Gong Zhiqiang, Jiang Tingsong, Yao Wen

Abstract

Convolutional neural networks (CNNs) have demonstrated powerful predictive performance on a variety of tasks. However, it remains a challenge to estimate the uncertainty of these predictions simply and accurately. Deep Ensemble is widely considered the state-of-the-art method for accurate uncertainty estimation, but it is expensive to train and test. MC-Dropout is another popular method that is less costly but lacks diversity in its predictions, resulting in less accurate uncertainty estimates. To combine the benefits of both, we introduce a ReLU-Based Uncertainty Estimation (RBUE) method. Instead of relying on the randomness of the Dropout module during the test phase (MC-Dropout) or on the randomness of the initial weights of CNNs (Deep Ensemble), RBUE uses the randomness of the activation function to obtain diverse outputs in the testing phase and thereby estimate uncertainty. Under this method, we propose the strategy MC-DropReLU and develop the strategy MC-RReLU. Because activation functions are distributed uniformly throughout a CNN, their randomness transfers well to the output and yields more diverse predictions, which improves the accuracy of the uncertainty estimate. Moreover, our method is simple to implement and does not require modifying the existing model. We experimentally validate RBUE on three widely used datasets: CIFAR10, CIFAR100, and TinyImageNet. The experiments demonstrate that our method has competitive performance while being more favorable in training time.
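A minimal PyTorch sketch of the general idea described above, assuming the MC-RReLU strategy amounts to keeping randomized-ReLU (RReLU) activations stochastic at test time and averaging several forward passes; the small network, the function name mc_rrelu_predict, and the pass count T are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


# Hypothetical CNN that uses randomized ReLU (RReLU) activations. RReLU samples
# its negative slope from U(lower, upper) while the module is in training mode,
# which is what makes repeated forward passes stochastic.
class SmallRReLUNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.RReLU(1 / 8, 1 / 3),
            nn.Conv2d(32, 64, 3, padding=1), nn.RReLU(1 / 8, 1 / 3),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


@torch.no_grad()
def mc_rrelu_predict(model: nn.Module, x: torch.Tensor, t: int = 20):
    """Run T stochastic forward passes; return the mean softmax prediction
    and the predictive entropy as an uncertainty score."""
    model.eval()
    # Keep only the RReLU modules stochastic; everything else stays in eval mode.
    for m in model.modules():
        if isinstance(m, nn.RReLU):
            m.train()
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(t)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


if __name__ == "__main__":
    net = SmallRReLUNet()
    images = torch.randn(4, 3, 32, 32)  # e.g. a CIFAR10-sized batch
    mean_probs, uncertainty = mc_rrelu_predict(net, images, t=20)
    print(mean_probs.shape, uncertainty)
```

As with MC-Dropout, the only change at inference time is running the forward pass several times with the stochastic modules left active; no retraining or architectural modification is assumed.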

Publisher

Springer Science and Business Media LLC

Subject

Computational Mathematics, Engineering (miscellaneous), Information Systems, Artificial Intelligence


Cited by 1 article.