Abstract
Neural-network classifiers achieve high accuracy when predicting the class of an input that they were trained to identify. Maintaining this accuracy in dynamic environments, where inputs frequently fall outside the fixed set of initially known classes, remains a challenge. We consider the problem of monitoring the classification decisions of neural networks in the presence of novel classes. For this purpose, we generalize our recently proposed abstraction-based monitor from binary output to real-valued quantitative output. This quantitative output enables new applications, two of which we investigate in the paper. As our first application, we introduce an algorithmic framework for active monitoring of a neural network, which allows us to learn new classes dynamically and yet maintain high monitoring performance. As our second application, we present an offline procedure to retrain the neural network to improve the monitor’s detection performance without deteriorating the network’s classification accuracy. Our experimental evaluation demonstrates both the benefits of our active monitoring framework in dynamic scenarios and the effectiveness of the retraining procedure.
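To make the idea of a quantitative abstraction-based monitor concrete, the following is a minimal sketch, not the paper's actual construction: per known class, an axis-aligned box abstraction is built over feature vectors observed during training, and at run time the monitor outputs the distance of a new feature vector to the box of the predicted class (0.0 means the input falls inside the abstraction; larger values hint at a novel class). The class name `BoxMonitor` and its methods are hypothetical.

```python
import numpy as np

class BoxMonitor:
    """Illustrative quantitative monitor using box abstractions.

    For each known class, record per-dimension min/max bounds of feature
    vectors seen on training samples. The quantitative output is the
    Euclidean distance of a run-time feature vector to the box of the
    predicted class.
    """

    def __init__(self):
        self.boxes = {}  # class label -> (low, high) bound vectors

    def train(self, features, label):
        # Grow the class box to cover the new feature vector.
        lo, hi = self.boxes.get(label, (features.copy(), features.copy()))
        self.boxes[label] = (np.minimum(lo, features), np.maximum(hi, features))

    def quantitative_output(self, features, predicted_label):
        if predicted_label not in self.boxes:
            return float("inf")  # no abstraction yet for this class
        lo, hi = self.boxes[predicted_label]
        # Per-dimension excess beyond the box; zero inside the box.
        excess = np.maximum(features - hi, 0.0) + np.maximum(lo - features, 0.0)
        return float(np.linalg.norm(excess))

# Usage: two training samples span the box [0,1] x [0,1] for class "cat".
monitor = BoxMonitor()
monitor.train(np.array([0.0, 0.0]), "cat")
monitor.train(np.array([1.0, 1.0]), "cat")
print(monitor.quantitative_output(np.array([0.5, 0.5]), "cat"))  # inside: 0.0
print(monitor.quantitative_output(np.array([2.0, 1.0]), "cat"))  # outside: 1.0
```

A binary monitor would only report in/out of the box; the real-valued distance is what enables thresholding, ranking, and the active-monitoring and retraining applications described in the abstract.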
Publisher
Springer Science and Business Media LLC
Subject
Information Systems, Software
References
54 articles.
Cited by
2 articles.