Improving the Performance of Open-Set Recognition with Generated Fake Data
Published: 2023-03-09
Issue: 6
Volume: 12
Page: 1311
ISSN: 2079-9292
Container-title: Electronics
Language: en
Short-container-title: Electronics
Author:
Halász András Pál 1, Al Hemeary Nawar 1, Daubner Lóránt Szabolcs 1, Zsedrovits Tamás 1, Tornai Kálmán 1
Affiliation:
1. Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, 1083 Budapest, Hungary
Abstract
Open-set recognition models, in addition to generalizing to unseen instances of known categories, have to identify samples of unknown classes during the inference phase. The latter task is considerably harder because there is very little or no information about the properties of these unknown classes. Several methodologies are available to handle the unknowns. One possible approach is to construct models for them using generated inputs labeled as unknown. Generative adversarial networks are frequently deployed to generate synthetic samples representing the unknown classes in order to build better models for the known classes. In this paper, we introduce a novel approach that improves the accuracy of recognition methods while reducing their time complexity. Instead of generating synthetic input data to train the neural networks, feature vectors are generated at the output of a hidden layer. This approach results in a less complex structure for the neural network representation of the classes. A distance-based classifier implemented by a convolutional neural network is used in our implementation. Our solution’s open-set detection performance reaches an AUC value of 0.839 on the CIFAR-10 dataset, while the closed-set accuracy is 91.4%, the highest among the open-set recognition methods. The generator and discriminator networks are much smaller when synthetic inner features are generated, and there is no need to run these samples through the convolutional part of the classifier. Hence, this solution not only performs better than generating samples in the input space but is also less expensive in terms of computational complexity.
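To make the idea concrete, the following is a minimal PyTorch sketch of training a small GAN directly in the feature space of a classifier: the generator produces fake hidden-layer feature vectors that skip the convolutional backbone and are fed to the classifier head as an extra "unknown" class. This is not the authors' implementation; all sizes and names (FEATURE_DIM, LATENT_DIM, FeatureGenerator, etc.) are illustrative assumptions, and the head here is a plain linear layer rather than the paper's distance-based classifier.

```python
import torch
import torch.nn as nn

NUM_KNOWN = 10        # e.g. CIFAR-10 known classes
FEATURE_DIM = 128     # assumed size of the hidden feature layer
LATENT_DIM = 64       # assumed GAN latent size

class ConvBackbone(nn.Module):
    """Convolutional part of the classifier: image -> feature vector."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, FEATURE_DIM), nn.ReLU(),
        )
    def forward(self, x):
        return self.features(x)

class ClassifierHead(nn.Module):
    """Head operating on feature vectors; the last index is the 'unknown' class."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(FEATURE_DIM, NUM_KNOWN + 1)
    def forward(self, f):
        return self.fc(f)

class FeatureGenerator(nn.Module):
    """Small GAN generator producing fake hidden-layer features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, FEATURE_DIM),
        )
    def forward(self, z):
        return self.net(z)

class FeatureDiscriminator(nn.Module):
    """Small GAN discriminator telling real features from generated ones."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )
    def forward(self, f):
        return self.net(f)

# One illustrative step on a dummy batch.
backbone, head = ConvBackbone(), ClassifierHead()
gen, disc = FeatureGenerator(), FeatureDiscriminator()
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)            # dummy known-class images
labels = torch.randint(0, NUM_KNOWN, (8,))    # their known-class labels

real_feats = backbone(images)                 # hidden-layer features of real data
fake_feats = gen(torch.randn(8, LATENT_DIM))  # generated fake features

# GAN losses operate directly in feature space; no convolutions are needed.
d_loss = bce(disc(real_feats.detach()), torch.ones(8, 1)) + \
         bce(disc(fake_feats.detach()), torch.zeros(8, 1))
g_loss = bce(disc(fake_feats), torch.ones(8, 1))

# The classifier head sees real features with their true labels and fake
# features labeled as the extra "unknown" class.
unknown_labels = torch.full((8,), NUM_KNOWN)
cls_loss = ce(head(real_feats), labels) + ce(head(fake_feats.detach()), unknown_labels)
print(d_loss.item(), g_loss.item(), cls_loss.item())
```

Because the generator and discriminator operate on low-dimensional feature vectors rather than full images, they can be far smaller than an image-space GAN, which is the computational saving the abstract describes.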
Funder
National Research, Development, and Innovation Office
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering