Abstract
Web image datasets curated online inherently contain ambiguous in-distribution instances and out-of-distribution instances, which we collectively call non-conforming (NC) instances. In many recent approaches for mitigating the negative effects of NC instances, the core implicit assumption is that NC instances can be found via entropy maximization. For "entropy" to be well-defined, we interpret the output prediction vector of an instance as the parameter vector of a multinomial random variable, with respect to some trained model with a softmax output layer. Hence, entropy maximization is based on the idealized assumption that NC instances have predictions that are "almost" uniformly distributed. However, in real-world web image datasets, there are numerous NC instances whose predictions are far from uniformly distributed. To tackle this limitation of entropy maximization, we propose the $(\alpha,\beta)$-generalized KL divergence, $\mathcal{D}_{\text{KL}}^{\alpha,\beta}(p\Vert q)$, which can be used to identify significantly more NC instances. Theoretical properties of $\mathcal{D}_{\text{KL}}^{\alpha,\beta}(p\Vert q)$ are proven, and we also show empirically that a simple use of $\mathcal{D}_{\text{KL}}^{\alpha,\beta}(p\Vert q)$ outperforms all baselines on the NC instance identification task. Building upon the $(\alpha,\beta)$-generalized KL divergence, we also introduce a new iterative training framework, GenKL, that identifies and relabels NC instances. When evaluated on three web image datasets, Clothing1M, Food101/Food101N, and mini WebVision 1.0, we achieve new state-of-the-art classification accuracies: $81.34\%$, $85.73\%$, and $78.99\%$/$92.54\%$ (top-1/top-5), respectively.
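As a minimal sketch of the entropy-maximization baseline that the abstract critiques (not of the paper's $\mathcal{D}_{\text{KL}}^{\alpha,\beta}(p\Vert q)$, whose definition is not reproduced here), the code below treats each softmax prediction vector as the parameter vector of a multinomial random variable, computes its Shannon entropy $H(p) = -\sum_i p_i \log p_i$, and flags high-entropy instances as NC candidates. $H(p)$ attains its maximum $\log K$ exactly when $p$ is uniform over the $K$ classes, which is the idealized assumption the paper argues fails for many real NC instances. All function names and the threshold choice here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def prediction_entropy(p, eps=1e-12):
    """Shannon entropy of a softmax prediction vector p, viewed as the
    parameter vector of a multinomial distribution. Maximal (= log K)
    when p is uniform over the K classes."""
    p = np.asarray(p, dtype=np.float64)
    return float(-np.sum(p * np.log(p + eps)))

def flag_nc_by_entropy(pred_matrix, threshold):
    """Flag instances whose prediction entropy exceeds `threshold` as
    candidate non-conforming (NC) instances. `pred_matrix` has shape
    (n_instances, n_classes); rows are softmax outputs summing to 1."""
    entropies = np.array([prediction_entropy(p) for p in pred_matrix])
    return entropies > threshold, entropies

# Example: a near-uniform prediction is flagged; a confident one is not.
preds = np.array([
    [0.26, 0.24, 0.25, 0.25],  # near-uniform: entropy ~ log 4, flagged
    [0.94, 0.02, 0.02, 0.02],  # confident: low entropy, not flagged
])
flags, ents = flag_nc_by_entropy(preds, threshold=0.9 * np.log(preds.shape[1]))
print(flags, ents)
```

The abstract's point is that this score misses NC instances whose predictions are confidently wrong rather than near-uniform; GenKL replaces the entropy criterion with $\mathcal{D}_{\text{KL}}^{\alpha,\beta}(p\Vert q)$ to catch such cases.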
Funder
National Research Foundation Singapore
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Software