Abstract
In many contexts, customized and weighted classification scores are designed to evaluate the quality of the predictions produced by neural networks. However, there is a discrepancy between the maximization of such scores and the minimization of the loss function in the training phase. In this paper, we provide a complete theoretical setting that formalizes weighted classification metrics and allows the construction of losses that drive the model to optimize these metrics of interest. After a detailed theoretical analysis, we show that our framework includes as particular instances well-established approaches such as classical cost-sensitive learning, weighted cross-entropy loss functions, and value-weighted skill scores.
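As a concrete illustration of one of the approaches the abstract mentions, the following is a minimal sketch of a class-weighted cross-entropy loss in NumPy; the function name, weights, and example values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def weighted_cross_entropy(y_true, probs, class_weights):
    """Class-weighted cross-entropy (illustrative sketch).

    Each sample's log-loss is scaled by the weight of its true class,
    so misclassifying a heavily weighted class costs more.
    """
    probs = np.clip(probs, 1e-12, 1.0)          # avoid log(0)
    n = y_true.shape[0]
    per_sample = -np.log(probs[np.arange(n), y_true])
    weights = class_weights[y_true]             # weight of each true label
    return np.sum(weights * per_sample) / np.sum(weights)

# Example: penalize errors on class 1 five times more than on class 0.
y_true = np.array([0, 1, 1, 0])
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4],
                  [0.7, 0.3]])
loss = weighted_cross_entropy(y_true, probs,
                              class_weights=np.array([1.0, 5.0]))
```

With these weights, the poorly predicted sample of class 1 (predicted probability 0.4) dominates the loss, which is the intended cost-sensitive behavior.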
Funder
Università degli Studi di Padova
Publisher
Springer Science and Business Media LLC