Author:
Philipp Röchner, Henrique O. Marques, Ricardo J. G. B. Campello, Arthur Zimek
Abstract
An outlier probability is the probability that an observation is an outlier. Typically, outlier detection algorithms calculate real-valued outlier scores to identify outliers. Converting outlier scores into outlier probabilities increases the interpretability of outlier scores for domain experts and makes outlier scores from different outlier detection algorithms comparable. Although several transformations to convert outlier scores to outlier probabilities have been proposed in the literature, there is no common understanding of good outlier probabilities and no standard approach to evaluating them. We require that good outlier probabilities be sharp, refined, and calibrated. To evaluate these properties, we adapt existing measures and propose novel measures that use ground-truth labels indicating which observations are outliers and which are inliers. The refinement and calibration measures partition the outlier probabilities into bins or use kernel smoothing. Compared to the evaluation of probabilities in supervised learning, several aspects are relevant when evaluating outlier probabilities, mainly due to the imbalanced and often unsupervised nature of outlier detection. First, stratified and weighted measures are necessary to evaluate the probabilities of outliers well. Second, the joint use of the sharpness, refinement, and calibration errors makes it possible to independently measure the corresponding characteristics of outlier probabilities. Third, equiareal bins, where the product of the number of observations per bin and the bin length is constant, balance the number of observations per bin against the bin length, allowing accurate evaluation across different outlier probability ranges. Finally, we show that good outlier probabilities, according to the proposed measures, improve the performance of the follow-up task of converting outlier probabilities into labels for outliers and inliers.
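To make the equiareal-binning idea and the notion of a binned calibration error on labeled data more concrete, the following Python sketch may help. It is an illustration, not the authors' procedure: the function names (equiareal_bin_edges, binned_calibration_error), the greedy bin construction, and the per-bin target area of N / k^2 (the area each bin would have if bins were simultaneously equal-width and equal-frequency) are assumptions made here for demonstration only.

    import numpy as np

    def equiareal_bin_edges(probs, n_bins=10):
        """Greedily grow each bin until (observations in bin) * (bin width)
        reaches a common target area; N / n_bins**2 is assumed as that target."""
        p = np.sort(np.asarray(probs, dtype=float))
        n = len(p)
        target = n / n_bins ** 2                 # assumed per-bin target area
        edges, left, count = [0.0], 0.0, 0
        for x in p:
            count += 1
            if count * (x - left) >= target and len(edges) < n_bins:
                edges.append(x)                  # close the current bin here
                left, count = x, 0
        if edges[-1] < 1.0:
            edges.append(1.0)                    # last bin extends to 1
        return np.array(edges)

    def binned_calibration_error(probs, labels, edges):
        """Weighted average over bins of |mean predicted probability -
        observed outlier fraction|, i.e. an ECE-style calibration error."""
        probs, labels = np.asarray(probs, dtype=float), np.asarray(labels)
        error, n = 0.0, len(probs)
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (probs >= lo) & ((probs < hi) if hi < 1.0 else (probs <= hi))
            if in_bin.any():
                gap = abs(probs[in_bin].mean() - labels[in_bin].mean())
                error += in_bin.sum() / n * gap
        return error

    # Tiny usage example with synthetic outlier probabilities and ground-truth
    # labels (1 = outlier, 0 = inlier), roughly consistent with the probabilities.
    rng = np.random.default_rng(0)
    p = rng.beta(0.5, 5.0, size=200)             # skewed, as outlier scores often are
    y = (rng.random(200) < p).astype(int)
    edges = equiareal_bin_edges(p, n_bins=5)
    print(edges)
    print(binned_calibration_error(p, y, edges))

In this sketch, bins in sparsely populated probability ranges become wider while densely populated ranges get narrower bins, which is the balancing behaviour the abstract attributes to equiareal binning.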
Funder
Danmarks Frie Forskningsfond, Denmark
Johannes Gutenberg-Universität Mainz
Publisher
Springer Science and Business Media LLC