Abstract
To quantify the optimum achievable performance of classification tasks, the Shannon mutual information is a natural information-theoretic metric, as it is directly related to the probability of error. The data produced by many imaging systems can be modeled by mixture distributions. The mutual information between mixture data and the class label has no analytical expression, nor are there efficient algorithms for computing it. We introduce a variational upper bound, a lower bound, and three approximations, all employing pair-wise divergences between mixture components. We compare the new bounds and approximations with Monte Carlo stochastic sampling and with bounds derived from entropy bounds. Finally, we evaluate the performance of the bounds and approximations through numerical simulations.
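As a point of reference for the Monte Carlo baseline mentioned above, the following is a minimal sketch of stochastic-sampling estimation of the mutual information I(X; Y) between mixture data X and the class label Y. It assumes each class-conditional density is a single multivariate Gaussian; the weights, means, covariances, and function names are illustrative choices, not the paper's experimental setup.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Illustrative two-class mixture: p(x) = sum_y w[y] * N(x; mu[y], cov[y]).
weights = np.array([0.6, 0.4])
means = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
covs = [np.eye(2), 1.5 * np.eye(2)]
components = [multivariate_normal(m, c) for m, c in zip(means, covs)]


def mc_mutual_information(n_samples=100_000):
    """Monte Carlo estimate of I(X; Y) in nats for the mixture above."""
    # Draw class labels, then draw x from the corresponding component.
    y = rng.choice(len(weights), size=n_samples, p=weights)
    x = np.empty((n_samples, means[0].size))
    for k, comp in enumerate(components):
        idx = np.flatnonzero(y == k)
        x[idx] = comp.rvs(size=idx.size, random_state=rng).reshape(idx.size, -1)

    # log p(x | y = k) for every sample under every component, shape (K, n).
    log_cond_all = np.stack([comp.logpdf(x) for comp in components])
    # log p(x | y) at the sampled labels.
    log_cond = log_cond_all[y, np.arange(n_samples)]
    # log p(x) = log sum_k w[k] p(x | y = k), via a stable log-sum-exp.
    log_marg = np.logaddexp.reduce(np.log(weights)[:, None] + log_cond_all, axis=0)
    # I(X; Y) = E[ log p(x | y) - log p(x) ].
    return np.mean(log_cond - log_marg)


print(f"MC estimate of I(X; Y): {mc_mutual_information():.4f} nats")
```

The estimate is bounded above by H(Y) (about 0.673 nats for the 0.6/0.4 prior used here), which provides a quick sanity check on the sampled value.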
Subject
Computer Vision and Pattern Recognition; Atomic and Molecular Physics, and Optics; Electronic, Optical and Magnetic Materials