An Empirical Evaluation of Feature Selection Stability and Classification Accuracy
Authors:
BÜYÜKKEÇECİ Mustafa, OKUR Mehmet Cudi
Abstract
The performance of inductive learners can be negatively affected by high-dimensional datasets. To address this issue, feature selection methods are used. Selecting relevant features and reducing data dimensionality is essential for building accurate machine learning models. Stability is an important criterion in feature selection. Stable feature selection algorithms maintain their feature preferences even when small variations exist in the training set. Studies have emphasized the importance of stable feature selection, particularly when the number of samples is small and the dimensionality is high. In this study, we evaluated the relationships among stability measures, as well as between feature selection stability and classification accuracy, using the Pearson product-moment correlation coefficient. We conducted an extensive series of experiments using five filter and two wrapper feature selection methods, three classifiers for subset and classification performance evaluation, and eight real-world datasets taken from two different data repositories. We measured the stability of the feature selection methods using a total of twelve stability metrics. Based on the results of the correlation analyses, we found no substantial evidence of a linear relationship between feature selection stability and classification accuracy. However, strong positive correlations were observed among several stability metrics.
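For illustration only, the sketch below shows one common way such an analysis can be set up; it is not the authors' exact protocol. Stability is approximated here as the average pairwise Jaccard similarity of the feature subsets selected across resampled training sets, and the Pearson product-moment correlation between stability and accuracy is computed with SciPy. The numeric values are hypothetical placeholders.

```python
# Minimal sketch, assuming Jaccard-based stability and hypothetical results;
# the study itself uses twelve stability metrics and real datasets.
from itertools import combinations
from scipy.stats import pearsonr

def jaccard_stability(subsets):
    """Average pairwise Jaccard similarity of selected feature subsets (sets of feature indices)."""
    pairs = list(combinations(subsets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Hypothetical per-configuration results: one stability score and one mean
# classification accuracy per (dataset, selection method) pair.
stability_scores = [0.71, 0.64, 0.88, 0.55, 0.79]
accuracy_scores  = [0.83, 0.86, 0.81, 0.84, 0.80]

# Pearson product-moment correlation between stability and accuracy.
r, p_value = pearsonr(stability_scores, accuracy_scores)
print(f"Pearson r = {r:.3f}, p = {p_value:.3f}")
```

A correlation coefficient near zero in such an analysis would indicate, as the abstract reports, a lack of evidence for a linear relationship between selection stability and classification accuracy.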
Publisher
Gazi University Journal of Science
Subject
Multidisciplinary, General Engineering