Abstract
We study the problem of feature selection in the general machine learning (ML) context, one of the most critical subjects in the field. Although many feature selection methods exist, they face challenges such as scalability, managing high-dimensional data, dealing with correlated features, adapting to variable feature importance, and integrating domain knowledge. To this end, we introduce "Adaptive Feature Selection with Binary Masking" (AFS-BM), which remedies these problems. AFS-BM achieves this through joint optimization for simultaneous feature selection and model training. In particular, we combine joint optimization with binary masking to continuously adapt the set of features and the model parameters during the training process. This approach leads to significant improvements in model accuracy and a reduction in computational requirements. We provide an extensive set of experiments in which we compare AFS-BM with established feature selection methods on well-known datasets from real-life competitions. Our results show that AFS-BM achieves significant improvements in accuracy while requiring significantly less computation. This is due to AFS-BM's ability to dynamically adjust to the changing importance of features during the training process, which is an important contribution to the field. We openly share our code for the replicability of our results and to facilitate further research.
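The abstract describes joint optimization with binary masking only at a high level. The following is a minimal, hypothetical sketch of how such a scheme could be structured; the callables `train_model` and `loss_fn`, the stochastic mask-update rule, and all hyperparameter names are illustrative assumptions, not the authors' actual algorithm.

```python
# Hypothetical sketch: alternate between fitting a model on the currently
# unmasked features and stochastically updating a binary feature mask,
# keeping a candidate mask only when validation loss improves.
# (Assumed interface: train_model(X, y) -> fitted model with .predict;
#  loss_fn(y_true, y_pred) -> scalar. Neither comes from the paper.)
import numpy as np

def afs_bm_sketch(X, y, X_val, y_val, train_model, loss_fn,
                  n_rounds=20, flip_prob=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    n_features = X.shape[1]
    mask = np.ones(n_features, dtype=bool)      # start with all features active
    model = train_model(X[:, mask], y)
    best_loss = loss_fn(y_val, model.predict(X_val[:, mask]))
    for _ in range(n_rounds):
        # Propose a mask that drops a random subset of active features.
        candidate = mask.copy()
        candidate[rng.random(n_features) < flip_prob] = False
        if not candidate.any():
            continue                             # never mask out every feature
        cand_model = train_model(X[:, candidate], y)
        cand_loss = loss_fn(y_val, cand_model.predict(X_val[:, candidate]))
        if cand_loss < best_loss:                # keep the mask only if it helps
            mask, model, best_loss = candidate, cand_model, cand_loss
    return mask, model
```

Under these assumptions, the mask and the model parameters adapt jointly during training, which is the mechanism the abstract credits for both the accuracy gains and the reduced computation (discarded features are never fed to later training rounds).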
Publisher: Research Square Platform LLC