AFS-BM: Enhancing Model Performance through Adaptive Feature Selection with Binary Masking

Authors:

Mehmet Y. Turali1, Mehmet E. Lorasdagi1, Ali T. Koc1, Suleyman S. Kozat1

Affiliation:

1. Bilkent University

Abstract

We study the problem of feature selection in a general machine learning (ML) context, one of the most critical subjects in the field. Although many feature selection methods exist, they face challenges such as scalability, managing high-dimensional data, dealing with correlated features, adapting to variable feature importance, and integrating domain knowledge. To this end, we introduce "Adaptive Feature Selection with Binary Masking" (AFS-BM), which remedies these problems through joint optimization for simultaneous feature selection and model training. In particular, we combine joint optimization with binary masking to continuously adapt both the set of selected features and the model parameters during training. This approach yields significant improvements in model accuracy and a reduction in computational requirements. We provide an extensive set of experiments comparing AFS-BM with established feature selection methods on well-known datasets from real-life competitions. Our results show that AFS-BM achieves significantly higher accuracy at significantly lower computational cost, owing to its ability to dynamically adjust to the changing importance of features during training, which is an important contribution to the field. We openly share our code for the replicability of our results and to facilitate further research.
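The abstract describes alternating between model training and binary masking so that the active feature set adapts as training progresses. The paper's exact procedure is not given here, so the following is only a minimal sketch of that idea under simplifying assumptions: a linear model trained by gradient descent stands in for the ML model, and a feature's mask bit is tentatively switched off and kept off only if retraining without it does not raise the training loss beyond a tolerance. The function name `afs_bm_sketch` and all hyperparameters are illustrative, not the authors' implementation.

```python
import numpy as np

def afs_bm_sketch(X, y, n_rounds=5, lr=0.1, epochs=200, tol=1e-3, seed=0):
    """Hypothetical sketch of adaptive feature selection with a binary mask.

    Alternates between (1) training a linear regression model on the masked
    features and (2) switching off each active feature whose removal does not
    increase the training loss by more than `tol`. Returns a boolean mask
    over the columns of X. This is an illustration, not the AFS-BM paper's
    exact algorithm.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mask = np.ones(d)                        # all features active initially

    def train(m):
        # Gradient descent on the mean-squared error of the masked model.
        w = rng.normal(scale=0.01, size=d)
        Xm = X * m                           # zero out inactive features
        for _ in range(epochs):
            grad = Xm.T @ (Xm @ w - y) / n
            w -= lr * grad
        return w, np.mean((Xm @ w - y) ** 2)

    for _ in range(n_rounds):
        _, base_loss = train(mask)
        for j in np.flatnonzero(mask):       # iterate over active features
            trial = mask.copy()
            trial[j] = 0.0                   # tentatively drop feature j
            _, trial_loss = train(trial)
            if trial_loss - base_loss <= tol:
                mask = trial                 # removal was harmless: keep it off
                base_loss = trial_loss
    return mask.astype(bool)
```

On synthetic data where only a couple of columns drive the target, this sketch keeps the informative features active and masks out the noise columns, which matches the adaptive behavior the abstract attributes to AFS-BM.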

Publisher

Research Square Platform LLC

