Authors:
Eliana Lima, Peers Davies, Jasmeet Kaler, Fiona Lovatt, Martin Green
Abstract
Variable selection in inferential modelling is problematic when the number of variables is large relative to the number of data points, especially when multicollinearity is present. A variety of techniques have been described to identify ‘important’ subsets of variables from within a large parameter space, but these may produce different results, which creates difficulties with inference and reproducibility. Our aim was to evaluate the extent to which variable selection would change depending on statistical approach, and whether triangulation across methods could enhance data interpretation. A real dataset containing 408 subjects, 337 explanatory variables and a normally distributed outcome was used. We show that with model hyperparameters optimised to minimise cross-validation error, ten methods of automated variable selection produced markedly different results; different variables were selected and model sparsity varied greatly. Comparison between multiple methods provided valuable additional insights. Two variables that were consistently selected and stable across all methods accounted for the majority of the explainable variability; these were the most plausible important candidate variables. Further variables of importance were identified by evaluating selection stability across all methods. In conclusion, triangulation of results across methods, including use of covariate stability, can greatly enhance data interpretation and confidence in variable selection.
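The triangulation idea the abstract describes can be sketched in a few lines: fit two or more regularised variable-selection methods, each with its penalty tuned by cross-validation, and treat the variables selected by every method as the most plausible candidates. The sketch below uses scikit-learn's lasso and elastic net on synthetic data; the data, method choice, and variable names are illustrative assumptions, not the paper's actual dataset or its ten methods.

```python
# Hedged sketch: triangulating variable selection across two
# CV-tuned regularisation methods, in the spirit of the abstract.
# Synthetic data stand in for the real dataset described in the paper.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV, ElasticNetCV

# Many predictors relative to observations, only a few truly informative,
# mimicking the p-large-relative-to-n setting the abstract discusses.
X, y = make_regression(n_samples=100, n_features=50, n_informative=3,
                       noise=5.0, random_state=0)

selected = {}
for name, model in [("lasso", LassoCV(cv=5, random_state=0)),
                    ("enet", ElasticNetCV(cv=5, l1_ratio=0.5, random_state=0))]:
    model.fit(X, y)  # penalty strength chosen to minimise CV error
    # A variable counts as "selected" if its coefficient is non-zero.
    selected[name] = set(np.flatnonzero(model.coef_))

# Triangulation: variables chosen by every method are the strongest
# candidates; variables picked by only one method warrant scepticism.
consensus = set.intersection(*selected.values())
print("selected by all methods:", sorted(consensus))
```

In the paper's setting this comparison is run across ten methods rather than two, and selection stability (how often a variable is chosen under resampling) adds a further axis of evidence.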
Publisher
Springer Science and Business Media LLC
References: 36 articles.
Cited by: 17 articles.