Author:
Bjoern H. Menze, B. Michael Kelm, Ralf Masuch, Uwe Himmelreich, Peter Bachert, Wolfgang Petrich, Fred A. Hamprecht
Abstract
Background
Regularized regression methods such as principal component regression or partial least squares regression perform well in learning tasks on high-dimensional spectral data, but cannot explicitly eliminate irrelevant features. The random forest classifier with its associated Gini feature importance, on the other hand, allows for an explicit feature elimination, but may not be optimally adapted to spectral data, due to the topology of its constituent classification trees, which are based on orthogonal splits in feature space.
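As an illustration, the Gini importance referred to above can be read off a fitted random forest. The following is a minimal sketch assuming scikit-learn's RandomForestClassifier, with synthetic arrays standing in for real spectra; shapes and parameters are illustrative, not the paper's settings.

```python
# Minimal sketch (assumed scikit-learn API, synthetic data in place of spectra):
# fit a random forest and read off the Gini feature importance, i.e. the mean
# impurity decrease each spectral channel contributes across all trees.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))   # 100 spectra with 500 channels (synthetic)
y = rng.integers(0, 2, size=100)  # binary class labels (synthetic)

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X, y)

gini_importance = forest.feature_importances_  # one relevance score per channel
ranking = np.argsort(gini_importance)[::-1]    # most relevant channels first
```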
Results
We propose to combine the best of both approaches, and evaluated the joint use of a feature selection based on a recursive feature elimination using the Gini importance of random forests together with regularized classification methods on spectral data sets from medical diagnostics, chemotaxonomy, biomedical analytics, food science, and synthetically modified spectral data. Here, a feature selection using the Gini feature importance combined with a regularized classification by discriminant partial least squares regression performed as well as or better than a filtering according to different univariate statistical tests, or a backward feature elimination based on regression coefficients. It also outperformed the direct application of the random forest classifier and the direct application of the regularized classifiers on the full set of features.
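The combined approach described above can be sketched as follows: features are eliminated recursively according to their random-forest Gini importance, and a discriminant partial least squares (PLS-DA) classifier is then fitted on the surviving features. This is a hedged sketch, not the authors' implementation; the helper gini_rfe, the drop fraction, the number of PLS components, and all data are illustrative assumptions.

```python
# Hedged sketch of the pipeline: recursive feature elimination ranked by
# random-forest Gini importance, followed by discriminant partial least
# squares (PLS-DA). gini_rfe and all parameters are illustrative assumptions,
# not the settings used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_decomposition import PLSRegression

def gini_rfe(X, y, n_keep, drop_frac=0.2, seed=0):
    """Iteratively drop the fraction of features with the lowest Gini importance."""
    kept = np.arange(X.shape[1])
    while kept.size > n_keep:
        forest = RandomForestClassifier(n_estimators=200, random_state=seed)
        forest.fit(X[:, kept], y)
        order = np.argsort(forest.feature_importances_)  # least important first
        n_drop = max(1, min(int(drop_frac * kept.size), kept.size - n_keep))
        kept = np.sort(kept[order[n_drop:]])             # keep the best channels
    return kept

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))       # synthetic stand-in for spectra
y = rng.integers(0, 2, size=100)

selected = gini_rfe(X, y, n_keep=50)

# PLS-DA: regress a -1/+1 class coding on the selected channels and
# threshold the continuous prediction at zero.
pls = PLSRegression(n_components=5)
pls.fit(X[:, selected], 2 * y - 1)
y_pred = (pls.predict(X[:, selected]).ravel() > 0).astype(int)
```

Dropping a fixed fraction of the least important features per iteration, rather than one feature at a time, keeps the number of forest refits manageable on high-dimensional spectra.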
Conclusion
The Gini importance of the random forest provided superior means for measuring feature relevance on spectral data, but, on an optimal subset of features, the regularized classifiers may be preferable to the random forest classifier, in spite of their limitation to modeling linear dependencies only. A feature selection based on Gini importance may therefore precede a regularized linear classification to identify this optimal subset of features, earning a double benefit of both dimensionality reduction and the elimination of noise from the classification task.
Publisher
Springer Science and Business Media LLC
Subject
Applied Mathematics, Computer Science Applications, Molecular Biology, Biochemistry, Structural Biology
Cited by: 863 articles.