Abstract
The widespread application of high-throughput sequencing technology has produced a large number of publicly available gene expression datasets. However, because gene expression datasets typically have small sample sizes, high dimensionality, and high noise, analyzing them with biostatistics and machine learning methods is challenging; for example, important biomarkers often show low reproducibility across studies. Meta-analysis is an effective way to address these problems, but current methods have limitations. In this paper, we propose meta-analysis methods based on three nonconvex regularization penalties: L1/2 regularization (meta-Half), the Minimax Concave Penalty (meta-MCP), and the Smoothly Clipped Absolute Deviation penalty (meta-SCAD). These nonconvex regularization methods are effective variable-selection approaches developed in recent years. Through a hierarchical decomposition of the coefficients, our methods not only retain the flexibility of variable selection and improve the efficiency of selecting important biomarkers, but also summarize and synthesize scientific evidence from multiple studies so that relationships among different datasets are taken into account. We provide efficient algorithms and theoretical properties for our methods. Furthermore, we apply them to simulated data and to three publicly available lung cancer gene expression datasets, and compare their performance with state-of-the-art methods. Our methods perform well in the simulation studies, and the results on the three lung cancer datasets are clinically meaningful. The methods can also be extended to other areas where datasets are heterogeneous.
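For reference, the three penalties named in the abstract have standard forms in the literature; the LaTeX sketch below records them as commonly stated (the L1/2 penalty, MCP with γ > 1, and SCAD specified through its derivative with a > 2), together with one common way of writing a hierarchical coefficient decomposition. The notation, and in particular the decomposition β_{jm} = γ_j δ_{jm}, is an illustrative assumption and may differ from the paper's exact formulation.

% Standard penalty forms (illustrative notation; lambda > 0 is the tuning parameter).
\[
P^{1/2}_{\lambda}(\beta) = \lambda \lvert\beta\rvert^{1/2},
\qquad
P^{\mathrm{MCP}}_{\lambda,\gamma}(\beta) =
\begin{cases}
\lambda\lvert\beta\rvert - \dfrac{\beta^{2}}{2\gamma}, & \lvert\beta\rvert \le \gamma\lambda,\\[4pt]
\tfrac{1}{2}\gamma\lambda^{2}, & \lvert\beta\rvert > \gamma\lambda,
\end{cases}
\qquad \gamma > 1,
\]
\[
\frac{d}{dt}\,P^{\mathrm{SCAD}}_{\lambda,a}(t)
= \lambda\left\{ \mathbf{1}(t \le \lambda) + \frac{(a\lambda - t)_{+}}{(a-1)\lambda}\,\mathbf{1}(t > \lambda) \right\},
\qquad t > 0,\; a > 2 .
\]
% One common hierarchical decomposition (assumed here for illustration):
% the coefficient of gene j in study m factors into a shared across-study
% effect gamma_j and a study-specific effect delta_jm, and the penalty is
% applied at both levels so that a gene can be selected jointly or per study.
\[
\beta_{jm} = \gamma_{j}\,\delta_{jm}, \qquad j = 1,\dots,p,\; m = 1,\dots,M .
\]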
Publisher
Springer Science and Business Media LLC
Cited by
7 articles.