Author:
Timothy G. Raben, Louis Lello, Erik Widen, Stephen D. H. Hsu
Abstract
In this paper we characterize the performance of linear models trained via widely used sparse machine learning algorithms. We build polygenic scores and examine performance as a function of training set size, genetic ancestral background, and training method. We show that predictor performance depends most strongly on the size of the training data, with smaller gains from algorithmic improvements. We find that LASSO generally performs as well as the best methods, judged by a variety of metrics. We also investigate the performance characteristics of predictors trained on one genetic ancestry group when applied to another. Using LASSO, we develop a novel method for projecting AUC and correlation as a function of data size (i.e., for new biobanks) and characterize the asymptotic limit of performance. Additionally, for LASSO (compressed sensing) we show that performance metrics and predictor sparsity are in agreement with theoretical predictions from the Donoho-Tanner phase transition. Specifically, a future predictor trained in the Taiwan Precision Medicine Initiative for asthma can achieve an AUC of $$0.63_{(0.02)}$$ and, for height, a correlation of $$0.648_{(0.009)}$$ for a Taiwanese population. These values exceed the measured values of $$0.61_{(0.01)}$$ and $$0.631_{(0.008)}$$, respectively, for UK Biobank-trained predictors applied to a European population.
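As a rough illustration of the kind of analysis summarized above, the sketch below trains a LASSO predictor (using scikit-learn's implementation, not the authors' own code) on simulated genotype data at several training-set sizes and reports the hold-out correlation and the number of SNPs retained. The simulation sizes, sparsity level, and regularization strength are arbitrary assumptions for demonstration only.

```python
# Illustrative sketch (not the authors' pipeline): fit a LASSO-based
# polygenic-score-style predictor on simulated genotypes at increasing
# training-set sizes and track hold-out correlation and predictor sparsity,
# mirroring the "performance vs. data size" analysis described in the abstract.
# All parameter values (n_snps, n_causal, alpha) are placeholder assumptions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_total, n_snps, n_causal = 12_000, 2_000, 100    # assumed simulation sizes

# Simulated genotypes (0/1/2 minor-allele counts) and a sparse additive trait.
X = rng.binomial(2, 0.3, size=(n_total, n_snps)).astype(float)
beta = np.zeros(n_snps)
beta[rng.choice(n_snps, n_causal, replace=False)] = rng.normal(size=n_causal)
signal = X @ beta
y = signal + rng.normal(scale=signal.std(), size=n_total)  # ~50% "heritability"

X_test, y_test = X[-2_000:], y[-2_000:]           # fixed hold-out set
for n_train in (1_000, 2_500, 5_000, 10_000):     # increasing training sizes
    model = Lasso(alpha=0.05, max_iter=10_000)    # penalty chosen ad hoc here
    model.fit(X[:n_train], y[:n_train])
    score = model.predict(X_test)                 # the polygenic score
    r, _ = pearsonr(score, y_test)
    print(f"n_train={n_train:>6}  corr={r:.3f}  "
          f"nonzero SNPs={np.count_nonzero(model.coef_)}")
```

In this toy setting, both the hold-out correlation and the number of selected SNPs grow with training-set size, which is the qualitative behavior the paper quantifies and extrapolates for real biobank data.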
Publisher
Springer Science and Business Media LLC
Cited by
2 articles.